Given the following text description, write Python code to implement the functionality described below step by step Description: Self-Driving Car Engineer Nanodegree Deep Learning Project Step1: Step 1 Step2: Include an exploratory visualization of the dataset Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include Step3: Select one train image Step4: Step 2 Step5: Changing training data Step6: Setup TensorFlow The EPOCH and BATCH_SIZE values affect the training speed and model accuracy. You do not need to modify this section. Step7: Model Architecture Using LeNet-5 based architecture Implement the LeNet-5 neural network architecture. Input The LeNet architecture accepts a 32x32xC image as input, where C is the number of color channels. Since German sign images are 32x32 RGB, C is 3 in this case. Architecture Layer 1 Step8: Features and Labels Train LeNet to classify the German signs. x is a placeholder for a batch of input images. y is a placeholder for a batch of output labels. You do not need to modify this section. Step9: Training Pipeline Create a training pipeline that uses the model to classify German sign images. Step10: Model Evaluation Evaluate how well the loss and accuracy of the model for a given dataset. You do not need to modify this section. Step11: Train, Validate and Test the Model A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation sets imply underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting. Train the Model Run the training data through the training pipeline to train the model. Before each epoch, shuffle the training set. After each epoch, measure the loss and accuracy of the validation set. Save the model after training. You do not need to modify this section. Step12: Plot data Step13: Evaluate the Model Once you are completely satisfied with your model, evaluate the performance of the model on the test set. Be sure to only do this once! If you were to measure the performance of your trained model on the test set, then improve your model, and then measure the performance of your model on the test set again, that would invalidate your test results. You wouldn't get a true measure of how well your model would perform against real data. You do not need to modify this section. Step14: Step 3 Step15: Predict the Sign Type for Each Image Step16: Analyze Performance Step17: Output Top 5 Softmax Probabilities For Each Image Found on the Web For each of the new images, print out the model's softmax probabilities to show the certainty of the model's predictions (limit the output to the top 5 probabilities for each image). tf.nn.top_k could prove helpful here. The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image. tf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the correspoding class ids. Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. tf.nn.top_k is used to choose the three classes with the highest probability Step18: Project Writeup Once you have completed the code implementation, document your results in a project writeup using this template as a guide. 
The writeup can be in a markdown or pdf file. Note: Once you have completed all of the code implementations and successfully answered each question, you can finalize your work by exporting the iPython Notebook as an HTML document (File -> Download as -> HTML (.html)) and including the finished document along with this notebook in your submission.
Python Code: # Load pickled data import pickle # TODO: Fill this in based on where you saved the training and testing data training_file = "traffic-signs/train.p" validation_file= "traffic-signs/valid.p" testing_file = "traffic-signs/test.p" with open(training_file, mode='rb') as f: train = pickle.load(f) with open(validation_file, mode='rb') as f: valid = pickle.load(f) with open(testing_file, mode='rb') as f: test = pickle.load(f) X_train, y_train = train['features'], train['labels'] X_validation, y_validation = valid['features'], valid['labels'] X_test, y_test = test['features'], test['labels'] Explanation: Self-Driving Car Engineer Nanodegree Deep Learning Project: Build a Traffic Sign Recognition Classifier In this notebook, a template is provided for you to implement your functionality in stages, which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission if necessary. Note: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to \n", "File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission. In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a write up template that can be used to guide the writing process. Completing the code template and writeup template will cover all of the rubric points for this project. The rubric contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the "stand out suggestions", you can include the code in this Ipython notebook and also discuss the results in the writeup file. Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode. Step 0: Load The Data End of explanation # Basic Summary and Data Set info import numpy as np import matplotlib.pyplot as plt import csv # TODO: Number of training / validation / testing examples n_train = X_train.shape[0] n_validation = X_validation.shape[0] n_test = X_test.shape[0] # TODO: What's the shape of an traffic sign image? image_shape = X_train[0].shape image_shape_v = X_validation[0].shape image_shape_t = X_test[0].shape # TODO: How many unique classes/labels there are in the dataset. 
n_classes = np.unique(y_train).shape[0] n_classes_v = np.unique(y_validation).shape[0] n_classes_t = np.unique(y_test).shape[0] class_list = [] with open('signnames.csv') as csvfile: reader = csv.DictReader(csvfile) for row in reader: class_list.append(row['SignName']) n_classes_csv = len(class_list) print("Number of training examples =", n_train) print("Number of validation examples =", n_validation) print("Number of testing examples =", n_test) print("Image Shape:") print(" train dataset = ", image_shape) print(" validation dataset = ", image_shape_v) print(" test dataset = ", image_shape_t) print("Number of classes:") print(" distinct labels in train dataset = ", n_classes) print(" distinct labels in validation dataset = ", n_classes_v) print(" distinct labels in test dataset = ", n_classes_t) print(" labels in csv = ", n_classes_csv) Explanation: Step 1: Dataset Summary & Exploration The pickled data is a dictionary with 4 key/value pairs: 'features' is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels). 'labels' is a 1D array containing the label/class id of the traffic sign. The file signnames.csv contains id -> name mappings for each id. 'sizes' is a list containing tuples, (width, height) representing the original width and height the image. 'coords' is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the pandas shape method might be useful for calculating some of the summary results. Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas End of explanation print(" ") print("Training samples distribution per class") n_samples=[] for i in range(0, n_classes): n_samples.append(X_train[y_train == i].shape[0]) class_list = np.asarray(list(zip(class_list, n_samples))) plt.figure(figsize=(10, 2)) plt.bar(range(0, n_classes), n_samples,color='blue',edgecolor='black') plt.title("Training samples per class") plt.xlabel("Id") plt.ylabel("Number of samples") plt.show() print(" ") print("Validation samples distribution per class") n_samples=[] for i in range(0, n_classes_v): n_samples.append(X_validation[y_validation == i].shape[0]) plt.figure(figsize=(10, 2)) plt.bar(range(0, n_classes), n_samples,color='blue',edgecolor='black') plt.title("Validation samples per class") plt.xlabel("Id") plt.ylabel("Number of samples") plt.show() print(" ") print("Testing samples distribution per class") n_samples=[] for i in range(0, n_classes_t): n_samples.append(X_test[y_test == i].shape[0]) plt.figure(figsize=(10, 2)) plt.bar(range(0, n_classes), n_samples,color='blue',edgecolor='black') plt.title("Testing samples per class") plt.xlabel("Id") plt.ylabel("Number of samples") plt.show() Explanation: Include an exploratory visualization of the dataset Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc. The Matplotlib examples and gallery pages are a great resource for doing visualizations in Python. NOTE: It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections. 
It can be interesting to look at the distribution of classes in the training, validation and test set. Is the distribution the same? Are there more examples of some classes than others? End of explanation ### German sign images are already 32x32 import cv2 import random # Visualizations will be shown in the notebook. %matplotlib inline n_classes_csv = len(class_list) n_samples=[] for i in range(0, n_classes): n_samples.append(X_train[y_train == i].shape[0]) class_list = np.asarray(list(zip(class_list, n_samples))) index = random.randint(0, len(X_train)) image = X_train[index].squeeze() plt.figure(figsize=(1,1)) plt.imshow(image) print("Classifier ID = ", y_train[index], ", Description = ", class_list[y_train[index],0]) Explanation: Select one train image: End of explanation def perform_grayscale(image): return cv2.cvtColor(image, cv2.COLOR_RGB2GRAY) def perform_hist_equalization(grayscale_image): return cv2.equalizeHist(grayscale_image) def perform_image_normalization(equalized_image): return equalized_image/255.-.5 def pre_process_image(image): image = perform_grayscale(image) image = perform_hist_equalization(image) image = perform_image_normalization(image) return np.expand_dims(image,axis=3) original_image = X_train[index].squeeze() grayscale_image = perform_grayscale(original_image) equalized_image = perform_hist_equalization(grayscale_image) normalized_image = perform_image_normalization(equalized_image) image_shape = np.shape(normalized_image) print("Original image:") plt.figure(figsize=(1,1)) plt.imshow(original_image) print(y_train[index]) plt.show() print("Grayscale image data shape =", image_shape) print("Preprocess Image techiniques applied") print("Converted to grayscale") plt.figure(figsize=(1,1)) plt.imshow(grayscale_image, cmap='gray') plt.show() print("Converted to grayscale + histogram equalization:") plt.figure(figsize=(1,1)) plt.imshow(equalized_image, cmap='gray') plt.show() print("Converted to grayscale + histogram equalization + normalization:") plt.figure(figsize=(1,1)) plt.imshow(normalized_image, cmap='gray') plt.show() new_image = pre_process_image(image) new_image_shape = np.shape(new_image) print("New Image data shape =", new_image_shape) Explanation: Step 2: Design and Test a Model Architecture Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the German Traffic Sign Dataset. The LeNet-5 implementation shown in the classroom at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play! With the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission. There are various aspects to consider when thinking about this problem: Neural network architecture (is the network over or underfitting?) Play around preprocessing techniques (normalization, rgb to grayscale, etc) Number of examples per label (some have more than others). Generate fake data. Here is an example of a published baseline model on this problem. It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these. Pre-process the Data Set (normalization, grayscale, etc.) 
Minimally, the image data should be normalized so that the data has mean zero and equal variance. For image data, (pixel - 128)/ 128 is a quick way to approximately normalize the data and can be used in this project. Other pre-processing steps are optional. You can try different techniques to see if it improves performance. Use the code cell (or multiple code cells, if necessary) to implement the first step of your project. End of explanation import cv2 img_resize = 32 N_classes = 43 image_shape = (img_resize,img_resize) img_size_flat = img_resize*img_resize image_S_train = np.array([pre_process_image(X_train[i]) for i in range(len(X_train))], dtype = np.float32) image_S_valid = np.array([pre_process_image(X_validation[i]) for i in range(len(X_validation))], dtype = np.float32) image_S_test = np.array([pre_process_image(X_test[i]) for i in range(len(X_test))], dtype = np.float32) ### Shuffle the training data. from sklearn.utils import shuffle image_S_train, y_train = shuffle(image_S_train, y_train) Explanation: Changing training data End of explanation import tensorflow as tf EPOCHS = 80 BATCH_SIZE = 128 Explanation: Setup TensorFlow The EPOCH and BATCH_SIZE values affect the training speed and model accuracy. You do not need to modify this section. End of explanation from tensorflow.contrib.layers import flatten n_channels = 1 def dropout_layer(layer, keep_prob): layer_drop = tf.nn.dropout(layer, keep_prob) return layer_drop def LeNet(x): # Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer keep_prob = 0.75 mu = 0 sigma = 0.1 # SOLUTION: Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x12. 3 inputs colour channels conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, n_channels, 12), mean = mu, stddev = sigma)) conv1_b = tf.Variable(tf.zeros(12)) conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b # SOLUTION: Activation. conv1 = tf.nn.relu(conv1) # SOLUTION: Pooling. Input = 28x28x12. Output = 14x14x12. conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID') #layer_conv1_drop = dropout_layer(conv1, 0.5) # SOLUTION: Layer 2: Convolutional. Output = 10x10x32. conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 12, 32), mean = mu, stddev = sigma)) conv2_b = tf.Variable(tf.zeros(32)) conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b # SOLUTION: Activation. conv2 = tf.nn.relu(conv2) # SOLUTION: Pooling. Input = 10x10x32. Output = 5x5x32. conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID') # TODO: Layer 2-b: Convolutional. Input = 5x5x32. Output = 3x3x64. conv3_W = tf.Variable(tf.truncated_normal(shape=(3, 3, 32, 64), mean = mu, stddev = sigma)) conv3_b = tf.Variable(tf.zeros(64)) conv3 = tf.nn.conv2d(conv2, conv3_W, strides=[1, 1, 1, 1], padding='VALID') + conv3_b # TODO: Activation. conv3 = tf.nn.relu(conv3) # SOLUTION: Flatten. Input = 3x3x64. Output = 800. fc0 = flatten(conv2) fc0 = tf.nn.dropout(fc0, keep_prob) # SOLUTION: Layer 3: Fully Connected. Input = 800. Output = 256. fc1_W = tf.Variable(tf.truncated_normal(shape=(800, 256), mean = mu, stddev = sigma)) fc1_b = tf.Variable(tf.zeros(256)) fc1 = tf.matmul(fc0, fc1_W) + fc1_b # SOLUTION: Activation. fc1 = tf.nn.relu(fc1) fc1 = tf.nn.dropout(fc1, keep_prob) # SOLUTION: Layer 4: Fully Connected. Input = 120. Output = 84. 
fc2_W = tf.Variable(tf.truncated_normal(shape=(256, 84), mean = mu, stddev = sigma)) fc2_b = tf.Variable(tf.zeros(84)) fc2 = tf.matmul(fc1, fc2_W) + fc2_b # SOLUTION: Activation. fc2 = tf.nn.relu(fc2) # SOLUTION: Layer 5: Fully Connected. Input = 84. Output = 43. fc3_W = tf.Variable(tf.truncated_normal(shape=(84, 43), mean = mu, stddev = sigma)) fc3_b = tf.Variable(tf.zeros(43)) logits = tf.matmul(fc2, fc3_W) + fc3_b return logits, conv1, conv2, conv3 Explanation: Model Architecture Using LeNet-5 based architecture Implement the LeNet-5 neural network architecture. Input The LeNet architecture accepts a 32x32xC image as input, where C is the number of color channels. Since German sign images are 32x32 RGB, C is 3 in this case. Architecture Layer 1: Convolutional. The output shape should be 28x28x6. Activation. Your choice of activation function. Pooling. The output shape should be 14x14x6. Layer 2: Convolutional. The output shape should be 10x10x16. Activation. Your choice of activation function. Pooling. The output shape should be 5x5x16. Flatten. Flatten the output shape of the final pooling layer such that it's 1D instead of 3D. The easiest way to do is by using tf.contrib.layers.flatten, which is already imported for you. Layer 3: Fully Connected. This should have 120 outputs. Activation. Your choice of activation function. Layer 4: Fully Connected. This should have 84 outputs. Activation. Your choice of activation function. Layer 5: Fully Connected (Logits). This should have 43 outputs. Output Return the result of the 2nd fully connected layer. End of explanation x = tf.placeholder(tf.float32, (None, 32, 32, n_channels)) y = tf.placeholder(tf.int32, (None)) one_hot_y = tf.one_hot(y, 43) keep_prob = tf.placeholder(tf.float32) Explanation: Features and Labels Train LeNet to classify the German signs. x is a placeholder for a batch of input images. y is a placeholder for a batch of output labels. You do not need to modify this section. End of explanation rate = 0.0005 logits, conv1, conv2, conv3 = LeNet(x) cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits) loss_operation = tf.reduce_mean(cross_entropy) optimizer = tf.train.AdamOptimizer(learning_rate = rate) training_operation = optimizer.minimize(loss_operation) Explanation: Training Pipeline Create a training pipeline that uses the model to classify German sign images. End of explanation correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1)) accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) saver = tf.train.Saver() def evaluate(X_data, y_data): num_examples = len(X_data) total_accuracy = 0.0 sess = tf.get_default_session() for offset in range(0, num_examples, BATCH_SIZE): batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE] accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: 1.0}) total_accuracy += (accuracy * len(batch_x)) return total_accuracy / num_examples Explanation: Model Evaluation Evaluate how well the loss and accuracy of the model for a given dataset. You do not need to modify this section. 
End of explanation with tf.Session() as sess: sess.run(tf.global_variables_initializer()) num_examples = len(image_S_train) print("Training...") print() val_accu_list = [] batch_acc_list = [] for i in range(EPOCHS): # X_train, y_train = shuffle(X_train, y_train) for offset in range(0, num_examples, BATCH_SIZE): end = offset + BATCH_SIZE batch_x, batch_y = image_S_train[offset:end], y_train[offset:end] sess.run(training_operation, feed_dict={x: batch_x, y: batch_y}) training_accuracy = evaluate(image_S_train, y_train) validation_accuracy = evaluate(image_S_valid, y_validation) batch_accuracy = evaluate(batch_x, batch_y) val_accu_list.append(validation_accuracy) batch_acc_list.append(batch_accuracy) print("EPOCH {} ...".format(i+1)) print("Training Accuracy = {:.3f}".format(training_accuracy)) print("Validation Accuracy = {:.3f}".format(validation_accuracy)) print() saver.save(sess, './traffic_classifier_data') print("Model saved") Explanation: Train, Validate and Test the Model A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation sets imply underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting. Train the Model Run the training data through the training pipeline to train the model. Before each epoch, shuffle the training set. After each epoch, measure the loss and accuracy of the validation set. Save the model after training. You do not need to modify this section. End of explanation plt.plot(batch_acc_list, label="Train Accuracy") plt.plot(val_accu_list, label="Validation Accuracy") plt.ylim(.4,1.1) plt.xlim(0,EPOCHS) Explanation: Plot data End of explanation with tf.Session() as sess: saver.restore(sess, tf.train.latest_checkpoint('.')) test_accuracy = evaluate(image_S_test, y_test) print("Test Accuracy = {:.3f}".format(test_accuracy)) Explanation: Evaluate the Model Once you are completely satisfied with your model, evaluate the performance of the model on the test set. Be sure to only do this once! If you were to measure the performance of your trained model on the test set, then improve your model, and then measure the performance of your model on the test set again, that would invalidate your test results. You wouldn't get a true measure of how well your model would perform against real data. You do not need to modify this section. End of explanation ### Load the images and plot them here. ### Feel free to use as many code cells as needed. import os import cv2 import matplotlib.pyplot as plt new_images_original = [] test_image_labels = list() test_image_labels.append(27) test_image_labels.append(25) test_image_labels.append(14) test_image_labels.append(33) test_image_labels.append(13) path = "new_images/" files = sorted(os.listdir(path)) print("Original images:") i = 0 for file in files: print(path+file) image = cv2.imread(path+file) image = image[...,::-1] # Convert from BGR <=> RGB resized_image = cv2.resize(image,(32,32)) new_images_original.append(resized_image) label = test_image_labels[i] desc = class_list[[label],0] print("Label = ", label, ". 
Desc = ", desc) i += 1 plt.figure(figsize=(1, 1)) plt.imshow(image) plt.show() print(test_image_labels) test_images = [] print("Preprocessed images:") for image in new_images_original: preprocessed_image = pre_process_image(image) test_images.append(preprocessed_image) plt.figure(figsize=(1, 1)) plt.imshow(preprocessed_image[:,:,0], cmap='gray') plt.show() Explanation: Step 3: Test a Model on New Images To give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type. You may find signnames.csv useful as it contains mappings from the class id (integer) to the actual sign name. Load and Output the Images End of explanation ### Run the predictions here and use the model to output the prediction for each image. ### Make sure to pre-process the images with the same pre-processing pipeline used earlier. ### Feel free to use as many code cells as needed. with tf.Session() as sess: saver.restore(sess, './traffic_classifier_data') top5_prob = sess.run(tf.nn.top_k(tf.nn.softmax(logits), k=5, sorted=True), feed_dict = {x: test_images, keep_prob:1}) # predicted_logits = sess.run(logits, feed_dict={x:test_images, keep_prob:1}) # predicts = sess.run(tf.nn.top_k(top5_prob, k=5, sorted=True)) predicted_labels = np.argmax(top5_prob, axis=1) # predictions_labels = np.argmax(predictions, axis=1) i=0 for image in test_images: plt.figure(figsize=(1, 1)) print("Index=", top5_prob.indices[i, 0]) plt.xlabel(class_list[top5_prob.indices[i, 0],0]) plt.imshow(image[:,:,0], cmap='gray') plt.show() i += 1 Explanation: Predict the Sign Type for Each Image End of explanation ### Calculate the accuracy for these 5 new images. ### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images. with tf.Session() as sess: saver.restore(sess, tf.train.latest_checkpoint('.')) test_accuracy = evaluate(test_images, test_image_labels) print("Test Accuracy = {:.2f}".format(test_accuracy)) Explanation: Analyze Performance End of explanation ### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web. ### Feel free to use as many code cells as needed. test_images = np.asarray(test_images) print(test_images.shape) plt.figure(figsize=(16, 21)) for i in range(5): plt.subplot(12, 2, 2*i+1) plt.imshow(test_images[i][:,:,0], cmap="gray") plt.axis('off') plt.title(i) plt.subplot(12, 2, 2*i+2) plt.axis([0, 1., 0, 6]) plt.barh(np.arange(1, 6, 1), (np.absolute(top5_prob.values[i, :]/sum(np.absolute(top5_prob.values[i, :]))))) labs=[class_list[j][0] for j in top5_prob.indices[i, :]] plt.yticks(np.arange(1, 6, 1), labs) plt.show() Explanation: Output Top 5 Softmax Probabilities For Each Image Found on the Web For each of the new images, print out the model's softmax probabilities to show the certainty of the model's predictions (limit the output to the top 5 probabilities for each image). tf.nn.top_k could prove helpful here. The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image. tf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the correspoding class ids. Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. 
tf.nn.top_k is used to choose the three classes with the highest probability: ``` (5, 6) array a = np.array([[ 0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497, 0.12789202], [ 0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401, 0.15899337], [ 0.26076848, 0.23664738, 0.08020603, 0.07001922, 0.1134371 , 0.23892179], [ 0.11943333, 0.29198961, 0.02605103, 0.26234032, 0.1351348 , 0.16505091], [ 0.09561176, 0.34396535, 0.0643941 , 0.16240774, 0.24206137, 0.09155967]]) ``` Running it through sess.run(tf.nn.top_k(tf.constant(a), k=3)) produces: TopKV2(values=array([[ 0.34763842, 0.24879643, 0.12789202], [ 0.28086119, 0.27569815, 0.18063401], [ 0.26076848, 0.23892179, 0.23664738], [ 0.29198961, 0.26234032, 0.16505091], [ 0.34396535, 0.24206137, 0.16240774]]), indices=array([[3, 0, 5], [0, 1, 4], [0, 5, 1], [1, 3, 5], [1, 4, 3]], dtype=int32)) Looking just at the first row we get [ 0.34763842, 0.24879643, 0.12789202], you can confirm these are the 3 largest probabilities in a. You'll also notice [3, 0, 5] are the corresponding indices. End of explanation ### Visualize your network's feature maps here. ### Feel free to use as many code cells as needed. # image_input: the test image being fed into the network to produce the feature maps # tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer # activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output # plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1): # Here make sure to preprocess your image_input in a way your network expects # with size, normalization, ect if needed # image_input = # Note: x should be the same name as your network's tensorflow data placeholder variable # If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function activation = tf_activation.eval(session=sess,feed_dict={x : image_input}) featuremaps = activation.shape[3] plt.figure(plt_num, figsize=(15,15)) for featuremap in range(featuremaps): plt.subplot(8,8, featuremap+1) # sets the number of feature maps to show on each row and column plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number if activation_min != -1 & activation_max != -1: plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray") elif activation_max != -1: plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray") elif activation_min !=-1: plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray") else: plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray") with tf.Session() as sess: saver.restore(sess, './traffic_classifier_data') print("Convolution #1") print(test_images[0].shape) print(test_images.shape) outputFeatureMap(test_images,conv1) with tf.Session() as sess: saver.restore(sess, './traffic_classifier_data') print("Convolution #2") outputFeatureMap(test_images,conv2) with tf.Session() as sess: saver.restore(sess, './traffic_classifier_data') print("Convolution #3") outputFeatureMap(test_images,conv3) Explanation: Project Writeup Once you have completed the 
code implementation, document your results in a project writeup using this template as a guide. The writeup can be in a markdown or pdf file. Note: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to \n", "File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission. Step 4 (Optional): Visualize the Neural Network's State with Test Images This Section is not required to complete but acts as an additional excersise for understaning the output of a neural network's weights. While neural networks can be a great learning device they are often referred to as a black box. We can understand what the weights of a neural network look like better by plotting their feature maps. After successfully training your neural network you can see what it's feature maps look like by plotting the output of the network's weight layers in response to a test stimuli image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol. Provided for you below is the function code that allows you to get the visualization output of any tensorflow weight layer you want. The inputs to the function should be a stimuli image, one used during training or a new one you provided, and then the tensorflow variable name that represents the layer's state during the training process, for instance if you wanted to see what the LeNet lab's feature maps looked like for it's second convolutional layer you could enter conv2 as the tf_activation variable. For an example of what feature map outputs look like, check out NVIDIA's results in their paper End-to-End Deep Learning for Self-Driving Cars in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image. <figure> <img src="visualize_cnn.png" width="380" alt="Combined Image" /> <figcaption> <p></p> <p style="text-align: center;"> Your output should look something like this (above)</p> </figcaption> </figure> <p></p> End of explanation
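As the optional visualization section above suggests, one quick sanity check is to compare feature maps from the trained checkpoint against a randomly initialized network on the same sign images. A minimal sketch, reusing the outputFeatureMap helper, the conv1 tensor and the test_images array defined earlier (the plt_num values are arbitrary figure ids, and the checkpoint path is the one saved during training):

```python
import tensorflow as tf

# Feature maps from an untrained network: initialize all variables randomly.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print("Convolution #1 (random weights)")
    outputFeatureMap(test_images, conv1, plt_num=1)

# Feature maps from the trained network: restore the saved checkpoint instead.
with tf.Session() as sess:
    saver.restore(sess, './traffic_classifier_data')
    print("Convolution #1 (trained weights)")
    outputFeatureMap(test_images, conv1, plt_num=2)
```

If training worked, the restored maps should react to the sign's outline and painted symbol much more strongly than the random ones.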
Given the following text description, write Python code to implement the functionality described below step by step Description: TF-Slim Walkthrough This notebook will walk you through the basics of using TF-Slim to define, train and evaluate neural networks on various tasks. It assumes a basic knowledge of neural networks. Table of contents <a href="#Install">Installation and setup</a><br> <a href='#MLP'>Creating your first neural network with TF-Slim</a><br> <a href='#ReadingTFSlimDatasets'>Reading Data with TF-Slim</a><br> <a href='#CNN'>Training a convolutional neural network (CNN)</a><br> <a href='#Pretained'>Using pre-trained models</a><br> Installation and setup <a id='Install'></a> As of 8/28/16, the latest stable release of TF is r0.10, which does not contain the latest version of slim. To obtain the latest version of TF-Slim, please install the most recent nightly build of TF as explained here. To use TF-Slim for image classification (as we do in this notebook), you also have to install the TF-Slim image models library from here. Let's suppose you install this into a directory called TF_MODELS. Then you should change directory to TF_MODELS/slim before running this notebook, so that these files are in your python path. To check you've got these two steps to work, just execute the cell below. If it complains about unknown modules, restart the notebook after moving to the TF-Slim models directory. Step2: Creating your first neural network with TF-Slim <a id='MLP'></a> Below we give some code to create a simple multilayer perceptron (MLP) which can be used for regression problems. The model has 2 hidden layers. The output is a single node. When this function is called, it will create various nodes, and silently add them to whichever global TF graph is currently in scope. When a node which corresponds to a layer with adjustable parameters (eg., a fully connected layer) is created, additional parameter variable nodes are silently created, and added to the graph. (We will discuss how to train the parameters later.) We use variable scope to put all the nodes under a common name, so that the graph has some hierarchical structure. This is useful when we want to visualize the TF graph in tensorboard, or if we want to query related variables. The fully connected layers all use the same L2 weight decay and ReLu activations, as specified by arg_scope. (However, the final layer overrides these defaults, and uses an identity activation function.) We also illustrate how to add a dropout layer after the first fully connected layer (FC1). Note that at test time, we do not drop out nodes, but instead use the average activations; hence we need to know whether the model is being constructed for training or testing, since the computational graph will be different in the two cases (although the variables, storing the model parameters, will be shared, since they have the same name/scope). Step3: Let's create the model and examine its structure. We create a TF graph and call regression_model(), which adds nodes (tensors) to the graph. We then examine their shape, and print the names of all the model variables which have been implicitly created inside of each layer. We see that the names of the variables follow the scopes that we specified. Step4: Let's create some 1d regression data . We will train and test the model on some noisy observations of a nonlinear function. Step5: Let's fit the model to the data The user has to specify the loss function and the optimizer, and slim does the rest. 
In particular, the slim.learning.train function does the following Step6: Training with multiple loss functions. Sometimes we have multiple objectives we want to simultaneously optimize. In slim, it is easy to add more losses, as we show below. (We do not optimize the total loss in this example, but we show how to compute it.) Step7: Let's load the saved model and use it for prediction. Step8: Let's compute various evaluation metrics on the test set. In TF-Slim termiology, losses are optimized, but metrics (which may not be differentiable, e.g., precision and recall) are just measured. As an illustration, the code below computes mean squared error and mean absolute error metrics on the test set. Each metric declaration creates several local variables (which must be initialized via tf.initialize_local_variables()) and returns both a value_op and an update_op. When evaluated, the value_op returns the current value of the metric. The update_op loads a new batch of data, runs the model, obtains the predictions and accumulates the metric statistics appropriately before returning the current value of the metric. We store these value nodes and update nodes in 2 dictionaries. After creating the metric nodes, we can pass them to slim.evaluation.evaluation, which repeatedly evaluates these nodes the specified number of times. (This allows us to compute the evaluation in a streaming fashion across minibatches, which is usefulf for large datasets.) Finally, we print the final value of each metric. Step9: Reading Data with TF-Slim <a id='ReadingTFSlimDatasets'></a> Reading data with TF-Slim has two main components Step10: Display some of the data. Step11: Convolutional neural nets (CNNs). <a id='CNN'></a> In this section, we show how to train an image classifier using a simple CNN. Define the model. Below we define a simple CNN. Note that the output layer is linear function - we will apply softmax transformation externally to the model, either in the loss function (for training), or in the prediction function (during testing). Step12: Apply the model to some randomly generated images. Step14: Train the model on the Flowers dataset. Before starting, make sure you've run the code to <a href="#DownloadFlowers">Download the Flowers</a> dataset. Now, we'll get a sense of what it looks like to use TF-Slim's training functions found in learning.py. First, we'll create a function, load_batch, that loads batches of dataset from a dataset. Next, we'll train a model for a single step (just to demonstrate the API), and evaluate the results. Step15: Evaluate some metrics. As we discussed above, we can compute various metrics besides the loss. Below we show how to compute prediction accuracy of the trained model, as well as top-5 classification accuracy. (The difference between evaluation and evaluation_loop is that the latter writes the results to a log directory, so they can be viewed in tensorboard.) Step16: Using pre-trained models <a id='Pretrained'></a> Neural nets work best when they have many parameters, making them very flexible function approximators. However, this means they must be trained on big datasets. Since this process is slow, we provide various pre-trained models - see the list here. You can either use these models as-is, or you can perform "surgery" on them, to modify them for some other task. For example, it is common to "chop off" the final pre-softmax layer, and replace it with a new set of weights corresponding to some new set of labels. 
You can then quickly fine tune the new model on a small new dataset. We illustrate this below, using inception-v1 as the base model. While models like Inception V3 are more powerful, Inception V1 is used for speed purposes. Take into account that VGG and ResNet final layers have only 1000 outputs rather than 1001. The ImageNet dataset provied has an empty background class which can be used to fine-tune the model to other tasks. VGG and ResNet models provided here don't use that class. We provide two examples of using pretrained models Step17: Apply Pre-trained Inception V1 model to Images. We have to convert each image to the size expected by the model checkpoint. There is no easy way to determine this size from the checkpoint itself. So we use a preprocessor to enforce this. Step18: Download the VGG-16 checkpoint Step19: Apply Pre-trained VGG-16 model to Images. We have to convert each image to the size expected by the model checkpoint. There is no easy way to determine this size from the checkpoint itself. So we use a preprocessor to enforce this. Pay attention to the difference caused by 1000 classes instead of 1001. Step21: Fine-tune the model on a different set of labels. We will fine tune the inception model on the Flowers dataset. Step22: Apply fine tuned model to some images.
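A note on the surgery described above (chopping off the final pre-softmax layer before fine-tuning on the Flowers labels, Step21): in TF-Slim this usually comes down to restoring every model variable except the logits scopes, so that only the new logits train from scratch. The sketch below is an assumption-level illustration rather than the walkthrough's own code; the scope names are the ones used by the slim Inception V1 implementation, and checkpoints_dir is wherever the checkpoint tarball is unpacked.

```python
import os
import tensorflow as tf

slim = tf.contrib.slim

def get_init_fn(checkpoints_dir):
    """Restores every pre-trained Inception V1 weight except the final logits
    layers, which are replaced and re-trained for the new label set."""
    # The "chopped off" pre-softmax layers are excluded from the restore.
    checkpoint_exclude_scopes = ['InceptionV1/Logits', 'InceptionV1/AuxLogits']

    variables_to_restore = []
    for var in slim.get_model_variables():
        if any(var.op.name.startswith(scope) for scope in checkpoint_exclude_scopes):
            continue  # these variables start from random weights
        variables_to_restore.append(var)

    return slim.assign_from_checkpoint_fn(
        os.path.join(checkpoints_dir, 'inception_v1.ckpt'),
        variables_to_restore)
```

The returned function can then be passed as init_fn to slim.learning.train so that fine-tuning starts from the pre-trained weights.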
Python Code: import matplotlib %matplotlib inline import matplotlib.pyplot as plt import math import numpy as np import tensorflow as tf import time from datasets import dataset_utils # Main slim library slim = tf.contrib.slim Explanation: TF-Slim Walkthrough This notebook will walk you through the basics of using TF-Slim to define, train and evaluate neural networks on various tasks. It assumes a basic knowledge of neural networks. Table of contents <a href="#Install">Installation and setup</a><br> <a href='#MLP'>Creating your first neural network with TF-Slim</a><br> <a href='#ReadingTFSlimDatasets'>Reading Data with TF-Slim</a><br> <a href='#CNN'>Training a convolutional neural network (CNN)</a><br> <a href='#Pretained'>Using pre-trained models</a><br> Installation and setup <a id='Install'></a> As of 8/28/16, the latest stable release of TF is r0.10, which does not contain the latest version of slim. To obtain the latest version of TF-Slim, please install the most recent nightly build of TF as explained here. To use TF-Slim for image classification (as we do in this notebook), you also have to install the TF-Slim image models library from here. Let's suppose you install this into a directory called TF_MODELS. Then you should change directory to TF_MODELS/slim before running this notebook, so that these files are in your python path. To check you've got these two steps to work, just execute the cell below. If it complains about unknown modules, restart the notebook after moving to the TF-Slim models directory. End of explanation def regression_model(inputs, is_training=True, scope="deep_regression"): Creates the regression model. Args: inputs: A node that yields a `Tensor` of size [batch_size, dimensions]. is_training: Whether or not we're currently training the model. scope: An optional variable_op scope for the model. Returns: predictions: 1-D `Tensor` of shape [batch_size] of responses. end_points: A dict of end points representing the hidden layers. with tf.variable_scope(scope, 'deep_regression', [inputs]): end_points = {} # Set the default weight _regularizer and acvitation for each fully_connected layer. with slim.arg_scope([slim.fully_connected], activation_fn=tf.nn.relu, weights_regularizer=slim.l2_regularizer(0.01)): # Creates a fully connected layer from the inputs with 32 hidden units. net = slim.fully_connected(inputs, 32, scope='fc1') end_points['fc1'] = net # Adds a dropout layer to prevent over-fitting. net = slim.dropout(net, 0.8, is_training=is_training) # Adds another fully connected layer with 16 hidden units. net = slim.fully_connected(net, 16, scope='fc2') end_points['fc2'] = net # Creates a fully-connected layer with a single hidden unit. Note that the # layer is made linear by setting activation_fn=None. predictions = slim.fully_connected(net, 1, activation_fn=None, scope='prediction') end_points['out'] = predictions return predictions, end_points Explanation: Creating your first neural network with TF-Slim <a id='MLP'></a> Below we give some code to create a simple multilayer perceptron (MLP) which can be used for regression problems. The model has 2 hidden layers. The output is a single node. When this function is called, it will create various nodes, and silently add them to whichever global TF graph is currently in scope. When a node which corresponds to a layer with adjustable parameters (eg., a fully connected layer) is created, additional parameter variable nodes are silently created, and added to the graph. 
(We will discuss how to train the parameters later.) We use variable scope to put all the nodes under a common name, so that the graph has some hierarchical structure. This is useful when we want to visualize the TF graph in tensorboard, or if we want to query related variables. The fully connected layers all use the same L2 weight decay and ReLu activations, as specified by arg_scope. (However, the final layer overrides these defaults, and uses an identity activation function.) We also illustrate how to add a dropout layer after the first fully connected layer (FC1). Note that at test time, we do not drop out nodes, but instead use the average activations; hence we need to know whether the model is being constructed for training or testing, since the computational graph will be different in the two cases (although the variables, storing the model parameters, will be shared, since they have the same name/scope). End of explanation with tf.Graph().as_default(): # Dummy placeholders for arbitrary number of 1d inputs and outputs inputs = tf.placeholder(tf.float32, shape=(None, 1)) outputs = tf.placeholder(tf.float32, shape=(None, 1)) # Build model predictions, end_points = regression_model(inputs) # Print name and shape of each tensor. print "Layers" for k, v in end_points.iteritems(): print 'name = {}, shape = {}'.format(v.name, v.get_shape()) # Print name and shape of parameter nodes (values not yet initialized) print "\n" print "Parameters" for v in slim.get_model_variables(): print 'name = {}, shape = {}'.format(v.name, v.get_shape()) Explanation: Let's create the model and examine its structure. We create a TF graph and call regression_model(), which adds nodes (tensors) to the graph. We then examine their shape, and print the names of all the model variables which have been implicitly created inside of each layer. We see that the names of the variables follow the scopes that we specified. End of explanation def produce_batch(batch_size, noise=0.3): xs = np.random.random(size=[batch_size, 1]) * 10 ys = np.sin(xs) + 5 + np.random.normal(size=[batch_size, 1], scale=noise) return [xs.astype(np.float32), ys.astype(np.float32)] x_train, y_train = produce_batch(200) x_test, y_test = produce_batch(200) plt.scatter(x_train, y_train) Explanation: Let's create some 1d regression data . We will train and test the model on some noisy observations of a nonlinear function. End of explanation def convert_data_to_tensors(x, y): inputs = tf.constant(x) inputs.set_shape([None, 1]) outputs = tf.constant(y) outputs.set_shape([None, 1]) return inputs, outputs # The following snippet trains the regression model using a mean_squared_error loss. ckpt_dir = '/tmp/regression_model/' with tf.Graph().as_default(): tf.logging.set_verbosity(tf.logging.INFO) inputs, targets = convert_data_to_tensors(x_train, y_train) # Make the model. predictions, nodes = regression_model(inputs, is_training=True) # Add the loss function to the graph. loss = tf.losses.mean_squared_error(labels=targets, predictions=predictions) # The total loss is the uers's loss plus any regularization losses. total_loss = slim.losses.get_total_loss() # Specify the optimizer and create the train op: optimizer = tf.train.AdamOptimizer(learning_rate=0.005) train_op = slim.learning.create_train_op(total_loss, optimizer) # Run the training inside a session. final_loss = slim.learning.train( train_op, logdir=ckpt_dir, number_of_steps=5000, save_summaries_secs=5, log_every_n_steps=500) print("Finished training. 
Last batch loss:", final_loss) print("Checkpoint saved in %s" % ckpt_dir) Explanation: Let's fit the model to the data The user has to specify the loss function and the optimizer, and slim does the rest. In particular, the slim.learning.train function does the following: For each iteration, evaluate the train_op, which updates the parameters using the optimizer applied to the current minibatch. Also, update the global_step. Occasionally store the model checkpoint in the specified directory. This is useful in case your machine crashes - then you can simply restart from the specified checkpoint. End of explanation with tf.Graph().as_default(): inputs, targets = convert_data_to_tensors(x_train, y_train) predictions, end_points = regression_model(inputs, is_training=True) # Add multiple loss nodes. mean_squared_error_loss = tf.losses.mean_squared_error(labels=targets, predictions=predictions) absolute_difference_loss = slim.losses.absolute_difference(predictions, targets) # The following two ways to compute the total loss are equivalent regularization_loss = tf.add_n(slim.losses.get_regularization_losses()) total_loss1 = mean_squared_error_loss + absolute_difference_loss + regularization_loss # Regularization Loss is included in the total loss by default. # This is good for training, but not for testing. total_loss2 = slim.losses.get_total_loss(add_regularization_losses=True) init_op = tf.global_variables_initializer() with tf.Session() as sess: sess.run(init_op) # Will initialize the parameters with random weights. total_loss1, total_loss2 = sess.run([total_loss1, total_loss2]) print('Total Loss1: %f' % total_loss1) print('Total Loss2: %f' % total_loss2) print('Regularization Losses:') for loss in slim.losses.get_regularization_losses(): print(loss) print('Loss Functions:') for loss in slim.losses.get_losses(): print(loss) Explanation: Training with multiple loss functions. Sometimes we have multiple objectives we want to simultaneously optimize. In slim, it is easy to add more losses, as we show below. (We do not optimize the total loss in this example, but we show how to compute it.) End of explanation with tf.Graph().as_default(): inputs, targets = convert_data_to_tensors(x_test, y_test) # Create the model structure. (Parameters will be loaded below.) predictions, end_points = regression_model(inputs, is_training=False) # Make a session which restores the old parameters from a checkpoint. sv = tf.train.Supervisor(logdir=ckpt_dir) with sv.managed_session() as sess: inputs, predictions, targets = sess.run([inputs, predictions, targets]) plt.scatter(inputs, targets, c='r'); plt.scatter(inputs, predictions, c='b'); plt.title('red=true, blue=predicted') Explanation: Let's load the saved model and use it for prediction. End of explanation with tf.Graph().as_default(): inputs, targets = convert_data_to_tensors(x_test, y_test) predictions, end_points = regression_model(inputs, is_training=False) # Specify metrics to evaluate: names_to_value_nodes, names_to_update_nodes = slim.metrics.aggregate_metric_map({ 'Mean Squared Error': slim.metrics.streaming_mean_squared_error(predictions, targets), 'Mean Absolute Error': slim.metrics.streaming_mean_absolute_error(predictions, targets) }) # Make a session which restores the old graph parameters, and then run eval. 
sv = tf.train.Supervisor(logdir=ckpt_dir) with sv.managed_session() as sess: metric_values = slim.evaluation.evaluation( sess, num_evals=1, # Single pass over data eval_op=names_to_update_nodes.values(), final_op=names_to_value_nodes.values()) names_to_values = dict(zip(names_to_value_nodes.keys(), metric_values)) for key, value in names_to_values.iteritems(): print('%s: %f' % (key, value)) Explanation: Let's compute various evaluation metrics on the test set. In TF-Slim termiology, losses are optimized, but metrics (which may not be differentiable, e.g., precision and recall) are just measured. As an illustration, the code below computes mean squared error and mean absolute error metrics on the test set. Each metric declaration creates several local variables (which must be initialized via tf.initialize_local_variables()) and returns both a value_op and an update_op. When evaluated, the value_op returns the current value of the metric. The update_op loads a new batch of data, runs the model, obtains the predictions and accumulates the metric statistics appropriately before returning the current value of the metric. We store these value nodes and update nodes in 2 dictionaries. After creating the metric nodes, we can pass them to slim.evaluation.evaluation, which repeatedly evaluates these nodes the specified number of times. (This allows us to compute the evaluation in a streaming fashion across minibatches, which is usefulf for large datasets.) Finally, we print the final value of each metric. End of explanation import tensorflow as tf from datasets import dataset_utils url = "http://download.tensorflow.org/data/flowers.tar.gz" flowers_data_dir = '/tmp/flowers' if not tf.gfile.Exists(flowers_data_dir): tf.gfile.MakeDirs(flowers_data_dir) dataset_utils.download_and_uncompress_tarball(url, flowers_data_dir) Explanation: Reading Data with TF-Slim <a id='ReadingTFSlimDatasets'></a> Reading data with TF-Slim has two main components: A Dataset and a DatasetDataProvider. The former is a descriptor of a dataset, while the latter performs the actions necessary for actually reading the data. Lets look at each one in detail: Dataset A TF-Slim Dataset contains descriptive information about a dataset necessary for reading it, such as the list of data files and how to decode them. It also contains metadata including class labels, the size of the train/test splits and descriptions of the tensors that the dataset provides. For example, some datasets contain images with labels. Others augment this data with bounding box annotations, etc. The Dataset object allows us to write generic code using the same API, regardless of the data content and encoding type. TF-Slim's Dataset works especially well when the data is stored as a (possibly sharded) TFRecords file, where each record contains a tf.train.Example protocol buffer. TF-Slim uses a consistent convention for naming the keys and values inside each Example record. DatasetDataProvider A DatasetDataProvider is a class which actually reads the data from a dataset. It is highly configurable to read the data in various ways that may make a big impact on the efficiency of your training process. For example, it can be single or multi-threaded. If your data is sharded across many files, it can read each files serially, or from every file simultaneously. Demo: The Flowers Dataset For convenience, we've include scripts to convert several common image datasets into TFRecord format and have provided the Dataset descriptor files necessary for reading them. 
We demonstrate how easy it is to use these dataset via the Flowers dataset below. Download the Flowers Dataset <a id='DownloadFlowers'></a> We've made available a tarball of the Flowers dataset which has already been converted to TFRecord format. End of explanation from datasets import flowers import tensorflow as tf slim = tf.contrib.slim with tf.Graph().as_default(): dataset = flowers.get_split('train', flowers_data_dir) data_provider = slim.dataset_data_provider.DatasetDataProvider( dataset, common_queue_capacity=32, common_queue_min=1) image, label = data_provider.get(['image', 'label']) with tf.Session() as sess: with slim.queues.QueueRunners(sess): for i in xrange(4): np_image, np_label = sess.run([image, label]) height, width, _ = np_image.shape class_name = name = dataset.labels_to_names[np_label] plt.figure() plt.imshow(np_image) plt.title('%s, %d x %d' % (name, height, width)) plt.axis('off') plt.show() Explanation: Display some of the data. End of explanation def my_cnn(images, num_classes, is_training): # is_training is not used... with slim.arg_scope([slim.max_pool2d], kernel_size=[3, 3], stride=2): net = slim.conv2d(images, 64, [5, 5]) net = slim.max_pool2d(net) net = slim.conv2d(net, 64, [5, 5]) net = slim.max_pool2d(net) net = slim.flatten(net) net = slim.fully_connected(net, 192) net = slim.fully_connected(net, num_classes, activation_fn=None) return net Explanation: Convolutional neural nets (CNNs). <a id='CNN'></a> In this section, we show how to train an image classifier using a simple CNN. Define the model. Below we define a simple CNN. Note that the output layer is linear function - we will apply softmax transformation externally to the model, either in the loss function (for training), or in the prediction function (during testing). End of explanation import tensorflow as tf with tf.Graph().as_default(): # The model can handle any input size because the first layer is convolutional. # The size of the model is determined when image_node is first passed into the my_cnn function. # Once the variables are initialized, the size of all the weight matrices is fixed. # Because of the fully connected layers, this means that all subsequent images must have the same # input size as the first image. batch_size, height, width, channels = 3, 28, 28, 3 images = tf.random_uniform([batch_size, height, width, channels], maxval=1) # Create the model. num_classes = 10 logits = my_cnn(images, num_classes, is_training=True) probabilities = tf.nn.softmax(logits) # Initialize all the variables (including parameters) randomly. init_op = tf.global_variables_initializer() with tf.Session() as sess: # Run the init_op, evaluate the model outputs and print the results: sess.run(init_op) probabilities = sess.run(probabilities) print('Probabilities Shape:') print(probabilities.shape) # batch_size x num_classes print('\nProbabilities:') print(probabilities) print('\nSumming across all classes (Should equal 1):') print(np.sum(probabilities, 1)) # Each row sums to 1 Explanation: Apply the model to some randomly generated images. End of explanation from preprocessing import inception_preprocessing import tensorflow as tf slim = tf.contrib.slim def load_batch(dataset, batch_size=32, height=299, width=299, is_training=False): Loads a single batch of data. Args: dataset: The dataset to load. batch_size: The number of images in the batch. height: The size of each image after preprocessing. width: The size of each image after preprocessing. is_training: Whether or not we're currently training or evaluating. 
Returns: images: A Tensor of size [batch_size, height, width, 3], image samples that have been preprocessed. images_raw: A Tensor of size [batch_size, height, width, 3], image samples that can be used for visualization. labels: A Tensor of size [batch_size], whose values range between 0 and dataset.num_classes. data_provider = slim.dataset_data_provider.DatasetDataProvider( dataset, common_queue_capacity=32, common_queue_min=8) image_raw, label = data_provider.get(['image', 'label']) # Preprocess image for usage by Inception. image = inception_preprocessing.preprocess_image(image_raw, height, width, is_training=is_training) # Preprocess the image for display purposes. image_raw = tf.expand_dims(image_raw, 0) image_raw = tf.image.resize_images(image_raw, [height, width]) image_raw = tf.squeeze(image_raw) # Batch it up. images, images_raw, labels = tf.train.batch( [image, image_raw, label], batch_size=batch_size, num_threads=1, capacity=2 * batch_size) return images, images_raw, labels from datasets import flowers # This might take a few minutes. train_dir = '/tmp/tfslim_model/' print('Will save model to %s' % train_dir) with tf.Graph().as_default(): tf.logging.set_verbosity(tf.logging.INFO) dataset = flowers.get_split('train', flowers_data_dir) images, _, labels = load_batch(dataset) # Create the model: logits = my_cnn(images, num_classes=dataset.num_classes, is_training=True) # Specify the loss function: one_hot_labels = slim.one_hot_encoding(labels, dataset.num_classes) slim.losses.softmax_cross_entropy(logits, one_hot_labels) total_loss = slim.losses.get_total_loss() # Create some summaries to visualize the training process: tf.summary.scalar('losses/Total Loss', total_loss) # Specify the optimizer and create the train op: optimizer = tf.train.AdamOptimizer(learning_rate=0.01) train_op = slim.learning.create_train_op(total_loss, optimizer) # Run the training: final_loss = slim.learning.train( train_op, logdir=train_dir, number_of_steps=1, # For speed, we just do 1 epoch save_summaries_secs=1) print('Finished training. Final batch loss %d' % final_loss) Explanation: Train the model on the Flowers dataset. Before starting, make sure you've run the code to <a href="#DownloadFlowers">Download the Flowers</a> dataset. Now, we'll get a sense of what it looks like to use TF-Slim's training functions found in learning.py. First, we'll create a function, load_batch, that loads batches of dataset from a dataset. Next, we'll train a model for a single step (just to demonstrate the API), and evaluate the results. End of explanation from datasets import flowers # This might take a few minutes. 
with tf.Graph().as_default(): tf.logging.set_verbosity(tf.logging.DEBUG) dataset = flowers.get_split('train', flowers_data_dir) images, _, labels = load_batch(dataset) logits = my_cnn(images, num_classes=dataset.num_classes, is_training=False) predictions = tf.argmax(logits, 1) # Define the metrics: names_to_values, names_to_updates = slim.metrics.aggregate_metric_map({ 'eval/Accuracy': slim.metrics.streaming_accuracy(predictions, labels), 'eval/Recall@5': slim.metrics.streaming_recall_at_k(logits, labels, 5), }) print('Running evaluation Loop...') checkpoint_path = tf.train.latest_checkpoint(train_dir) metric_values = slim.evaluation.evaluate_once( master='', checkpoint_path=checkpoint_path, logdir=train_dir, eval_op=names_to_updates.values(), final_op=names_to_values.values()) names_to_values = dict(zip(names_to_values.keys(), metric_values)) for name in names_to_values: print('%s: %f' % (name, names_to_values[name])) Explanation: Evaluate some metrics. As we discussed above, we can compute various metrics besides the loss. Below we show how to compute prediction accuracy of the trained model, as well as top-5 classification accuracy. (The difference between evaluation and evaluation_loop is that the latter writes the results to a log directory, so they can be viewed in tensorboard.) End of explanation from datasets import dataset_utils url = "http://download.tensorflow.org/models/inception_v1_2016_08_28.tar.gz" checkpoints_dir = '/tmp/checkpoints' if not tf.gfile.Exists(checkpoints_dir): tf.gfile.MakeDirs(checkpoints_dir) dataset_utils.download_and_uncompress_tarball(url, checkpoints_dir) Explanation: Using pre-trained models <a id='Pretrained'></a> Neural nets work best when they have many parameters, making them very flexible function approximators. However, this means they must be trained on big datasets. Since this process is slow, we provide various pre-trained models - see the list here. You can either use these models as-is, or you can perform "surgery" on them, to modify them for some other task. For example, it is common to "chop off" the final pre-softmax layer, and replace it with a new set of weights corresponding to some new set of labels. You can then quickly fine tune the new model on a small new dataset. We illustrate this below, using inception-v1 as the base model. While models like Inception V3 are more powerful, Inception V1 is used for speed purposes. Take into account that VGG and ResNet final layers have only 1000 outputs rather than 1001. The ImageNet dataset provied has an empty background class which can be used to fine-tune the model to other tasks. VGG and ResNet models provided here don't use that class. We provide two examples of using pretrained models: Inception V1 and VGG-19 models to highlight this difference. 
Download the Inception V1 checkpoint End of explanation import numpy as np import os import tensorflow as tf import urllib2 from datasets import imagenet from nets import inception from preprocessing import inception_preprocessing slim = tf.contrib.slim image_size = inception.inception_v1.default_image_size with tf.Graph().as_default(): url = 'https://upload.wikimedia.org/wikipedia/commons/7/70/EnglishCockerSpaniel_simon.jpg' image_string = urllib2.urlopen(url).read() image = tf.image.decode_jpeg(image_string, channels=3) processed_image = inception_preprocessing.preprocess_image(image, image_size, image_size, is_training=False) processed_images = tf.expand_dims(processed_image, 0) # Create the model, use the default arg scope to configure the batch norm parameters. with slim.arg_scope(inception.inception_v1_arg_scope()): logits, _ = inception.inception_v1(processed_images, num_classes=1001, is_training=False) probabilities = tf.nn.softmax(logits) init_fn = slim.assign_from_checkpoint_fn( os.path.join(checkpoints_dir, 'inception_v1.ckpt'), slim.get_model_variables('InceptionV1')) with tf.Session() as sess: init_fn(sess) np_image, probabilities = sess.run([image, probabilities]) probabilities = probabilities[0, 0:] sorted_inds = [i[0] for i in sorted(enumerate(-probabilities), key=lambda x:x[1])] plt.figure() plt.imshow(np_image.astype(np.uint8)) plt.axis('off') plt.show() names = imagenet.create_readable_names_for_imagenet_labels() for i in range(5): index = sorted_inds[i] print('Probability %0.2f%% => [%s]' % (probabilities[index], names[index])) Explanation: Apply Pre-trained Inception V1 model to Images. We have to convert each image to the size expected by the model checkpoint. There is no easy way to determine this size from the checkpoint itself. So we use a preprocessor to enforce this. End of explanation from datasets import dataset_utils import tensorflow as tf url = "http://download.tensorflow.org/models/vgg_16_2016_08_28.tar.gz" checkpoints_dir = '/tmp/checkpoints' if not tf.gfile.Exists(checkpoints_dir): tf.gfile.MakeDirs(checkpoints_dir) dataset_utils.download_and_uncompress_tarball(url, checkpoints_dir) Explanation: Download the VGG-16 checkpoint End of explanation import numpy as np import os import tensorflow as tf import urllib2 from datasets import imagenet from nets import vgg from preprocessing import vgg_preprocessing slim = tf.contrib.slim image_size = vgg.vgg_16.default_image_size with tf.Graph().as_default(): url = 'https://upload.wikimedia.org/wikipedia/commons/d/d9/First_Student_IC_school_bus_202076.jpg' image_string = urllib2.urlopen(url).read() image = tf.image.decode_jpeg(image_string, channels=3) processed_image = vgg_preprocessing.preprocess_image(image, image_size, image_size, is_training=False) processed_images = tf.expand_dims(processed_image, 0) # Create the model, use the default arg scope to configure the batch norm parameters. with slim.arg_scope(vgg.vgg_arg_scope()): # 1000 classes instead of 1001. 
logits, _ = vgg.vgg_16(processed_images, num_classes=1000, is_training=False) probabilities = tf.nn.softmax(logits) init_fn = slim.assign_from_checkpoint_fn( os.path.join(checkpoints_dir, 'vgg_16.ckpt'), slim.get_model_variables('vgg_16')) with tf.Session() as sess: init_fn(sess) np_image, probabilities = sess.run([image, probabilities]) probabilities = probabilities[0, 0:] sorted_inds = [i[0] for i in sorted(enumerate(-probabilities), key=lambda x:x[1])] plt.figure() plt.imshow(np_image.astype(np.uint8)) plt.axis('off') plt.show() names = imagenet.create_readable_names_for_imagenet_labels() for i in range(5): index = sorted_inds[i] # Shift the index of a class name by one. print('Probability %0.2f%% => [%s]' % (probabilities[index], names[index+1])) Explanation: Apply Pre-trained VGG-16 model to Images. We have to convert each image to the size expected by the model checkpoint. There is no easy way to determine this size from the checkpoint itself. So we use a preprocessor to enforce this. Pay attention to the difference caused by 1000 classes instead of 1001. End of explanation # Note that this may take several minutes. import os from datasets import flowers from nets import inception from preprocessing import inception_preprocessing slim = tf.contrib.slim image_size = inception.inception_v1.default_image_size def get_init_fn(): Returns a function run by the chief worker to warm-start the training. checkpoint_exclude_scopes=["InceptionV1/Logits", "InceptionV1/AuxLogits"] exclusions = [scope.strip() for scope in checkpoint_exclude_scopes] variables_to_restore = [] for var in slim.get_model_variables(): excluded = False for exclusion in exclusions: if var.op.name.startswith(exclusion): excluded = True break if not excluded: variables_to_restore.append(var) return slim.assign_from_checkpoint_fn( os.path.join(checkpoints_dir, 'inception_v1.ckpt'), variables_to_restore) train_dir = '/tmp/inception_finetuned/' with tf.Graph().as_default(): tf.logging.set_verbosity(tf.logging.INFO) dataset = flowers.get_split('train', flowers_data_dir) images, _, labels = load_batch(dataset, height=image_size, width=image_size) # Create the model, use the default arg scope to configure the batch norm parameters. with slim.arg_scope(inception.inception_v1_arg_scope()): logits, _ = inception.inception_v1(images, num_classes=dataset.num_classes, is_training=True) # Specify the loss function: one_hot_labels = slim.one_hot_encoding(labels, dataset.num_classes) slim.losses.softmax_cross_entropy(logits, one_hot_labels) total_loss = slim.losses.get_total_loss() # Create some summaries to visualize the training process: tf.summary.scalar('losses/Total Loss', total_loss) # Specify the optimizer and create the train op: optimizer = tf.train.AdamOptimizer(learning_rate=0.01) train_op = slim.learning.create_train_op(total_loss, optimizer) # Run the training: final_loss = slim.learning.train( train_op, logdir=train_dir, init_fn=get_init_fn(), number_of_steps=2) print('Finished training. Last batch loss %f' % final_loss) Explanation: Fine-tune the model on a different set of labels. We will fine tune the inception model on the Flowers dataset. 
End of explanation import numpy as np import tensorflow as tf from datasets import flowers from nets import inception slim = tf.contrib.slim image_size = inception.inception_v1.default_image_size batch_size = 3 with tf.Graph().as_default(): tf.logging.set_verbosity(tf.logging.INFO) dataset = flowers.get_split('train', flowers_data_dir) images, images_raw, labels = load_batch(dataset, height=image_size, width=image_size) # Create the model, use the default arg scope to configure the batch norm parameters. with slim.arg_scope(inception.inception_v1_arg_scope()): logits, _ = inception.inception_v1(images, num_classes=dataset.num_classes, is_training=True) probabilities = tf.nn.softmax(logits) checkpoint_path = tf.train.latest_checkpoint(train_dir) init_fn = slim.assign_from_checkpoint_fn( checkpoint_path, slim.get_variables_to_restore()) with tf.Session() as sess: with slim.queues.QueueRunners(sess): sess.run(tf.initialize_local_variables()) init_fn(sess) np_probabilities, np_images_raw, np_labels = sess.run([probabilities, images_raw, labels]) for i in xrange(batch_size): image = np_images_raw[i, :, :, :] true_label = np_labels[i] predicted_label = np.argmax(np_probabilities[i, :]) predicted_name = dataset.labels_to_names[predicted_label] true_name = dataset.labels_to_names[true_label] plt.figure() plt.imshow(image.astype(np.uint8)) plt.title('Ground Truth: [%s], Prediction [%s]' % (true_name, predicted_name)) plt.axis('off') plt.show() Explanation: Apply fine tuned model to some images. End of explanation
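As an optional follow-up (not part of the original walkthrough), we can sanity-check the fine-tuned checkpoint with the same slim evaluation calls used earlier in this notebook. This is a minimal sketch that assumes flowers_data_dir, train_dir, load_batch and image_size are still defined from the cells above; note that it builds the network with is_training=False for evaluation.

import tensorflow as tf
from datasets import flowers
from nets import inception
slim = tf.contrib.slim

with tf.Graph().as_default():
    tf.logging.set_verbosity(tf.logging.INFO)
    # Reuse the training split here only because it is the split shown above;
    # a held-out split would normally be preferred.
    dataset = flowers.get_split('train', flowers_data_dir)
    images, _, labels = load_batch(dataset, height=image_size, width=image_size)
    with slim.arg_scope(inception.inception_v1_arg_scope()):
        logits, _ = inception.inception_v1(images, num_classes=dataset.num_classes,
                                           is_training=False)
    predictions = tf.argmax(logits, 1)
    names_to_values, names_to_updates = slim.metrics.aggregate_metric_map({
        'eval/Accuracy': slim.metrics.streaming_accuracy(predictions, labels),
    })
    checkpoint_path = tf.train.latest_checkpoint(train_dir)
    metric_values = slim.evaluation.evaluate_once(
        master='',
        checkpoint_path=checkpoint_path,
        logdir=train_dir,
        eval_op=names_to_updates.values(),
        final_op=names_to_values.values())
    for name, value in zip(names_to_values.keys(), metric_values):
        print('%s: %f' % (name, value))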
6,402
Given the following text description, write Python code to implement the functionality described below step by step Description: In-Class Demonstration and Visualization Step2: The function plot_taylor_approximations included here was written by Fernando Perez and was part of work on the original IPython project. Although attribution seems to have been lost over time, we gratefully acknowledge FP and thank him for this code! Step3: Lecture 10 Step4: It is important to read about SymPy symbols at this time. We can generate a sequence using SeqFormula. Step5: if we want the limit of the sequence at infinity Step6: DIY Step7: Infinite Series A series is the sum of a sequence. An infinite series will converge if the partial sums of the series has a finite limit. For example, examine the partial sums of the series Step8: A power series is of the form Step9: We can define the series about the point $a$ as follows Step10: SymPy has a function that can take SymPy expressions and represent them as power series Step11: DIY Step12: Taylor's Series Below we present a derivation of Taylor's series and small algebraic argument for series representations of functions. In contrast to the ability to use sympy functions without any deeper understanding, these presentations are intended to give you insight into the origin of the series representation and the factors present within each term. While the algebraic presentation isn't a general case, the essential elements of a general polynomial representation are visible. The function $f(x)$ can be expanded into an infinite series or a finite series plus an error term. Assume that the function has a continuous nth derivative over the interval $a \le x \le b$. Integrate the nth derivative n times Step13: To help us get our work done we can use sympy's diff function. Testing this function with a known result, we can write Step14: A list comprehension is used to organize the results. In each iteration the exact function and the power series are differentiated and stored as an element of a list. The list can be inspected and a set of simultaneous equations can be written down and solved to determine the values of the coefficients. Casting the list as a sympy Matrix object clarifies the correspondance between entries in the list. Step15: A list comprehension can be used to organize and extend the results further. We can wrap the list above into another list that changes the order of differentiation each iteration. Step16: Casting the results as a sympy Matrix object the list is more easily viewed in the Jupyter notebook Step17: DIY Step18: Your markdown here. Computing a Taylor's Series Symbolically Using sympy the Taylor's series can be computed symbolically. Step19: One of the major uses of Taylor's series in computation is for the evaluation of derivatives. Take note of the fact that the derivatives of a function appear in the evaluation of the series. Computing Derivatives of Discrete Data It may be straightforward to compute the derivative of some functions. For example Step20: Meaning that Step21: Meaning that Step22: Remember that sympy expressions are zero by default. So this is true
Python Code: %matplotlib inline import numpy as np import sympy as sp import matplotlib.pyplot as plt # You can change the default figure size to be a bit larger if you want, # uncomment the next line for that: plt.rc('figure', figsize=(10, 6)) Explanation: In-Class Demonstration and Visualization End of explanation def plot_taylor_approximations(func, x0=None, orders=(2, 4), xrange=(0,1), yrange=None, npts=200): Plot the Taylor series approximations to a function at various orders. Parameters ---------- func : a sympy function x0 : float Origin of the Taylor series expansion. If not given, x0=xrange[0]. orders : list List of integers with the orders of Taylor series to show. Default is (2, 4). xrange : 2-tuple or array. Either an (xmin, xmax) tuple indicating the x range for the plot (default is (0, 1)), or the actual array of values to use. yrange : 2-tuple (ymin, ymax) tuple indicating the y range for the plot. If not given, the full range of values will be automatically used. npts : int Number of points to sample the x range with. Default is 200. if not callable(func): raise ValueError('func must be callable') if isinstance(xrange, (list, tuple)): x = np.linspace(float(xrange[0]), float(xrange[1]), npts) else: x = xrange if x0 is None: x0 = x[0] xs = sp.Symbol('x') # Make a numpy-callable form of the original function for plotting fx = func(xs) f = sp.lambdify(xs, fx, modules=['numpy']) # We could use latex(fx) instead of str(), but matploblib gets confused # with some of the (valid) latex constructs sympy emits. So we play it safe. plt.plot(x, f(x), label=str(fx), lw=2) # Build the Taylor approximations, plotting as we go apps = {} for order in orders: app = fx.series(xs, x0, n=order).removeO() apps[order] = app # Must be careful here: if the approximation is a constant, we can't # blindly use lambdify as it won't do the right thing. In that case, # evaluate the number as a float and fill the y array with that value. if isinstance(app, sp.numbers.Number): y = np.zeros_like(x) y.fill(app.evalf()) else: fa = sp.lambdify(xs, app, modules=['numpy']) y = fa(x) tex = sp.latex(app).replace('$', '') plt.plot(x, y, label=r'$n=%s:\, %s$' % (order, tex) ) # Plot refinements if yrange is not None: plt.ylim(*yrange) plt.grid() plt.legend(loc='best').get_frame().set_alpha(0.8) # For an expression made from elementary functions, we must first make it into # a callable function, the simplest way is to use the Python lambda construct. # plot_taylor_approximations(lambda x: 1/sp.cos(x), 0, [2,4,6,8], (0, 2*sp.pi), (-5,5)) plot_taylor_approximations(sp.sin, 0, [2, 4, 6, 8], (0, 2*sp.pi), (-2,2)) Explanation: The function plot_taylor_approximations included here was written by Fernando Perez and was part of work on the original IPython project. Although attribution seems to have been lost over time, we gratefully acknowledge FP and thank him for this code! End of explanation import sympy as sp x, y, z, t, a = sp.symbols('x y z t a') k, m, n = sp.symbols('k m n', integer=True) f, g, h = sp.symbols('f g h', cls=sp.Function) sp.var('a1:6') sp.init_printing() Explanation: Lecture 10: Taylor's Series and Discrete Calculus Background It is common in physics and engineering to represent transcendental functions and other nonlinear expressions using a few terms from a Taylor series. This series provides a fast and efficient way to compute quantities such as $\mathrm{sin}(x)$ or $e^x$ to a prescribed error. 
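As a quick numerical aside (not part of the original lecture notes), the partial sums of the sine series already show this rapid convergence: a handful of terms reproduces sin(1) to several digits. The loop below is plain Python and assumes nothing beyond the standard library.

import math

x_val, total = 1.0, 0.0
for n in range(1, 12, 2):                      # odd powers only
    total += (-1) ** (n // 2) * x_val ** n / math.factorial(n)
    print('terms through x^%d: %.10f (error %.2e)'
          % (n, total, abs(total - math.sin(x_val))))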
Learning how to calculate the series representation of these functions will provide practical experience with the Taylor series and help the student understand the results of Python methods designed to accelerate and simplify computations. The series can be written generally as: $$f(x) = f{\left (0 \right )} + x \left. \frac{d}{d x} f{\left (x \right )} \right|{\substack{ x=0 }} + \frac{x^{2}}{2} \left. \frac{d^{2}}{d x^{2}} f{\left (x \right )} \right|{\substack{ x=0 }} + \frac{x^{3}}{6} \left. \frac{d^{3}}{d x^{3}} f{\left (x \right )} \right|{\substack{ x=0 }} + \frac{x^{4}}{24} \left. \frac{d^{4}}{d x^{4}} f{\left (x \right )} \right|{\substack{ x=0 }} + \frac{x^{5}}{120} \left. \frac{d^{5}}{d x^{5}} f{\left (x \right )} \right|_{\substack{ x=0 }} + \mathcal{O}\left(x^{6}\right)$$ Of equal importance the Taylor series permits discrete representation of derivatives and is a common way to perform numerical integration of partial and ordinary differential equations. Expansion of a general function $f(x)$ about a point, coupled with algebraic manipulation, will produce expressions that can be used to approximate derivative quantities. Although any order of derivative can be computed, this lesson will focus on first and second derivatives that will be encountered in the diffusion equation. What Skills Will I Learn? You will practice the following skills: Defining and determining the limits of infinite sequences, series and power series. Define the Taylor series and write the general form about any point and to any order. Derive the central and forward difference formulae for numerical derivatives using the Taylor's series. What Steps Should I Take? Learn to use Sympy to define and find the limits of sequences are series. Learn how to approximate transcendental functions about center points of your choosing. Differentiate an explicit series representation of a function to see that the coefficients of such a series can be determined algebraically. Use Sympy to compute a power series symbolically Derive the finite difference expressions for the first and second derivatives. Read the relevant pages from Hornbeck's text on numerical methods. Generate a list of values that approximate the function $f(x)=x^8$ on the domain ${x | 0 \leq x \leq 1}$. Using these values, numerically compute the derivative at your selected grid points and compare it to the analytical solution. Using this technique, examine how the observed error changes as the number of grid points is varied. Visualize and explain the results. Prepare a new notebook (not just modifications to this one) that describes your approach. Optional challenge: A list is one of the fundamental data structures within Python. Numpy (a Python library) and other parts of Python libraries use vectorized computations. From Wikipedia, vectorization is "a style of computer programming where operations are applied to whole arrays instead of individual elements." With this in mind, we certainly can iterate over our list of points and apply the function that you will soon write in an element by element fashion, however, it is a more common practice in Python and other modern languages to write vectorized code. If this is your first exposure to vectorized computation, I recommend two initial strategies: write out your algorithms and use "classic" flow control and iteration to compute the results. From that point you will more easily see the strategy you should use to write vectorized code. 
Using the discrete forms of the first and second derivatives (based on central differences) can you devise a vectorized operation that computes the derivative without looping in Python? A Sucessful Jupyter Notebook Will Present a description of the essential elements of Taylor's series and how to compute numerical derivatives; Identify the audience for which the work is intended; Run the code necessary to compute and visualize the error associated with the second order approximation and the changes in grid point spacing; Provide a narrative and equations to explain why your approach is relevant to solving the problem; Provide references and citations to any others' work you use to complete the assignment; Be checked into your GitHub repository by the due date (one week from assignment). A high quality communication provides an organized, logically progressing, blend of narrative, equations, and code that teaches the reader a particular topic or idea. You will be assessed on: * The functionality of the code (i.e. it should perform the task assigned). * The narrative you present. I should be able to read and learn from it. Choose your audience wisely. * The supporting equations and figures you choose to include. If your notebook is just computer code your assignment will be marked incomplete. Reading and Reference Essential Mathematical Methods for Physicists, H. Weber and G. Arfken, Academic Press, 2003 Advanced engineering Mathematics, E. Kreyszig, John wiley and Sons, 2010 Numerical Recipes, W. Press, Cambridge University Press, 1986 Numerical Methods, R. Hornbeck, Quantum Publishers, 1975 Infinite Sequences Ideas relating to sequences, series, and power series are used in the formulation of integral calculus and in the construction of polynomial representations of functions. The limit of functions will also be investigated as boundary conditions for differential equations. For this reason understanding concepts related to sequences and series are important to review. A sequence is an ordered list of numbers. A list such as the following represents a sequence: $$a_1, a_2, a_3, a_4, \dots, a_n, \dots $$ The sequence maps one value $a_n$ for every integer $n$. It is typical to provide a formula for construction of the nth term in the sequence. While ad-hoc strategies could be used to develop sequences using SymPy and lists in Python, SymPy has a sequence class that can be used. A short demonstration is provided next: End of explanation a1 = sp.SeqFormula(n**2, (n,0,5)) list(a1) Explanation: It is important to read about SymPy symbols at this time. We can generate a sequence using SeqFormula. End of explanation sp.limit_seq(a1.formula, n) Explanation: if we want the limit of the sequence at infinity: $$[0, 1, 4, 9, \ldots ]$$ we can use limit_seq: End of explanation # Your code here. Explanation: DIY: Determine if the following sequences are convergent or divergent. If convergent, what is the limit? $$ \begin{aligned} a_n = & \frac{1}{n} \ a_n = & 1 - (0.2)^n \ a_n = & \frac{1}{2n+1} \end{aligned} $$ End of explanation a2 = sp.Sum(1/2**n, (n,0,1)) a2 a2.doit() a4 = sp.Sum(n**2, (n,0,5)) a4 a5 = sp.Sum(k**2, (k, 1, m)) a5 a4.doit() a5.doit() Explanation: Infinite Series A series is the sum of a sequence. An infinite series will converge if the partial sums of the series has a finite limit. 
For example, examine the partial sums of the series: $$ \sum^{\infty}_{n=1} \frac{1}{2^n} $$ End of explanation M = sp.IndexedBase('M') sp.Sum(M[n]*x**n, (n,0,m)) Explanation: A power series is of the form: $$ \sum_{n=0}^{\infty} M_{n} x^{n} = M_0 + M_1 x + M_2 x^2 + \cdots $$ End of explanation sp.Sum(M[n]*(x-a)**n, (n,0,m)) Explanation: We can define the series about the point $a$ as follows: End of explanation sp.series(f(x), x, x0=0) Explanation: SymPy has a function that can take SymPy expressions and represent them as power series: End of explanation # Your code here. Explanation: DIY: Use SymPy to determine series approximations to $e^x$, $sin(x)$, and $cos(x)$ about the point $x=0$. End of explanation import sympy as sp sp.init_printing() x, A, B, C, D, E = sp.symbols('x, A, B, C, D, E') Explanation: Taylor's Series Below we present a derivation of Taylor's series and small algebraic argument for series representations of functions. In contrast to the ability to use sympy functions without any deeper understanding, these presentations are intended to give you insight into the origin of the series representation and the factors present within each term. While the algebraic presentation isn't a general case, the essential elements of a general polynomial representation are visible. The function $f(x)$ can be expanded into an infinite series or a finite series plus an error term. Assume that the function has a continuous nth derivative over the interval $a \le x \le b$. Integrate the nth derivative n times: $$\int_a^x f^n(x) dx = f^{(n-1)}(x) - f^{(n-1)}(a)$$ The power on the function $f$ in the equation above indicates the order of the derivative. Do this n times and then solve for f(x) to recover Taylor's series. One of the key features in this derivation is that the integral is definite. This derivation is outlined on Wolfram’s Mathworld. As a second exercise, assume that we wish to expand sin(x) about x=0. First, assume that the series exists and can be written as a power series with unknown coefficients. As a first step, differentiate the series and the function we are expanding. Next, let the value of x go to the value of the expansion point and it will be possible to evaluate the coefficients in turn: $$ \sin x = A+Bx+Cx^2+Dx^3+Ex^4 $$ We can choose an expansion point (e.g. $x = 0$) and differentiate to get a set of simultaneous equations permitting determination of the coefficients. The computer algebra system can help us with this activity: End of explanation sp.diff(sp.sin(x),x) Explanation: To help us get our work done we can use sympy's diff function. Testing this function with a known result, we can write: End of explanation orderOfDifferentiation = 1 powerSeries = A+B*x+C*x**2+D*x**3+E*x**4 # Differentiate, element by element, the list [sp.sin(x),powerSeries] [sp.diff(a,x,orderOfDifferentiation) for a in [sp.sin(x),powerSeries]] Explanation: A list comprehension is used to organize the results. In each iteration the exact function and the power series are differentiated and stored as an element of a list. The list can be inspected and a set of simultaneous equations can be written down and solved to determine the values of the coefficients. Casting the list as a sympy Matrix object clarifies the correspondance between entries in the list. 
End of explanation maximumOrder = 5 funcAndSeries = [[sp.diff(a,x,order) for a in [sp.sin(x),powerSeries]] for order in range(maximumOrder)] funcAndSeries Explanation: A list comprehension can be used to organize and extend the results further. We can wrap the list above into another list that changes the order of differentiation each iteration. End of explanation sp.Matrix(funcAndSeries) Explanation: Casting the results as a sympy Matrix object the list is more easily viewed in the Jupyter notebook: End of explanation # Your code here if you feel you need it. Explanation: DIY: Determine the coefficients in the above power series. You don't necessarily need to write code to complete this DIY problem. End of explanation from sympy import init_printing, symbols, Function init_printing() x, c = symbols("x,c") f = Function("f") f(x).series(x, x0=c, n=3) Explanation: Your markdown here. Computing a Taylor's Series Symbolically Using sympy the Taylor's series can be computed symbolically. End of explanation x, h, c = sp.symbols("x,h,c") f = sp.Function("f") # the .subs() method replaces occurences of 'x' with something else taylorExpansionPlus = f(x).series(x, x0=c, n=3).removeO().subs(x,c+h) taylorExpansionMinus = f(x).series(x, x0=c, n=3).removeO().subs(x,c-h) taylorExpansionPlus Explanation: One of the major uses of Taylor's series in computation is for the evaluation of derivatives. Take note of the fact that the derivatives of a function appear in the evaluation of the series. Computing Derivatives of Discrete Data It may be straightforward to compute the derivative of some functions. For example: $$f(x) = x^2$$ $$f'(x) = 2x$$ In numerical computing situations there is no analytical solution to the problem being solved and therefore no function to integrate or differentiate. The approximate solution is available as a list of discrete points in the domain of the problem's independent variables (e.g. space, time). The values could be represented as a list of numbers: $${f(x_0), f(x_1), f(x_2), ...}$$ The neighboring points $f(x_0)$ and $f(x_1)$ are seperated by a distance $\Delta x = x_1 - x_0$ in the independent variable. Although this will not be apparent from the values, it is implicit in the structure of the data. Taylor's series can be used to compute approximate derivatives of the unknown function directly from the list of points in this situation. We are going to compute a series expansion for an unknown function $f(x)$ in the vicinity of the point $c$ and then examine the relationship between that function and it's derivative quantities at a point $c\pm h$. The goal of the activity is to see if we can find expressions for the derivatives using the data point of interest ($c$) and its neighbors ($c \pm h$). We are going to use the idea of forward and backward differences. Forward differences are computed by expanding an unknown function in a Taylor series about a point “x=c” and then letting x go to c+h. Then, for backward differences, let x go to c-h. Symbolically Computing Forward and Backward Differences In the figure below we indicate the following: the unknown function $f(x)$ as a dashed line the point about which the unknown function is expanded at $x=c$ the distance between successive points is shown as $h$ the approximate values of the function given at the filled squares Imagine that we take the above series expansion and use it to compute the value of the function near the point $c$. 
Let us evaluate this series by adding and subtracting to the independent varable the quantity $h$. To accomplish this we write down the series expansion for our function about the point $c$, then we let the independent variable $x \rightarrow c+h$ and $c-h$. End of explanation taylorExpansionMinus Explanation: Meaning that: $$ f(c+h) = \frac{h^{2}}{2} \left. \frac{d^{2}}{d \xi_{1}^{2}} f{\left (\xi_{1} \right )} \right|{\substack{ \xi{1}=c }} + h \left. \frac{d}{d \xi_{1}} f{\left (\xi_{1} \right )} \right|{\substack{ \xi{1}=c }} + f{\left (c \right )} $$ End of explanation (taylorExpansionMinus-f(c-h))-(taylorExpansionPlus-f(c+h)) Explanation: Meaning that: $$ f(c-h) = \frac{h^{2}}{2} \left. \frac{d^{2}}{d \xi_{1}^{2}} f{\left (\xi_{1} \right )} \right|{\substack{ \xi{1}=c }} - h \left. \frac{d}{d \xi_{1}} f{\left (\xi_{1} \right )} \right|{\substack{ \xi{1}=c }} + f{\left (c \right )} $$ Solving for First and Second Derivatives Inspection of the results shows that the signs on the terms containing the first derivative are different between the two expressions. We can use this to our advantage in solving for the derivative terms explicitly. Note that each grouped expression is equal to zero as is the default in sympy. Find the first derivative in this expression: End of explanation (taylorExpansionMinus-f(c-h))+(taylorExpansionPlus-f(c+h)) Explanation: Remember that sympy expressions are zero by default. So this is true: $$ - 2 h \left. \frac{d}{d \xi_{1}} f{\left (\xi_{1} \right )} \right|{\substack{ \xi{1}=c }} - f{\left (c - h \right )} + f{\left (c + h \right )} = 0 $$ Find the second derivative in this expression: End of explanation
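To make the assignment concrete, here is one possible sketch of the graded task above (the helper name central_difference and the choice of 21 grid points are ours, not part of any library or of the assignment's reference answer): sample f(x) = x**8 on [0, 1], apply the central-difference formula just derived, and compare against the exact derivative 8*x**7. It is written in vectorized numpy form, in the spirit of the optional challenge.

import numpy as np

def central_difference(f_values, dx):
    # f'(x_i) ~ (f(x_{i+1}) - f(x_{i-1})) / (2 dx) at the interior points
    return (f_values[2:] - f_values[:-2]) / (2.0 * dx)

n_points = 21
x = np.linspace(0.0, 1.0, n_points)
dx = x[1] - x[0]
f = x ** 8
df_numeric = central_difference(f, dx)
df_exact = 8.0 * x[1:-1] ** 7
print('max interior error with %d points: %.3e'
      % (n_points, np.max(np.abs(df_numeric - df_exact))))

Doubling the number of grid points and repeating the comparison shows the error falling roughly by a factor of four, consistent with the second-order accuracy of the central difference.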
6,403
Given the following text description, write Python code to implement the functionality described below step by step Description: Find MEG reference channel artifacts Use ICA decompositions of MEG reference channels to remove intermittent noise. Many MEG systems have an array of reference channels which are used to detect external magnetic noise. However, standard techniques that use reference channels to remove noise from standard channels often fail when noise is intermittent. The technique described here (using ICA on the reference channels) often succeeds where the standard techniques do not. There are two algorithms to choose from Step1: Read raw data, cropping to 5 minutes to save memory Step2: Note that even though standard noise removal has already been applied to these data, much of the noise in the reference channels (bottom of the plot) can still be seen in the standard channels. Step3: The PSD of these data show the noise as clear peaks. Step4: Run the "together" algorithm. Step5: Cleaned data Step6: Now try the "separate" algorithm. Step7: Cleaned raw data traces Step8: Cleaned raw data PSD
Python Code: # Authors: Jeff Hanna <[email protected]> # # License: BSD-3-Clause import mne from mne import io from mne.datasets import refmeg_noise from mne.preprocessing import ICA import numpy as np print(__doc__) data_path = refmeg_noise.data_path() Explanation: Find MEG reference channel artifacts Use ICA decompositions of MEG reference channels to remove intermittent noise. Many MEG systems have an array of reference channels which are used to detect external magnetic noise. However, standard techniques that use reference channels to remove noise from standard channels often fail when noise is intermittent. The technique described here (using ICA on the reference channels) often succeeds where the standard techniques do not. There are two algorithms to choose from: separate and together (default). In the "separate" algorithm, two ICA decompositions are made: one on the reference channels, and one on reference + standard channels. The reference + standard channel components which correlate with the reference channel components are removed. In the "together" algorithm, a single ICA decomposition is made on reference + standard channels, and those components whose weights are particularly heavy on the reference channels are removed. This technique is fully described and validated in :footcite:HannaEtAl2020 End of explanation raw_fname = data_path / 'sample_reference_MEG_noise-raw.fif' raw = io.read_raw_fif(raw_fname).crop(300, 600).load_data() Explanation: Read raw data, cropping to 5 minutes to save memory End of explanation select_picks = np.concatenate( (mne.pick_types(raw.info, meg=True)[-32:], mne.pick_types(raw.info, meg=False, ref_meg=True))) plot_kwargs = dict( duration=100, order=select_picks, n_channels=len(select_picks), scalings={"mag": 8e-13, "ref_meg": 2e-11}) raw.plot(**plot_kwargs) Explanation: Note that even though standard noise removal has already been applied to these data, much of the noise in the reference channels (bottom of the plot) can still be seen in the standard channels. End of explanation raw.plot_psd(fmax=30) Explanation: The PSD of these data show the noise as clear peaks. End of explanation raw_tog = raw.copy() ica_kwargs = dict( method='picard', fit_params=dict(tol=1e-4), # use a high tol here for speed ) all_picks = mne.pick_types(raw_tog.info, meg=True, ref_meg=True) ica_tog = ICA(n_components=60, max_iter='auto', allow_ref_meg=True, **ica_kwargs) ica_tog.fit(raw_tog, picks=all_picks) # low threshold (2.0) here because of cropped data, entire recording can use # a higher threshold (2.5) bad_comps, scores = ica_tog.find_bads_ref(raw_tog, threshold=2.0) # Plot scores with bad components marked. ica_tog.plot_scores(scores, bad_comps) # Examine the properties of removed components. It's clear from the time # courses and topographies that these components represent external, # intermittent noise. ica_tog.plot_properties(raw_tog, picks=bad_comps) # Remove the components. raw_tog = ica_tog.apply(raw_tog, exclude=bad_comps) Explanation: Run the "together" algorithm. End of explanation raw_tog.plot_psd(fmax=30) Explanation: Cleaned data: End of explanation raw_sep = raw.copy() # Do ICA only on the reference channels. ref_picks = mne.pick_types(raw_sep.info, meg=False, ref_meg=True) ica_ref = ICA(n_components=2, max_iter='auto', allow_ref_meg=True, **ica_kwargs) ica_ref.fit(raw_sep, picks=ref_picks) # Do ICA on both reference and standard channels. Here, we can just reuse # ica_tog from the section above. 
ica_sep = ica_tog.copy() # Extract the time courses of these components and add them as channels # to the raw data. Think of them the same way as EOG/EKG channels, but instead # of giving info about eye movements/cardiac activity, they give info about # external magnetic noise. ref_comps = ica_ref.get_sources(raw_sep) for c in ref_comps.ch_names: # they need to have REF_ prefix to be recognised ref_comps.rename_channels({c: "REF_" + c}) raw_sep.add_channels([ref_comps]) # Now that we have our noise channels, we run the separate algorithm. bad_comps, scores = ica_sep.find_bads_ref(raw_sep, method="separate") # Plot scores with bad components marked. ica_sep.plot_scores(scores, bad_comps) # Examine the properties of removed components. ica_sep.plot_properties(raw_sep, picks=bad_comps) # Remove the components. raw_sep = ica_sep.apply(raw_sep, exclude=bad_comps) Explanation: Now try the "separate" algorithm. End of explanation raw_sep.plot(**plot_kwargs) Explanation: Cleaned raw data traces: End of explanation raw_sep.plot_psd(fmax=30) Explanation: Cleaned raw data PSD: End of explanation
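As a closing check (an addition to the original example, not part of it), one rough way to compare the two approaches is to look at how much signal variance remains in the standard MEG channels of each cleaned recording. mne.pick_types and Raw.get_data are standard MNE calls, and raw, raw_tog and raw_sep come from the cells above.

for title, this_raw in [('original', raw),
                        ('"together" cleaned', raw_tog),
                        ('"separate" cleaned', raw_sep)]:
    picks = mne.pick_types(this_raw.info, meg=True, ref_meg=False)
    data = this_raw.get_data(picks=picks)
    print('%s: mean channel variance %.3e' % (title, data.var(axis=1).mean()))
    this_raw.plot_psd(fmax=30)  # same PSD view used earlier, for a visual check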
6,404
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Ocean MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Seawater Properties 3. Key Properties --&gt; Bathymetry 4. Key Properties --&gt; Nonoceanic Waters 5. Key Properties --&gt; Software Properties 6. Key Properties --&gt; Resolution 7. Key Properties --&gt; Tuning Applied 8. Key Properties --&gt; Conservation 9. Grid 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Discretisation --&gt; Horizontal 12. Timestepping Framework 13. Timestepping Framework --&gt; Tracers 14. Timestepping Framework --&gt; Baroclinic Dynamics 15. Timestepping Framework --&gt; Barotropic 16. Timestepping Framework --&gt; Vertical Physics 17. Advection 18. Advection --&gt; Momentum 19. Advection --&gt; Lateral Tracers 20. Advection --&gt; Vertical Tracers 21. Lateral Physics 22. Lateral Physics --&gt; Momentum --&gt; Operator 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff 24. Lateral Physics --&gt; Tracers 25. Lateral Physics --&gt; Tracers --&gt; Operator 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity 28. Vertical Physics 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum 32. Vertical Physics --&gt; Interior Mixing --&gt; Details 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum 35. Uplow Boundaries --&gt; Free Surface 36. Uplow Boundaries --&gt; Bottom Boundary Layer 37. Boundary Forcing 38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing 1. Key Properties Ocean key properties 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Model Family Is Required Step7: 1.4. Basic Approximations Is Required Step8: 1.5. Prognostic Variables Is Required Step9: 2. Key Properties --&gt; Seawater Properties Physical properties of seawater in ocean 2.1. Eos Type Is Required Step10: 2.2. Eos Functional Temp Is Required Step11: 2.3. Eos Functional Salt Is Required Step12: 2.4. Eos Functional Depth Is Required Step13: 2.5. Ocean Freezing Point Is Required Step14: 2.6. Ocean Specific Heat Is Required Step15: 2.7. Ocean Reference Density Is Required Step16: 3. Key Properties --&gt; Bathymetry Properties of bathymetry in ocean 3.1. Reference Dates Is Required Step17: 3.2. Type Is Required Step18: 3.3. Ocean Smoothing Is Required Step19: 3.4. Source Is Required Step20: 4. Key Properties --&gt; Nonoceanic Waters Non oceanic waters treatement in ocean 4.1. Isolated Seas Is Required Step21: 4.2. River Mouth Is Required Step22: 5. Key Properties --&gt; Software Properties Software properties of ocean code 5.1. Repository Is Required Step23: 5.2. Code Version Is Required Step24: 5.3. Code Languages Is Required Step25: 6. Key Properties --&gt; Resolution Resolution in the ocean grid 6.1. Name Is Required Step26: 6.2. 
Canonical Horizontal Resolution Is Required Step27: 6.3. Range Horizontal Resolution Is Required Step28: 6.4. Number Of Horizontal Gridpoints Is Required Step29: 6.5. Number Of Vertical Levels Is Required Step30: 6.6. Is Adaptive Grid Is Required Step31: 6.7. Thickness Level 1 Is Required Step32: 7. Key Properties --&gt; Tuning Applied Tuning methodology for ocean component 7.1. Description Is Required Step33: 7.2. Global Mean Metrics Used Is Required Step34: 7.3. Regional Metrics Used Is Required Step35: 7.4. Trend Metrics Used Is Required Step36: 8. Key Properties --&gt; Conservation Conservation in the ocean component 8.1. Description Is Required Step37: 8.2. Scheme Is Required Step38: 8.3. Consistency Properties Is Required Step39: 8.4. Corrected Conserved Prognostic Variables Is Required Step40: 8.5. Was Flux Correction Used Is Required Step41: 9. Grid Ocean grid 9.1. Overview Is Required Step42: 10. Grid --&gt; Discretisation --&gt; Vertical Properties of vertical discretisation in ocean 10.1. Coordinates Is Required Step43: 10.2. Partial Steps Is Required Step44: 11. Grid --&gt; Discretisation --&gt; Horizontal Type of horizontal discretisation scheme in ocean 11.1. Type Is Required Step45: 11.2. Staggering Is Required Step46: 11.3. Scheme Is Required Step47: 12. Timestepping Framework Ocean Timestepping Framework 12.1. Overview Is Required Step48: 12.2. Diurnal Cycle Is Required Step49: 13. Timestepping Framework --&gt; Tracers Properties of tracers time stepping in ocean 13.1. Scheme Is Required Step50: 13.2. Time Step Is Required Step51: 14. Timestepping Framework --&gt; Baroclinic Dynamics Baroclinic dynamics in ocean 14.1. Type Is Required Step52: 14.2. Scheme Is Required Step53: 14.3. Time Step Is Required Step54: 15. Timestepping Framework --&gt; Barotropic Barotropic time stepping in ocean 15.1. Splitting Is Required Step55: 15.2. Time Step Is Required Step56: 16. Timestepping Framework --&gt; Vertical Physics Vertical physics time stepping in ocean 16.1. Method Is Required Step57: 17. Advection Ocean advection 17.1. Overview Is Required Step58: 18. Advection --&gt; Momentum Properties of lateral momemtum advection scheme in ocean 18.1. Type Is Required Step59: 18.2. Scheme Name Is Required Step60: 18.3. ALE Is Required Step61: 19. Advection --&gt; Lateral Tracers Properties of lateral tracer advection scheme in ocean 19.1. Order Is Required Step62: 19.2. Flux Limiter Is Required Step63: 19.3. Effective Order Is Required Step64: 19.4. Name Is Required Step65: 19.5. Passive Tracers Is Required Step66: 19.6. Passive Tracers Advection Is Required Step67: 20. Advection --&gt; Vertical Tracers Properties of vertical tracer advection scheme in ocean 20.1. Name Is Required Step68: 20.2. Flux Limiter Is Required Step69: 21. Lateral Physics Ocean lateral physics 21.1. Overview Is Required Step70: 21.2. Scheme Is Required Step71: 22. Lateral Physics --&gt; Momentum --&gt; Operator Properties of lateral physics operator for momentum in ocean 22.1. Direction Is Required Step72: 22.2. Order Is Required Step73: 22.3. Discretisation Is Required Step74: 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean 23.1. Type Is Required Step75: 23.2. Constant Coefficient Is Required Step76: 23.3. Variable Coefficient Is Required Step77: 23.4. Coeff Background Is Required Step78: 23.5. Coeff Backscatter Is Required Step79: 24. 
Lateral Physics --&gt; Tracers Properties of lateral physics for tracers in ocean 24.1. Mesoscale Closure Is Required Step80: 24.2. Submesoscale Mixing Is Required Step81: 25. Lateral Physics --&gt; Tracers --&gt; Operator Properties of lateral physics operator for tracers in ocean 25.1. Direction Is Required Step82: 25.2. Order Is Required Step83: 25.3. Discretisation Is Required Step84: 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean 26.1. Type Is Required Step85: 26.2. Constant Coefficient Is Required Step86: 26.3. Variable Coefficient Is Required Step87: 26.4. Coeff Background Is Required Step88: 26.5. Coeff Backscatter Is Required Step89: 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean 27.1. Type Is Required Step90: 27.2. Constant Val Is Required Step91: 27.3. Flux Type Is Required Step92: 27.4. Added Diffusivity Is Required Step93: 28. Vertical Physics Ocean Vertical Physics 28.1. Overview Is Required Step94: 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details Properties of vertical physics in ocean 29.1. Langmuir Cells Mixing Is Required Step95: 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers *Properties of boundary layer (BL) mixing on tracers in the ocean * 30.1. Type Is Required Step96: 30.2. Closure Order Is Required Step97: 30.3. Constant Is Required Step98: 30.4. Background Is Required Step99: 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum *Properties of boundary layer (BL) mixing on momentum in the ocean * 31.1. Type Is Required Step100: 31.2. Closure Order Is Required Step101: 31.3. Constant Is Required Step102: 31.4. Background Is Required Step103: 32. Vertical Physics --&gt; Interior Mixing --&gt; Details *Properties of interior mixing in the ocean * 32.1. Convection Type Is Required Step104: 32.2. Tide Induced Mixing Is Required Step105: 32.3. Double Diffusion Is Required Step106: 32.4. Shear Mixing Is Required Step107: 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers *Properties of interior mixing on tracers in the ocean * 33.1. Type Is Required Step108: 33.2. Constant Is Required Step109: 33.3. Profile Is Required Step110: 33.4. Background Is Required Step111: 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum *Properties of interior mixing on momentum in the ocean * 34.1. Type Is Required Step112: 34.2. Constant Is Required Step113: 34.3. Profile Is Required Step114: 34.4. Background Is Required Step115: 35. Uplow Boundaries --&gt; Free Surface Properties of free surface in ocean 35.1. Overview Is Required Step116: 35.2. Scheme Is Required Step117: 35.3. Embeded Seaice Is Required Step118: 36. Uplow Boundaries --&gt; Bottom Boundary Layer Properties of bottom boundary layer in ocean 36.1. Overview Is Required Step119: 36.2. Type Of Bbl Is Required Step120: 36.3. Lateral Mixing Coef Is Required Step121: 36.4. Sill Overflow Is Required Step122: 37. Boundary Forcing Ocean boundary forcing 37.1. Overview Is Required Step123: 37.2. Surface Pressure Is Required Step124: 37.3. Momentum Flux Correction Is Required Step125: 37.4. Tracers Flux Correction Is Required Step126: 37.5. Wave Effects Is Required Step127: 37.6. River Runoff Budget Is Required Step128: 37.7. Geothermal Heating Is Required Step129: 38. 
Boundary Forcing --&gt; Momentum --&gt; Bottom Friction Properties of momentum bottom friction in ocean 38.1. Type Is Required Step130: 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction Properties of momentum lateral friction in ocean 39.1. Type Is Required Step131: 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration Properties of sunlight penetration scheme in ocean 40.1. Scheme Is Required Step132: 40.2. Ocean Colour Is Required Step133: 40.3. Extinction Depth Is Required Step134: 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing Properties of surface fresh water forcing in ocean 41.1. From Atmopshere Is Required Step135: 41.2. From Sea Ice Is Required Step136: 41.3. Forced Mode Restoring Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'mpi-m', 'icon-esm-lr', 'ocean') Explanation: ES-DOC CMIP6 Model Properties - Ocean MIP Era: CMIP6 Institute: MPI-M Source ID: ICON-ESM-LR Topic: Ocean Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing. Properties: 133 (101 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:17 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Seawater Properties 3. Key Properties --&gt; Bathymetry 4. Key Properties --&gt; Nonoceanic Waters 5. Key Properties --&gt; Software Properties 6. Key Properties --&gt; Resolution 7. Key Properties --&gt; Tuning Applied 8. Key Properties --&gt; Conservation 9. Grid 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Discretisation --&gt; Horizontal 12. Timestepping Framework 13. Timestepping Framework --&gt; Tracers 14. Timestepping Framework --&gt; Baroclinic Dynamics 15. Timestepping Framework --&gt; Barotropic 16. Timestepping Framework --&gt; Vertical Physics 17. Advection 18. Advection --&gt; Momentum 19. Advection --&gt; Lateral Tracers 20. Advection --&gt; Vertical Tracers 21. Lateral Physics 22. Lateral Physics --&gt; Momentum --&gt; Operator 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff 24. Lateral Physics --&gt; Tracers 25. Lateral Physics --&gt; Tracers --&gt; Operator 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity 28. Vertical Physics 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum 32. Vertical Physics --&gt; Interior Mixing --&gt; Details 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum 35. Uplow Boundaries --&gt; Free Surface 36. Uplow Boundaries --&gt; Bottom Boundary Layer 37. Boundary Forcing 38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing 1. Key Properties Ocean key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of ocean model. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean model code (NEMO 3.6, MOM 5.0,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.model_family') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OGCM" # "slab ocean" # "mixed layer ocean" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.3. Model Family Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of ocean model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.basic_approximations') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Primitive equations" # "Non-hydrostatic" # "Boussinesq" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.4. Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Basic approximations made in the ocean. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Potential temperature" # "Conservative temperature" # "Salinity" # "U-velocity" # "V-velocity" # "W-velocity" # "SSH" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.5. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of prognostic variables in the ocean component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear" # "Wright, 1997" # "Mc Dougall et al." # "Jackett et al. 2006" # "TEOS 2010" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Seawater Properties Physical properties of seawater in ocean 2.1. Eos Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EOS for sea water End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Potential temperature" # "Conservative temperature" # TODO - please enter value(s) Explanation: 2.2. Eos Functional Temp Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Temperature used in EOS for sea water End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Practical salinity Sp" # "Absolute salinity Sa" # TODO - please enter value(s) Explanation: 2.3. Eos Functional Salt Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Salinity used in EOS for sea water End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pressure (dbars)" # "Depth (meters)" # TODO - please enter value(s) Explanation: 2.4. Eos Functional Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Depth or pressure used in EOS for sea water ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "TEOS 2010" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 2.5. Ocean Freezing Point Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 2.6. Ocean Specific Heat Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specific heat in ocean (cpocean) in J/(kg K) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 2.7. Ocean Reference Density Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Boussinesq reference density (rhozero) in kg / m3 End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Present day" # "21000 years BP" # "6000 years BP" # "LGM" # "Pliocene" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Bathymetry Properties of bathymetry in ocean 3.1. Reference Dates Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Reference date of bathymetry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.type') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 3.2. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the bathymetry fixed in time in the ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.3. Ocean Smoothing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe any smoothing or hand editing of bathymetry in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.source') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.4. Source Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe source of bathymetry in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Nonoceanic Waters Non oceanic waters treatement in ocean 4.1. Isolated Seas Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how isolated seas is performed End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.2. River Mouth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how river mouth mixing or estuaries specific treatment is performed End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Key Properties --&gt; Software Properties Software properties of ocean code 5.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Key Properties --&gt; Resolution Resolution in the ocean grid 6.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.2. Canonical Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.3. Range Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 6.4. Number Of Horizontal Gridpoints Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 6.5. Number Of Vertical Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of vertical levels resolved on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.6. Is Adaptive Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Default is False. Set true if grid resolution changes during execution. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 6.7. Thickness Level 1 Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Thickness of first surface ocean level (in meters) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Key Properties --&gt; Tuning Applied Tuning methodology for ocean component 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics of the global mean state used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Key Properties --&gt; Conservation Conservation in the ocean component 8.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Brief description of conservation methodology End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.scheme') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Energy" # "Enstrophy" # "Salt" # "Volume of ocean" # "Momentum" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 8.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Properties conserved in the ocean by the numerical schemes End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.3. Consistency Properties Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.4. Corrected Conserved Prognostic Variables Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Set of variables which are conserved by more than the numerical scheme alone. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 8.5. Was Flux Correction Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Does conservation involve flux correction ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Grid Ocean grid 9.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of grid in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Z-coordinate" # "Z*-coordinate" # "S-coordinate" # "Isopycnic - sigma 0" # "Isopycnic - sigma 2" # "Isopycnic - sigma 4" # "Isopycnic - other" # "Hybrid / Z+S" # "Hybrid / Z+isopycnic" # "Hybrid / other" # "Pressure referenced (P)" # "P*" # "Z**" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10. 
Grid --&gt; Discretisation --&gt; Vertical Properties of vertical discretisation in ocean 10.1. Coordinates Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of vertical coordinates in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 10.2. Partial Steps Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Using partial steps with Z or Z vertical coordinate in ocean ?* End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Lat-lon" # "Rotated north pole" # "Two north poles (ORCA-style)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11. Grid --&gt; Discretisation --&gt; Horizontal Type of horizontal discretisation scheme in ocean 11.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal grid type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Arakawa B-grid" # "Arakawa C-grid" # "Arakawa E-grid" # "N/a" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.2. Staggering Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal grid staggering type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Finite difference" # "Finite volumes" # "Finite elements" # "Unstructured grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.3. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12. Timestepping Framework Ocean Timestepping Framework 12.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of time stepping in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Via coupling" # "Specific treatment" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12.2. Diurnal Cycle Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Diurnal cycle type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Leap-frog + Asselin filter" # "Leap-frog + Periodic Euler" # "Predictor-corrector" # "Runge-Kutta 2" # "AM3-LF" # "Forward-backward" # "Forward operator" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13. Timestepping Framework --&gt; Tracers Properties of tracers time stepping in ocean 13.1. 
Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracers time stepping scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracers time step (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Preconditioned conjugate gradient" # "Sub cyling" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14. Timestepping Framework --&gt; Baroclinic Dynamics Baroclinic dynamics in ocean 14.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Baroclinic dynamics type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Leap-frog + Asselin filter" # "Leap-frog + Periodic Euler" # "Predictor-corrector" # "Runge-Kutta 2" # "AM3-LF" # "Forward-backward" # "Forward operator" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Baroclinic dynamics scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 14.3. Time Step Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Baroclinic time step (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "split explicit" # "implicit" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15. Timestepping Framework --&gt; Barotropic Barotropic time stepping in ocean 15.1. Splitting Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time splitting method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 15.2. Time Step Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Barotropic time step (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 16. Timestepping Framework --&gt; Vertical Physics Vertical physics time stepping in ocean 16.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Details of vertical time stepping in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17. Advection Ocean advection 17.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of advection in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Flux form" # "Vector form" # TODO - please enter value(s) Explanation: 18. Advection --&gt; Momentum Properties of lateral momemtum advection scheme in ocean 18.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of lateral momemtum advection scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 18.2. Scheme Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean momemtum advection scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.ALE') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 18.3. ALE Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Using ALE for vertical advection ? (if vertical coordinates are sigma) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 19. Advection --&gt; Lateral Tracers Properties of lateral tracer advection scheme in ocean 19.1. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral tracer advection scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 19.2. Flux Limiter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Monotonic flux limiter for lateral tracer advection scheme in ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 19.3. Effective Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Effective order of limited lateral tracer advection scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.4. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Ideal age" # "CFC 11" # "CFC 12" # "SF6" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 19.5. 
Passive Tracers Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Passive tracers advected End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.6. Passive Tracers Advection Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Is advection of passive tracers different than active ? if so, describe. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.vertical_tracers.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 20. Advection --&gt; Vertical Tracers Properties of vertical tracer advection scheme in ocean 20.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 20.2. Flux Limiter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Monotonic flux limiter for vertical tracer advection scheme in ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 21. Lateral Physics Ocean lateral physics 21.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of lateral physics in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Eddy active" # "Eddy admitting" # TODO - please enter value(s) Explanation: 21.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of transient eddy representation in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Horizontal" # "Isopycnal" # "Isoneutral" # "Geopotential" # "Iso-level" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22. Lateral Physics --&gt; Momentum --&gt; Operator Properties of lateral physics operator for momentum in ocean 22.1. Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Direction of lateral physics momemtum scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Harmonic" # "Bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22.2. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral physics momemtum scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Second order" # "Higher order" # "Flux limiter" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22.3. Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Discretisation of lateral physics momemtum scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Space varying" # "Time + space varying (Smagorinsky)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean 23.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Lateral physics momemtum eddy viscosity coeff type in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 23.2. Constant Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant, value of eddy viscosity coeff in lateral physics momemtum scheme (in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 23.3. Variable Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If space-varying, describe variations of eddy viscosity coeff in lateral physics momemtum scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 23.4. Coeff Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe background eddy viscosity coeff in lateral physics momemtum scheme (give values in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 23.5. Coeff Backscatter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there backscatter in eddy viscosity coeff in lateral physics momemtum scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 24. Lateral Physics --&gt; Tracers Properties of lateral physics for tracers in ocean 24.1. Mesoscale Closure Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there a mesoscale closure in the lateral physics tracers scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 24.2. Submesoscale Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Horizontal" # "Isopycnal" # "Isoneutral" # "Geopotential" # "Iso-level" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25. Lateral Physics --&gt; Tracers --&gt; Operator Properties of lateral physics operator for tracers in ocean 25.1. Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Direction of lateral physics tracers scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Harmonic" # "Bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.2. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral physics tracers scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Second order" # "Higher order" # "Flux limiter" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.3. Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Discretisation of lateral physics tracers scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Space varying" # "Time + space varying (Smagorinsky)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean 26.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Lateral physics tracers eddy diffusity coeff type in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 26.2. Constant Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.3. 
Variable Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 26.4. Coeff Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 26.5. Coeff Backscatter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "GM" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean 27.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV in lateral physics tracers in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 27.2. Constant Val Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If EIV scheme for tracers is constant, specify coefficient value (M2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.3. Flux Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV flux (advective or skew) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.4. Added Diffusivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV added diffusivity (constant, flow dependent or none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28. Vertical Physics Ocean Vertical Physics 28.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of vertical physics in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details Properties of vertical physics in ocean 29.1. Langmuir Cells Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there Langmuir cells mixing in upper ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure - TKE" # "Turbulent closure - KPP" # "Turbulent closure - Mellor-Yamada" # "Turbulent closure - Bulk Mixed Layer" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers *Properties of boundary layer (BL) mixing on tracers in the ocean * 30.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of boundary layer mixing for tracers in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.2. Closure Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.3. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant BL mixing of tracers, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background BL mixing of tracers coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure - TKE" # "Turbulent closure - KPP" # "Turbulent closure - Mellor-Yamada" # "Turbulent closure - Bulk Mixed Layer" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum *Properties of boundary layer (BL) mixing on momentum in the ocean * 31.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of boundary layer mixing for momentum in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 31.2. Closure Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 31.3. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant BL mixing of momentum, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 31.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background BL mixing of momentum coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Non-penetrative convective adjustment" # "Enhanced vertical diffusion" # "Included in turbulence closure" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 32. Vertical Physics --&gt; Interior Mixing --&gt; Details *Properties of interior mixing in the ocean * 32.1. Convection Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of vertical convection in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 32.2. Tide Induced Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how tide induced mixing is modelled (barotropic, baroclinic, none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 32.3. Double Diffusion Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there double diffusion End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 32.4. Shear Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there interior shear mixing End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure / TKE" # "Turbulent closure - Mellor-Yamada" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers *Properties of interior mixing on tracers in the ocean * 33.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of interior mixing for tracers in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 33.2. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant interior mixing of tracers, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 33.3. Profile Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 33.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background interior mixing of tracers coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure / TKE" # "Turbulent closure - Mellor-Yamada" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum *Properties of interior mixing on momentum in the ocean * 34.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of interior mixing for momentum in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 34.2. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant interior mixing of momentum, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 34.3. Profile Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ? 
End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 34.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background interior mixing of momentum coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 35. Uplow Boundaries --&gt; Free Surface Properties of free surface in ocean 35.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of free surface in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear implicit" # "Linear filtered" # "Linear semi-explicit" # "Non-linear implicit" # "Non-linear filtered" # "Non-linear semi-explicit" # "Fully explicit" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 35.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Free surface scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 35.3. Embeded Seaice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the sea-ice embeded in the ocean model (instead of levitating) ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 36. Uplow Boundaries --&gt; Bottom Boundary Layer Properties of bottom boundary layer in ocean 36.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of bottom boundary layer in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Diffusive" # "Acvective" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 36.2. Type Of Bbl Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of bottom boundary layer in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 36.3. Lateral Mixing Coef Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 36.4. 
Sill Overflow Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe any specific treatment of sill overflows End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37. Boundary Forcing Ocean boundary forcing 37.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of boundary forcing in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.2. Surface Pressure Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.3. Momentum Flux Correction Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.4. Tracers Flux Correction Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.5. Wave Effects Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how wave effects are modelled at ocean surface. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.6. River Runoff Budget Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how river runoff from land surface is routed to ocean and any global adjustment done. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.7. Geothermal Heating Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how geothermal heating is present at ocean bottom. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear" # "Non-linear" # "Non-linear (drag function of speed of tides)" # "Constant drag coefficient" # "None" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 38. 
Boundary Forcing --&gt; Momentum --&gt; Bottom Friction Properties of momentum bottom friction in ocean 38.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of momentum bottom friction in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Free-slip" # "No-slip" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction Properties of momentum lateral friction in ocean 39.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of momentum lateral friction in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "1 extinction depth" # "2 extinction depth" # "3 extinction depth" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration Properties of sunlight penetration scheme in ocean 40.1. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of sunlight penetration scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 40.2. Ocean Colour Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the ocean sunlight penetration scheme ocean colour dependent ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 40.3. Extinction Depth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe and list extinctions depths for sunlight penetration scheme (if applicable). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Freshwater flux" # "Virtual salt flux" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing Properties of surface fresh water forcing in ocean 41.1. From Atmopshere Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface fresh water forcing from atmos in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Freshwater flux" # "Virtual salt flux" # "Real salt flux" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 41.2. From Sea Ice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface fresh water forcing from sea-ice in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 41.3. Forced Mode Restoring Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface salinity restoring in forced mode (OMIP) End of explanation
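For context, the DOC.set_id / DOC.set_value pairs above are documentation templates left as TODOs. A hypothetical filled-in pair — assuming the notebook's DOC object was initialized in its preamble (not shown in this excerpt) and using only property ids and ENUM choices that appear in the template above — could read:

# Hypothetical example only: assumes the ES-DOC `DOC` object used above was
# already created in the notebook preamble (not shown in this excerpt).

# 37.1 Overview -- free-text STRING property (placeholder wording, not a real model)
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
DOC.set_value("Placeholder: momentum, heat and freshwater fluxes applied at "
              "the ocean surface every coupling step; no flux corrections.")

# 38.1 Bottom friction type -- ENUM property; the value must be one of the
# valid choices listed in the template above (here "Non-linear").
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
DOC.set_value("Non-linear")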
6,405
Given the following text description, write Python code to implement the functionality described below step by step Description: Executed Step1: Load Data Multispot Load the leakage coefficient from disk (computed in Multi-spot 5-Samples analyis - Leakage coefficient fit) Step2: Load the direct excitation coefficient ($d_{dirT}$) from disk (computed in usALEX - Corrections - Direct excitation physical parameter) Step3: Multispot PR for FRET population Step4: usALEX Corrected $E$ from μs-ALEX data Step5: Multi-spot gamma fitting Step6: Plot FRET vs distance
Python Code: from fretbursts import fretmath import pandas as pd import numpy as np import matplotlib.pyplot as plt import matplotlib as mpl from cycler import cycler import seaborn as sns %matplotlib inline %config InlineBackend.figure_format='retina' # for hi-dpi displays import matplotlib as mpl from cycler import cycler bmap = sns.color_palette("Set1", 9) colors = np.array(bmap)[(1,0,2,3,4,8,6,7), :] mpl.rcParams['axes.prop_cycle'] = cycler('color', colors) colors_labels = ['blue', 'red', 'green', 'violet', 'orange', 'gray', 'brown', 'pink', ] for c, cl in zip(colors, colors_labels): locals()[cl] = tuple(c) # assign variables with color names sns.palplot(colors) sns.set_style('whitegrid') Explanation: Executed: Mon Mar 27 11:48:57 2017 Duration: 5 seconds. Multi-spot Gamma Fitting End of explanation leakage_coeff_fname = 'results/Multi-spot - leakage coefficient KDE wmean DexDem.csv' leakageM = float(np.loadtxt(leakage_coeff_fname, ndmin=1)) print('Multispot Leakage Coefficient:', leakageM) Explanation: Load Data Multispot Load the leakage coefficient from disk (computed in Multi-spot 5-Samples analyis - Leakage coefficient fit): End of explanation dir_ex_coeff_fname = 'results/usALEX - direct excitation coefficient dir_ex_t beta.csv' dir_ex_t = float(np.loadtxt(dir_ex_coeff_fname, ndmin=1)) print('Direct excitation coefficient (dir_ex_t):', dir_ex_t) Explanation: Load the direct excitation coefficient ($d_{dirT}$) from disk (computed in usALEX - Corrections - Direct excitation physical parameter): End of explanation mspot_filename = 'results/Multi-spot - dsDNA - PR - all_samples all_ch.csv' E_pr_fret = pd.read_csv(mspot_filename, index_col=0) E_pr_fret Explanation: Multispot PR for FRET population: End of explanation data_file = 'results/usALEX-5samples-E-corrected-all-ph.csv' data_alex = pd.read_csv(data_file).set_index('sample')#[['E_pr_fret_kde']] data_alex.round(6) E_alex = data_alex.E_gauss_w E_alex Explanation: usALEX Corrected $E$ from μs-ALEX data: End of explanation import lmfit def residuals(params, E_raw, E_ref): gamma = params['gamma'].value # NOTE: leakageM and dir_ex_t are globals return E_ref - fretmath.correct_E_gamma_leak_dir(E_raw, leakage=leakageM, gamma=gamma, dir_ex_t=dir_ex_t) params = lmfit.Parameters() params.add('gamma', value=0.5) E_pr_fret_mean = E_pr_fret.mean(1) E_pr_fret_mean m = lmfit.minimize(residuals, params, args=(E_pr_fret_mean, E_alex)) lmfit.report_fit(m.params, show_correl=False) E_alex['12d'], E_pr_fret_mean['12d'] m = lmfit.minimize(residuals, params, args=(np.array([E_pr_fret_mean['12d']]), np.array([E_alex['12d']]))) lmfit.report_fit(m.params, show_correl=False) print('Fitted gamma(multispot):', m.params['gamma'].value) multispot_gamma = m.params['gamma'].value multispot_gamma E_fret_mch = fretmath.correct_E_gamma_leak_dir(E_pr_fret, leakage=leakageM, dir_ex_t=dir_ex_t, gamma=multispot_gamma) E_fret_mch = E_fret_mch.round(6) E_fret_mch E_fret_mch.to_csv('results/Multi-spot - dsDNA - Corrected E - all_samples all_ch.csv') '%.5f' % multispot_gamma with open('results/Multi-spot - gamma factor.csv', 'wt') as f: f.write('%.5f' % multispot_gamma) norm = (E_fret_mch.T - E_fret_mch.mean(1))#/E_pr_fret.mean(1) norm_rel = (E_fret_mch.T - E_fret_mch.mean(1))/E_fret_mch.mean(1) norm.plot() norm_rel.plot() Explanation: Multi-spot gamma fitting End of explanation sns.set_style('whitegrid') CH = np.arange(8) CH_labels = ['CH%d' % i for i in CH] dist_s_bp = [7, 12, 17, 22, 27] fontsize = 16 fig, ax = plt.subplots(figsize=(8, 5)) ax.plot(dist_s_bp, 
E_fret_mch, '+', lw=2, mew=1.2, ms=10, zorder=4) ax.plot(dist_s_bp, E_alex, '-', lw=3, mew=0, alpha=0.5, color='k', zorder=3) plt.title('Multi-spot smFRET dsDNA, Gamma = %.2f' % multispot_gamma) plt.xlabel('Distance in base-pairs', fontsize=fontsize); plt.ylabel('E', fontsize=fontsize) plt.ylim(0, 1); plt.xlim(0, 30) plt.grid(True) plt.legend(['CH1','CH2','CH3','CH4','CH5','CH6','CH7','CH8', u'μsALEX'], fancybox=True, prop={'size':fontsize-1}, loc='best'); Explanation: Plot FRET vs distance End of explanation
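The gamma fit in this row leans entirely on one lmfit pattern (Parameters -> residuals -> minimize -> report_fit). A minimal self-contained sketch of that same pattern on synthetic numbers is below; the correction formula is a toy stand-in, not fretmath.correct_E_gamma_leak_dir, and the data are made up.

import numpy as np
import lmfit

# Synthetic stand-ins for the raw (multispot) and reference (usALEX) E values.
rng = np.random.default_rng(0)
e_ref = np.linspace(0.2, 0.9, 5)
true_gamma = 0.45
e_raw = e_ref / (true_gamma + (1 - true_gamma) * e_ref)   # toy forward model
e_raw = e_raw + rng.normal(0, 0.005, size=e_raw.size)      # small noise

def residuals(params, e_raw, e_ref):
    gamma = params['gamma'].value
    # Toy gamma correction (stand-in for fretmath.correct_E_gamma_leak_dir)
    e_corr = gamma * e_raw / (1 - (1 - gamma) * e_raw)
    return e_ref - e_corr

params = lmfit.Parameters()
params.add('gamma', value=0.5, min=0.01, max=2)

result = lmfit.minimize(residuals, params, args=(e_raw, e_ref))
lmfit.report_fit(result.params, show_correl=False)
print('fitted gamma:', result.params['gamma'].value)   # recovers ~0.45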
6,406
Given the following text description, write Python code to implement the functionality described below step by step Description: Triple Double Russ In this post, we use webscraping to analyze Russell Westbrook's (Russ) team performance when he records a triple double. We will use multiple Python modules and focus on BeautifulSoup, pandas, matplotlib, and seaborn Overview Background Data Collection Main Analysis Data Visualization Secondary Analysis Review Footnotes Background Russell Westbrook is a lightning rod for discussion in the NBA because of his style of play and clashes with popular players like his beef with former teammate Kevin Durant. Westbrook's high energy, domineering style of play is best characterized by the the statistic he is known for, the triple double. What is a triple double? Traditional NBA box scores collect statistics on 5 individual player categories Step1: Russell Westbrook's player data is available at the following link Step2: In order to create a dataset that includes Westbrook's individual and team performance, we need to scrape his yearly game log data. An example yearly game log is, https Step3: lists for aggregating yearly game logs and errors while webscraping Step4: Now we can pass the years_available_list and iterate through each year of game log data and extract player statistical information on the individual game basis with the function extract_player_game_logs. We can also get team results from the same function Note Step5: check if there were any errors while scraping game log web pages Step6: no errors found, we can concatenate our list of yearly game log dataframes, westbrook_career_game_logs_df_list into a single data frame with the concat function from the pandas module. we can double check to see if we collected the same amount of yearly game logs as years of data available with a boolean comparison Step7: let's take a look at the data set we created and the column names/data types Step8: Main Analysis now that we have all of Westbrook's individual and team statistics, we need to create a triple double metric compare win percentage across games where Westbrook does and does not generate a triple double our triple double metric, triple_double, is a binary indicator with a value of 1 when a triple double is achieved through one of 7 scenarios Step9: percentage of career games westbrook records a triple double and number of games he has recorded a triple double Step10: I calculated 193 triple doubles for Westbrook (17% of his career games) which matches the count found here on basketball-reference.com Step11: Westbrook is defined by his regular availability for his team (playing in 92% of his teams' games), he is so remarkably durable that he once had a 10 year streak of no missed games Step12: To complete our comparison, we need Westbrook's win percentage in 4 scenarios * Westbrook's career win percentage * Westbrook's active win percentage * Westbrook's triple double games win percentage * Westbrook's non-triple double games win percentage We can use the result_b column, a binary indicator of results where 0 indicates a loss and 1 indicates a win to calculate these win percentages across our different data frames. 
The mean for result_b is the percentage (after multiplying by 100) of games won under the scenario First, Westbrook's career win percentage for his teams regardless of his activity status Step13: Now scenario 2, win percentage when Russ is active Step14: We can see that Westbrook's teams win 5% more games when he is active than when he doesn't play, a good sign for an impactful palyer Now let's calculate Westbrook's team win percentage in games he has a triple double Step15: Lastly, let's calculate Westbrook's win percentage when he does not record a triple double Step16: We can add a z-test of proportions to compare the win percentages in different scenarios to see if they are stastically significant. The null hypothesis for this test is that the win percentages are equal in both scenarios (games with a triple double vs games without a triple double) Step17: The difference in win percentage for games with a triple double and without a triple double is statistically significant Data visualization Chart of triple doubles over time Where is Russ having triple doubles Step18: to find the frequency of triple doubles by location (home vs away), we need to create a year variable from the date variable group by three variables of interest and fill the observations without a triple double with 0 instead of NaN Step19: optional bar plot of triple doubles by game location Step20: line plot of triple doubles over time by location Step21: Secondary Analysis Let's look at two scenarios that add to the Russ/triple double debate. 1. Russ' play post-partnership with Kevin Durant 2. The margin of wins/losses when Russ records a triple double Triple Double Occurrence with and without Kevin Durant Since teammate Kevin Durant left in Free Agency before the 2016 season, Westbrook won his only MVP and started to define his play with triple doubles * How many triple doubles has Russ created when he was teammates with Kevin Durant in comparison to the years after they were no longer teammates? + Note Step22: count triple doubles with and without Kevin Durant as a teammate Step23: Game margin by Triple Double Occurrence Lastly, the game margin during triple double games can provide a naive comparison about how Russ' pursuit of a triple double affects his team. For instance, if Russ is selfishly pursuing triple doubles at the expense of team success, we could expect that the margin of loss is greater in loses where he records a triple double than in loses where a triple double is not achieved. Similarly, smaller win margins in games with a triple double than without a triple double could suggest a negative team effect when Russ creates a triple double. * what is the margin difference in games that are won/lost, but a triple double is recorded? 
* compare to margin for games won/lost without a triple double Step24: When Russ's teams lose, they (on average) lose by a smaller margin (-7.27) when he records a triple double than when he does not (-9.96) Unlike the margin in losses, the win margin is nearly identical in games Russ' teams win when Russ records a triple double (11.55) than when he does not (11.6) Review In all honesty, these results are surprising (to me) and speak well for Russ and his supporters Westbrook's teams win a far greater percentage of games when he has a triple double than when he doesn't 74% of games won when a triple double is recorded vs 55% of games won without a triple double Additionally, game margins suggests that when Westbrook creates a triple double, the team performance is not negatively impacted. When Westbrook's teams lose but he records a triple double are smaller than game margins when a triple double is not recorded in a loss. Furthermore, Russ' teams win at a similar margin when he does and does not record a triple double. Together, the game margin evidence suggests that Russ is not pursuing triple doubles in blowouts and not stat padding as suggested in this video Westbrook recently became a triple double machine, amassing 156 of his 193 triple doubles in the last 6+ seasons alone alone Westbrook's increased pursuit of the triple double occurs after the Oklahoma City Thunder lost Kevin Durant during Free Agency in the summer of 2016 For reference, LeBron James (4<sup>th</sup> all time in triple doubles) has recorded 103 total triple doubles in a 19-year NBA career Westbrook is 12<sup>th</sup> all time in assists while LeBron is 7<sup>th</sup> (source
Python Code: from utils import * import constants as c Explanation: Triple Double Russ In this post, we use webscraping to analyze Russell Westbrook's (Russ) team performance when he records a triple double. We will use multiple Python modules and focus on BeautifulSoup, pandas, matplotlib, and seaborn Overview Background Data Collection Main Analysis Data Visualization Secondary Analysis Review Footnotes Background Russell Westbrook is a lightning rod for discussion in the NBA because of his style of play and clashes with popular players like his beef with former teammate Kevin Durant. Westbrook's high energy, domineering style of play is best characterized by the the statistic he is known for, the triple double. What is a triple double? Traditional NBA box scores collect statistics on 5 individual player categories: points, rebounds, assists, steals, and blocks<sup>1</sup>. When a player has 10 or more instances of each category ("double") in 3 categories ("triple), the collective statisitcal feat is known as a triple double. Westbrook is the NBA leader in career triple doubles and recently broke Hall of Famer Oscar Robertson's 47 year-old record in the 2020-21 season. Robertson was the original "Mr. Triple Double" and some have used the moniker with Russ. What is controverisial about Russ and the triple double? While there are many supporters of Westbrook in print and video, including videos praising Russ for his ability to create triple doubles, there is also considerable pushback of Russ' pursuit of a triple double. Check the plays compiled in this video for instance. At the time of the compiled plays, Westbrook is 2 assists shy of a triple double and declines to take several open shots in order to pass the ball to a teammate in the hopes of getting an assist. Several plays are awkward with Westbrook putting teammates in bad positions and some plays ending in turnovers. For me, the most egregious example of a player selfishly pursuing a triple double is Ricky Davis in 2003. With seconds remaining in the game his team was winning by 25, Davis attempted a shot on his team's basket in order to secure what he hoped would be his 10th rebound and a triple double (Davis was not credited with the rebound or triple double). That Westbrook video is nothing near the cynicism of Davis, but the discussion prompted by Russ' triple double ability is: "Do Westbrook's teams win more games when he creates a triple double?" In order to investigate Russ' team performance when he creates a triple double we need to gather his individual statistics and team performance during his games. We can gather both sources of information from Basketball-Reference.com and create a function to tabulate triple doubles. Then we will compare the win percentage of Westbrook's teams when he does and does not have a triple double to assess if his pursuit of a triple doubles negatively impacts team performance. We will subset our analysis to only Westbrook's Regular Season games Data Collection End of explanation westbrook_basketball_reference_url = "https://www.basketball-reference.com/players/w/westbru01.html" webbrowser.open(westbrook_basketball_reference_url) Explanation: Russell Westbrook's player data is available at the following link: https://www.basketball-reference.com/players/w/westbru01.html. 
We can use the webbrowser module to open this url with the code below End of explanation player_metadata = get_player_metadata(westbrook_basketball_reference_url) years_available_list = player_metadata.get("years_available") total_years_of_data = len(years_available_list) player_metadata Explanation: In order to create a dataset that includes Westbrook's individual and team performance, we need to scrape his yearly game log data. An example yearly game log is, https://www.basketball-reference.com/players/w/westbru01/gamelog/2022, the game log for this current season. To accomplish this task, we need to * Find out how many years of game log data is available * Create an empty list that will contain all yearly game log dataframes * Create a list to store any errors that may come up while scraping yearly game log web pages I've written a helper function, get_player_metadata that we can use to scrape a player's main Basketball-Reference.com page. get_player_metadata returns a dictionary of the years available for the player (in the form of a list) and the player's name End of explanation westbrook_career_game_logs_df_list = [] errors_list = [] Explanation: lists for aggregating yearly game logs and errors while webscraping End of explanation # The url template that we pass in year info url_template = 'https://www.basketball-reference.com/players/w/westbru01/gamelog/{year}' # for each year of data avaialable, gather game log data for year in years_available_list: # Use try/except block to catch and inspect any urls that cause an error try: print(f'getting game log data from {year}') # get the formatted game log data url formatted_url = url_template.format(year=year) westbrook_yearly_game_logs = extract_player_game_logs(formatted_url) # append the current dataframe to the list of dataframes westbrook_career_game_logs_df_list.append(westbrook_yearly_game_logs) except Exception as e: # Store the url and the error it causes in a list error =[formatted_url, e] # then append it to the list of errors errors_list.append(error) Explanation: Now we can pass the years_available_list and iterate through each year of game log data and extract player statistical information on the individual game basis with the function extract_player_game_logs. We can also get team results from the same function Note: These functions collect the latest available data so you may have more games and slightly different win percentage statistics when running this notebook after 2022-03-06 End of explanation errors_list Explanation: check if there were any errors while scraping game log web pages End of explanation print(len(westbrook_career_game_logs_df_list)==len(years_available_list)) print(len(westbrook_career_game_logs_df_list)) westbrook_career_game_logs_df = pd.concat(westbrook_career_game_logs_df_list, axis=0) total_career_games = westbrook_career_game_logs_df.shape[0] print(f"collected data for {total_career_games} games for {player_metadata.get('player_name')}") Explanation: no errors found, we can concatenate our list of yearly game log dataframes, westbrook_career_game_logs_df_list into a single data frame with the concat function from the pandas module. 
we can double check to see if we collected the same amount of yearly game logs as years of data available with a boolean comparison End of explanation westbrook_career_game_logs_df.head() westbrook_career_game_logs_df.info() westbrook_career_game_logs_df.iloc[0] Explanation: let's take a look at the data set we created and the column names/data types End of explanation def triple_double(row): if row['points']>=10 and row['total_rebs']>=10 and row['assists']>=10: return 1 if row['points']>=10 and row['total_rebs']>=10 and row['blocks']>=10: return 1 if row['points']>=10 and row['total_rebs']>=10 and row['steals']>=10: return 1 if row['points']>=10 and row['assists']>=10 and row['steals']>=10: return 1 if row['points']>=10 and row['assists']>=10 and row['blocks']>=10: return 1 if row['total_rebs']>=10 and row['assists']>=10 and row['blocks']>=10: return 1 if row['total_rebs']>=10 and row['assists']>=10 and row['steals']>=10: return 1 westbrook_career_game_logs_df["triple_double"] = westbrook_career_game_logs_df.apply(triple_double, axis=1) westbrook_career_game_logs_df.loc[:, "triple_double"] = westbrook_career_game_logs_df.loc[:, "triple_double"].fillna(0) westbrook_career_game_logs_df["triple_double"].astype(int) Explanation: Main Analysis now that we have all of Westbrook's individual and team statistics, we need to create a triple double metric compare win percentage across games where Westbrook does and does not generate a triple double our triple double metric, triple_double, is a binary indicator with a value of 1 when a triple double is achieved through one of 7 scenarios: points, rebounds, and assists points, rebounds, and steals points, rebounds, and blocks points, assists, and steals points, assists, and blocks rebounds, assists, and blocks rebounds, assists, and steals End of explanation np.mean(westbrook_career_game_logs_df["triple_double"])*100 westbrook_career_game_logs_df.shape[0]*np.mean(westbrook_career_game_logs_df["triple_double"]) Explanation: percentage of career games westbrook records a triple double and number of games he has recorded a triple double End of explanation westbrook_active_games_pct = np.mean(westbrook_career_game_logs_df["active"])*100 print(f"Russell Westbrook has been active for {westbrook_active_games_pct:.2f}% of his teams' games") Explanation: I calculated 193 triple doubles for Westbrook (17% of his career games) which matches the count found here on basketball-reference.com: https://www.basketball-reference.com/leaders/trp_dbl_career.html Note these numbers are current as of 2022-03-06 now we can calculate win percentage for games where Westbrook did and did not have a triple double let's isolate only games westbrook was active for players can miss games for injury, personal reasons, suspension, and more separate triple double games from other games Win Percentage by Triple Double Occurrence Active games only End of explanation westbrook_active_games_df = westbrook_career_game_logs_df.loc[westbrook_career_game_logs_df["active"]==1] triple_double_games_df = westbrook_active_games_df.loc[westbrook_active_games_df["triple_double"]==1] non_triple_double_games_df = westbrook_active_games_df.loc[westbrook_active_games_df["triple_double"]!=1] westbrook_active_games_pct = np.mean(westbrook_career_game_logs_df["active"])*100 westbrook_career_games = westbrook_career_game_logs_df.shape[0] westbrook_active_games = westbrook_active_games_df.shape[0] triple_double_games = triple_double_games_df.shape[0] non_triple_double_games = 
non_triple_double_games_df.shape[0] active_triple_double_pct = np.mean(westbrook_active_games_df["triple_double"])*100 print(f"Russell Westbrook has been active for {westbrook_active_games_pct:.2f}% of his teams' games") print(f"He has played in {westbrook_active_games} of {westbrook_career_games} potential games") print(f"Westbrook has recorded a triple double in {triple_double_games} games, {active_triple_double_pct:.2f}% of his active games") print(f"Westbrook has {non_triple_double_games} games without a triple double") Explanation: Westbrook is defined by his regular availability for his team (playing in 92% of his teams' games), he is so remarkably durable that he once had a 10 year streak of no missed games End of explanation westbrook_career_win_pct = np.mean(westbrook_career_game_logs_df['result_b'])*100 print(f"Russell Westbrook's teams have won {westbrook_career_win_pct:.2f}% of their games") Explanation: To complete our comparison, we need Westbrook's win percentage in 4 scenarios * Westbrook's career win percentage * Westbrook's active win percentage * Westbrook's triple double games win percentage * Westbrook's non-triple double games win percentage We can use the result_b column, a binary indicator of results where 0 indicates a loss and 1 indicates a win to calculate these win percentages across our different data frames. The mean for result_b is the percentage (after multiplying by 100) of games won under the scenario First, Westbrook's career win percentage for his teams regardless of his activity status End of explanation westbrook_active_win_pct = np.mean(westbrook_active_games_df['result_b'])*100 print(f"Russell Westbrook's teams win {westbrook_active_win_pct:.2f}% of their games when he is active") Explanation: Now scenario 2, win percentage when Russ is active End of explanation triple_double_win_pct = np.mean(triple_double_games_df['result_b'])*100 print(f"Russell Westbrook's teams win {triple_double_win_pct:.2f}% of their games when he DOES record a triple double") Explanation: We can see that Westbrook's teams win 5% more games when he is active than when he doesn't play, a good sign for an impactful palyer Now let's calculate Westbrook's team win percentage in games he has a triple double End of explanation non_triple_double_win_pct = np.mean(non_triple_double_games_df['result_b'])*100 print(f"Russell Westbrook's teams win {non_triple_double_win_pct:.2f}% of their games when he DOES NOT record a triple double") import tabulate data = [["Career Win Pct", f"{westbrook_career_win_pct:.2f}%"], ["Active Win Pct", f"{westbrook_active_win_pct:.2f}%"], ["Triple Double Win Pct", f"{triple_double_win_pct:.2f}%"], ["Non-Triple Double Win Pct", f"{non_triple_double_win_pct:.2f}%"]] table = tabulate.tabulate(data, tablefmt='html') table Explanation: Lastly, let's calculate Westbrook's win percentage when he does not record a triple double End of explanation contingency_table = pd.crosstab(westbrook_active_games_df.result_b,westbrook_active_games_df.triple_double) #Contingency Table contingency_table z_stat, p_value = create_proportions_ztest(contingency_table) Explanation: We can add a z-test of proportions to compare the win percentages in different scenarios to see if they are stastically significant. 
The null hypothesis for this test is that the win percentages are equal in both scenarios (games with a triple double vs games without a triple double) End of explanation westbrook_active_games_df = copy.deepcopy(westbrook_active_games_df) westbrook_active_games_df["year"] = westbrook_active_games_df["season"].apply(lambda x: x[0:4]) westbrook_active_games_df["year"] = pd.DatetimeIndex(westbrook_active_games_df["year"]) yearly_triple_double_counts = westbrook_active_games_df.resample(rule='Y', on='year')['triple_double'].sum() yearly_triple_double_counts sns.set_style("white") sns.set_color_codes() # Create figure and plot space fig, ax = plt.subplots(figsize=(12, 12)) # Add x-axis and y-axis ax.bar(yearly_triple_double_counts.index.year, yearly_triple_double_counts.values, color='Blue') # Set title and labels for axes ax.set(xlabel="Season", ylabel="Games with a Triple Double", title= "Seasonal Counts of Triple Doubles by Russell Westbrook") # add '2008' to years_available_list so that we have the Seasons shown on x-axis years_available_list = ['2008'] + years_available_list ax.xaxis.set_ticks([int(year) for year in years_available_list]) # Call add values function add_value_labels(ax) plt.show() Explanation: The difference in win percentage for games with a triple double and without a triple double is statistically significant Data visualization Chart of triple doubles over time Where is Russ having triple doubles: home or away? Note these counts are across the NBA season and not the calendar year End of explanation # westbrook_active_games_df = copy.deepcopy(westbrook_active_games_df) # westbrook_active_games_df["year"] = pd.DatetimeIndex(westbrook_active_games_df["date"]).year triple_double_games_df = copy.deepcopy(triple_double_games_df) triple_double_games_df["year"] = triple_double_games_df["season"].apply(lambda x: x[0:4]) triple_double_games_df["year"] = pd.DatetimeIndex(triple_double_games_df["year"]).year triple_double_games_gb_df = triple_double_games_df.groupby(['triple_double', 'location', 'year']).size().unstack(level=2).fillna(0).T triple_double_games_gb_df Explanation: to find the frequency of triple doubles by location (home vs away), we need to create a year variable from the date variable group by three variables of interest and fill the observations without a triple double with 0 instead of NaN End of explanation # triple_double_games_gb_df.plot(kind = 'bar') Explanation: optional bar plot of triple doubles by game location End of explanation sns.set_style("white") sns.set_color_codes() fig, ax = plt.subplots(figsize=(16, 6)) fig.subplots_adjust(hspace=0.4) plot_title = 'Seasonal Counts of Triple Doubles by Russell Westbrook \n Home vs Away' triple_double_games_gb_df.iloc[:,].plot(ax = ax, title = plot_title, ylabel = "Games with a Triple Double", xlabel= "Season") plt.axvline(x=2015.5, color='k', linestyle='--') plt.xticks([int(year) for year in years_available_list]) plt.legend(["Away", "Home", "Kevin Durant Leaves OKC"], loc ="lower right") plt.show() Explanation: line plot of triple doubles over time by location End of explanation triple_double_games_gb_df2 = triple_double_games_gb_df[1].reset_index(level="year") triple_double_games_gb_df2.keys().name = '' triple_double_games_gb_df2.rename(columns={0:"Away", 1:"Home"}, inplace=True) triple_double_games_gb_df2 with_kd_mask = triple_double_games_gb_df2["year"]<2016 without_kd_mask = triple_double_games_gb_df2["year"]>=2016 triple_double_games_gb_df2.loc[without_kd_mask] # triple_double_games_gb_df2.loc[with_kd_mask] 
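A side note before the secondary-analysis section that follows: create_proportions_ztest used above is defined in the notebook's own utils module and is not shown in this excerpt. A rough stand-in built on statsmodels, taking the same pd.crosstab contingency table, might look like this:

import numpy as np
from statsmodels.stats.proportion import proportions_ztest

def create_proportions_ztest_sketch(contingency_table):
    # contingency_table: the pd.crosstab built above, with rows = result_b
    # (0 = loss, 1 = win) and columns = triple_double (0 = no, 1 = yes).
    table = np.asarray(contingency_table)
    wins = table[1, :]           # wins in each group (no triple double, triple double)
    totals = table.sum(axis=0)   # games played in each group
    z_stat, p_value = proportions_ztest(count=wins, nobs=totals)
    print(f"z = {z_stat:.3f}, p = {p_value:.5f}")
    return z_stat, p_value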
Explanation: Secondary Analysis Let's look at two scenarios that add to the Russ/triple double debate. 1. Russ' play post-partnership with Kevin Durant 2. The margin of wins/losses when Russ records a triple double Triple Double Occurrence with and without Kevin Durant Since teammate Kevin Durant left in Free Agency before the 2016 season, Westbrook won his only MVP and started to define his play with triple doubles * How many triple doubles has Russ created when he was teammates with Kevin Durant in comparison to the years after they were no longer teammates? + Note: Westbrook is an older player and more established in seasons post playing with Durant and it will not be surprising if he records more triple doubles as a tenured player than as a younger player End of explanation triple_doubles_without_kd = sum(triple_double_games_gb_df2.loc[without_kd_mask]["Away"])+ sum(triple_double_games_gb_df2.loc[without_kd_mask]["Home"]) triple_doubles_with_kd = sum(triple_double_games_gb_df2.loc[with_kd_mask]["Away"])+ sum(triple_double_games_gb_df2.loc[with_kd_mask]["Home"]) print(f"Westbrook recorded {triple_doubles_with_kd} triple doubles in {triple_double_games_gb_df2.loc[with_kd_mask].shape[0]} seasons with Kevin Durant as a teammate") print(f"Westbrook has created {triple_doubles_without_kd} triple doubles in {triple_double_games_gb_df2.loc[without_kd_mask].shape[0]}+ seasons without Kevin Durant as a teammate") Explanation: count triple doubles with and without Kevin Durant as a teammate End of explanation triple_double_win_margin = np.mean(triple_double_games_df.loc[triple_double_games_df["result_b"]==1]["margin"]) triple_double_loss_margin = np.mean(triple_double_games_df.loc[triple_double_games_df["result_b"]==0]["margin"]) non_triple_double_win_margin = np.mean(non_triple_double_games_df.loc[non_triple_double_games_df["result_b"]==1]["margin"]) non_triple_double_loss_margin = np.mean(non_triple_double_games_df.loc[non_triple_double_games_df["result_b"]==0]["margin"]) margin_data = [["Triple Double Win Margin", f"{triple_double_win_margin:.2f}"], ["Non-Triple Double Win Margin", f"{non_triple_double_win_margin:.2f}"], ["Triple Double Loss Margin", f"{triple_double_loss_margin:.2f}"], ["Non-Triple Double Loss Margin", f"{non_triple_double_loss_margin:.2f}"]] margin_table = tabulate.tabulate(margin_data, tablefmt='html') margin_table Explanation: Game margin by Triple Double Occurrence Lastly, the game margin during triple double games can provide a naive comparison about how Russ' pursuit of a triple double affects his team. For instance, if Russ is selfishly pursuing triple doubles at the expense of team success, we could expect that the margin of loss is greater in loses where he records a triple double than in loses where a triple double is not achieved. Similarly, smaller win margins in games with a triple double than without a triple double could suggest a negative team effect when Russ creates a triple double. * what is the margin difference in games that are won/lost, but a triple double is recorded? 
* compare to margin for games won/lost without a triple double End of explanation import sys import IPython import matplotlib as mpl from datetime import datetime print('originally published 2022-03-06 11:43') print(f'last updated: {datetime.now().strftime("%Y-%m-%d %H:%M")} \n') print(f'Python version: {sys.version_info}') print(f'matplotlib version: {mpl.__version__}') print(f'iPython version: {IPython.__version__}') print(f'urllib version: {urllib.request.__version__}') print(f'seaborn version: {sns.__version__}') print(f'pandas version: {pd.__version__}') Explanation: When Russ's teams lose, they (on average) lose by a smaller margin (-7.27) when he records a triple double than when he does not (-9.96) Unlike the margin in losses, the win margin is nearly identical in games Russ' teams win when Russ records a triple double (11.55) than when he does not (11.6) Review In all honesty, these results are surprising (to me) and speak well for Russ and his supporters Westbrook's teams win a far greater percentage of games when he has a triple double than when he doesn't 74% of games won when a triple double is recorded vs 55% of games won without a triple double Additionally, game margins suggests that when Westbrook creates a triple double, the team performance is not negatively impacted. When Westbrook's teams lose but he records a triple double are smaller than game margins when a triple double is not recorded in a loss. Furthermore, Russ' teams win at a similar margin when he does and does not record a triple double. Together, the game margin evidence suggests that Russ is not pursuing triple doubles in blowouts and not stat padding as suggested in this video Westbrook recently became a triple double machine, amassing 156 of his 193 triple doubles in the last 6+ seasons alone alone Westbrook's increased pursuit of the triple double occurs after the Oklahoma City Thunder lost Kevin Durant during Free Agency in the summer of 2016 For reference, LeBron James (4<sup>th</sup> all time in triple doubles) has recorded 103 total triple doubles in a 19-year NBA career Westbrook is 12<sup>th</sup> all time in assists while LeBron is 7<sup>th</sup> (source: ESPN) Footnotes <sup id="fn1">1</sup>steals and blocks were added to the box score almost a decade after points, rebounds, and assists. This blazersedge.com article has excellent documentation of the NBA box score's evolution End of explanation
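Likewise, the scraping helpers used throughout this row (get_player_metadata, extract_player_game_logs) come from the notebook's utils module and are never shown. Purely as an illustration of the core idea — with guessed cleanup details, not the author's implementation — a Basketball-Reference game-log page can be pulled into a DataFrame along these lines:

import pandas as pd

def extract_player_game_logs_sketch(game_log_url):
    # Rough stand-in for the notebook's extract_player_game_logs helper.
    # A game-log page carries one large HTML table of per-game rows that
    # pandas can parse directly; the cleanup below is guesswork.
    tables = pd.read_html(game_log_url)       # requires lxml or html5lib
    games = max(tables, key=len)               # the game log is the largest table
    first_col = games.columns[0]
    games = games[games[first_col] != first_col]   # drop repeated header rows
    return games.reset_index(drop=True)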
6,407
Given the following text description, write Python code to implement the functionality described below step by step Description: <a href='http Step1: The Data There are some fake data csv files you can read in as dataframes Step2: Style Sheets Matplotlib has style sheets you can use to make your plots look a little nicer. These style sheets include plot_bmh,plot_fivethirtyeight,plot_ggplot and more. They basically create a set of style rules that your plots follow. I recommend using them, they make all your plots have the same look and feel more professional. You can even create your own if you want your company's plots to all have the same look (it is a bit tedious to create on though). Here is how to use them. Before plt.style.use() your plots look like this Step3: Call the style Step4: Now your plots look like this Step5: Let's stick with the ggplot style and actually show you how to utilize pandas built-in plotting capabilities! Plot Types There are several plot types built-in to pandas, most of them statistical plots by nature Step6: Barplots Step7: Histograms Step8: Line Plots Step9: Scatter Plots Step10: You can use c to color based off another column value Use cmap to indicate colormap to use. For all the colormaps, check out Step11: Or use s to indicate size based off another column. s parameter needs to be an array, not just the name of a column Step12: BoxPlots Step13: Hexagonal Bin Plot Useful for Bivariate Data, alternative to scatterplot Step14: Kernel Density Estimation plot (KDE)
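One aside on the style-sheet step above, which mentions creating your own style: a minimal sketch of a custom "house style" set through rcParams is shown below (the keys and values are arbitrary examples, not a recommended configuration).

import matplotlib as mpl
import matplotlib.pyplot as plt

# A tiny custom "house style": any rcParams key can be overridden this way.
house_style = {
    'figure.figsize': (10, 6),
    'axes.grid': True,
    'axes.titlesize': 14,
    'lines.linewidth': 2,
}
mpl.rcParams.update(house_style)

# The same keys could instead live in a .mplstyle file loaded by path:
# plt.style.use('/path/to/company.mplstyle')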
Python Code: import numpy as np import pandas as pd %matplotlib inline Explanation: <a href='http://www.pieriandata.com'> <img src='../../Pierian_Data_Logo.png' /></a> Pandas Built-in Data Visualization In this lecture we will learn about pandas built-in capabilities for data visualization! It's built-off of matplotlib, but it baked into pandas for easier usage! Let's take a look! Imports End of explanation df1 = pd.read_csv('df1',index_col=0) df2 = pd.read_csv('df2') Explanation: The Data There are some fake data csv files you can read in as dataframes: End of explanation df1['A'].hist() Explanation: Style Sheets Matplotlib has style sheets you can use to make your plots look a little nicer. These style sheets include plot_bmh,plot_fivethirtyeight,plot_ggplot and more. They basically create a set of style rules that your plots follow. I recommend using them, they make all your plots have the same look and feel more professional. You can even create your own if you want your company's plots to all have the same look (it is a bit tedious to create on though). Here is how to use them. Before plt.style.use() your plots look like this: End of explanation import matplotlib.pyplot as plt plt.style.use('ggplot') Explanation: Call the style: End of explanation df1['A'].hist() plt.style.use('bmh') df1['A'].hist() plt.style.use('dark_background') df1['A'].hist() plt.style.use('fivethirtyeight') df1['A'].hist() plt.style.use('ggplot') Explanation: Now your plots look like this: End of explanation df2.plot.area(alpha=0.4) Explanation: Let's stick with the ggplot style and actually show you how to utilize pandas built-in plotting capabilities! Plot Types There are several plot types built-in to pandas, most of them statistical plots by nature: df.plot.area df.plot.barh df.plot.density df.plot.hist df.plot.line df.plot.scatter df.plot.bar df.plot.box df.plot.hexbin df.plot.kde df.plot.pie You can also just call df.plot(kind='hist') or replace that kind argument with any of the key terms shown in the list above (e.g. 'box','barh', etc..) Let's start going through them! Area End of explanation df2.head() df2.plot.bar() df2.plot.bar(stacked=True) Explanation: Barplots End of explanation df1['A'].plot.hist(bins=50) Explanation: Histograms End of explanation df1.plot.line(x=df1.index,y='B',figsize=(12,3),lw=1) Explanation: Line Plots End of explanation df1.plot.scatter(x='A',y='B') Explanation: Scatter Plots End of explanation df1.plot.scatter(x='A',y='B',c='C',cmap='coolwarm') Explanation: You can use c to color based off another column value Use cmap to indicate colormap to use. For all the colormaps, check out: http://matplotlib.org/users/colormaps.html End of explanation df1.plot.scatter(x='A',y='B',s=df1['C']*200) Explanation: Or use s to indicate size based off another column. s parameter needs to be an array, not just the name of a column: End of explanation df2.plot.box() # Can also pass a by= argument for groupby Explanation: BoxPlots End of explanation df = pd.DataFrame(np.random.randn(1000, 2), columns=['a', 'b']) df.plot.hexbin(x='a',y='b',gridsize=25,cmap='Oranges') Explanation: Hexagonal Bin Plot Useful for Bivariate Data, alternative to scatterplot: End of explanation df2['a'].plot.kde() df2.plot.density() Explanation: Kernel Density Estimation plot (KDE) End of explanation
6,408
Given the following text description, write Python code to implement the functionality described below step by step Description: Cranfield dataset processing This notebook creates vector space model for documents and queries contained in Cranfield collection. Step1: Helper functions. get_top_n() simply returns indices of top ten relevant documents for each query. get_precision() returns precision, recall and f-score for a query (represented by query index and top ten documents retrieved). Step2: Here we prepare corpus of documents and queries for processing. Note that corpus[ Step3: Initialization of different vectorizers we are going to use to create vector space model. * TFIDF vectorizer -- calculates TFIDF score for each document or query * Count vectorizer -- counts term in each document or query * Binary vectorizer -- 1 if term is present in document/query, 0 otherwise Step4: Matrices with dimensions (1625 -- number of documents and queries, 20679 -- total number of terms) for each vectorizer. Matrix rows are vector of given vector space models (each row represent document or query). Step5: Calculate similarity between queries and documents using given vector space model (TFIDF, count, binary) and distance measure (cosine similiarity, euclidean distance). Each matrix has dimensions (225, 1400), each element represents similarity betweent one query and one document. Step6: Get indices of 10 most relevant documents for each query using given vector space model and distance measure. Step7: Now we can use top 10 matching documents for each query to calculate methods precision, recall and F-score. These values are calculated by comparing query's top 10 documents to the list of actually relevant documents. These lists (one for each query) come with Cranfield collection and can be found in cranfield/r folder. Step8: When we have these values, we can compare different distances and vector models. As you can see, precision can vary a lot, depending on the query, different vector space models however yield similar results. Step9: On the other hand, when usin euclidean distance instead of cosine similarity, results are much worse (barely any query yields desired results). Step10: Precision, recall and fscores for each method are shown in following "table", in format (mean, maximum). We can see that TFIDF vector space model performed best, which was expected. Euclidean distance method yielded much worse results for every vector space model.
Python Code: from sklearn.metrics.pairwise import cosine_similarity, pairwise_distances from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer import matplotlib.pyplot as plt import matplotlib.style import numpy as np matplotlib.style.use('ggplot') %matplotlib inline matplotlib.rcParams['figure.figsize'] = (20.0, 10.0) matplotlib.rcParams.update({'font.size': 18}) Explanation: Cranfield dataset processing This notebook creates vector space model for documents and queries contained in Cranfield collection. End of explanation def get_top_n(matrix, n=10): return np.array([ matrix[i].argsort()[-n:][::-1]+1 for i in range(225)]) def get_precision(query_index, retrieved): relevant = [] with open('cranfield/r/{}.txt'.format(query_index)) as f: for line in f: relevant.append(int(line)) tp = 0 fn = 0 fp = 0 for doc in relevant: if doc in retrieved: tp += 1 else: fn += 1 for doc in retrieved: if doc not in relevant: fp += 1 p = tp / (tp + fp) r = tp / (tp + fn) if p == 0 or r == 0: f = 0 else: f = 2 * ((p * r)/(p + r)) return p, r, f def get_prf_dict(top_retrieved): prf = { 'p' : np.array([0.] * len(top_retrieved)), 'r' : np.array([0.] * len(top_retrieved)), 'f' : np.array([0.] * len(top_retrieved)), } for idx, top in enumerate(top_retrieved): prf['p'][idx] = get_precision(idx + 1, top)[0] prf['r'][idx] = get_precision(idx + 1, top)[1] prf['f'][idx] = get_precision(idx + 1, top)[2] return prf Explanation: Helper functions. get_top_n() simply returns indices of top ten relevant documents for each query. get_precision() returns precision, recall and f-score for a query (represented by query index and top ten documents retrieved). End of explanation corpus = [] for d in range(1400): f = open("cranfield/d/"+str(d+1)+".txt") corpus.append(f.read()) f.close() for q in range(225): f = open("cranfield/q/"+str(q+1)+".txt") corpus.append(f.read()) f.close() Explanation: Here we prepare corpus of documents and queries for processing. Note that corpus[:1400] contains documents and corpus[1400:] contains queries. End of explanation tfidf_vectorizer = TfidfVectorizer() count_vectorizer = CountVectorizer() binary_vectorizer = CountVectorizer(binary=True) Explanation: Initialization of different vectorizers we are going to use to create vector space model. * TFIDF vectorizer -- calculates TFIDF score for each document or query * Count vectorizer -- counts term in each document or query * Binary vectorizer -- 1 if term is present in document/query, 0 otherwise End of explanation tfidf_matrix = tfidf_vectorizer.fit_transform(corpus) count_matrix = count_vectorizer.fit_transform(corpus) bin_matrix = binary_vectorizer.fit_transform(corpus) Explanation: Matrices with dimensions (1625 -- number of documents and queries, 20679 -- total number of terms) for each vectorizer. Matrix rows are vector of given vector space models (each row represent document or query). 
End of explanation r_tfidf_cos = np.array(cosine_similarity(tfidf_matrix[1400:], tfidf_matrix[:1400])) r_tfidf_euc = np.array(pairwise_distances(tfidf_matrix[1400:], tfidf_matrix[:1400])) r_count_cos = np.array(cosine_similarity(count_matrix[1400:], count_matrix[:1400])) r_count_euc = np.array(pairwise_distances(count_matrix[1400:], count_matrix[:1400])) r_bin_cos = np.array(cosine_similarity(bin_matrix[1400:], bin_matrix[:1400])) r_bin_euc = np.array(pairwise_distances(bin_matrix[1400:], bin_matrix[:1400])) Explanation: Calculate similarity between queries and documents using given vector space model (TFIDF, count, binary) and distance measure (cosine similiarity, euclidean distance). Each matrix has dimensions (225, 1400), each element represents similarity betweent one query and one document. End of explanation top_relevant_tfidf_cos = get_top_n(r_tfidf_cos) top_relevant_tfidf_euc = get_top_n(r_tfidf_euc) top_relevant_count_cos = get_top_n(r_count_cos) top_relevant_count_euc = get_top_n(r_count_euc) top_relevant_bin_cos = get_top_n(r_bin_cos) top_relevant_bin_euc = get_top_n(r_bin_euc) Explanation: Get indices of 10 most relevant documents for each query using given vector space model and distance measure. End of explanation tfidf_cos_prf = get_prf_dict(top_relevant_tfidf_cos) tfidf_euc_prf = get_prf_dict(top_relevant_tfidf_euc) count_cos_prf = get_prf_dict(top_relevant_count_cos) count_euc_prf = get_prf_dict(top_relevant_count_euc) bin_cos_prf = get_prf_dict(top_relevant_bin_cos) bin_euc_prf = get_prf_dict(top_relevant_bin_euc) Explanation: Now we can use top 10 matching documents for each query to calculate methods precision, recall and F-score. These values are calculated by comparing query's top 10 documents to the list of actually relevant documents. These lists (one for each query) come with Cranfield collection and can be found in cranfield/r folder. End of explanation r = np.arange(225) plt.bar(r, tfidf_cos_prf['p'], color='r', label='TFIDF') plt.bar(r, count_cos_prf['p'], color='g', label='Count') plt.bar(r, bin_cos_prf['p'], color='b', label='Binary') plt.legend() plt.title('Precision of each query for each vector space model\n (using cosine similarity)') plt.xlabel('Query index') plt.ylabel('Precision') plt.show() Explanation: When we have these values, we can compare different distances and vector models. As you can see, precision can vary a lot, depending on the query, different vector space models however yield similar results. End of explanation plt.bar(r,tfidf_cos_prf['p'], label='Cosine', color='b') plt.bar(r,tfidf_euc_prf['p'], label='Euclidean', color='r') plt.legend() plt.title('Precision of each query for TFIDF\n (using cosine and euclidean similarity)') plt.xlabel('Query index') plt.ylabel('Precision') plt.show() Explanation: On the other hand, when usin euclidean distance instead of cosine similarity, results are much worse (barely any query yields desired results). End of explanation eval_scores = {} for method in [('TFIDF', tfidf_cos_prf, tfidf_euc_prf), ('Count', count_cos_prf, count_euc_prf), ('Binary', bin_cos_prf, bin_euc_prf)]: eval_scores[method[0]] = {} for metric in ['p', 'r', 'f']: eval_scores[method[0]][metric] = {} eval_scores[method[0]][metric]['cos'] = (method[1][metric].mean(), method[1][metric].max()) eval_scores[method[0]][metric]['euc'] = (method[2][metric].mean(), method[2][metric].max()) import pprint pprint.pprint(eval_scores) Explanation: Precision, recall and fscores for each method are shown in following "table", in format (mean, maximum). 
We can see that TFIDF vector space model performed best, which was expected. Euclidean distance method yielded much worse results for every vector space model. End of explanation
6,409
Given the following text description, write Python code to implement the functionality described below step by step Description: GBM Geometry Demo J. Michael Burgess gbmeometry is a module with routines for handling GBM geometry. It performs a few tasks Step1: Making an interpolation from TRIGDAT Getting the data We can use GetGBMData to download the data Step2: By default, GetGBMData will grad data from all detectors. However, one can set a subset to retrieve different data types Step3: Interpolating First let's create an interpolating object for a given TRIGDAT file (POSHIST files are also readable) Step4: Single detector One can look at a single detector which knows about it's orientation in the Fermi SC coordinates Step5: We can also go back into the GBMFrame Step6: Earth Centered Coordinates The sc_pos are Earth centered coordinates (in km for trigdat and m for poshist) and can also be passed. It is a good idea to specify the units! Step7: Working with the GBM class Ideally, we want to know about many detectors. The GBM class performs operations on all detectors for ease of use. It also has plotting capabilities Step8: Plotting We can look at the NaI view on the sky for a given FOV Step9: In Fermi GBM coodinates We can also plot in Fermi GBM spacecraft coordinates Step10: Capturing points on the sky We can even see which detector's FOVs contain a point on the sky. We create a mock GRB SKycoord first. Step11: Looking at Earth occulted points on the sky We can plot the points occulted by the Earth (assuming points 68.5 degrees form the Earth anti-zenith are hidden) Step12: Source/Detector Separation We can even look at the separation angles for the detectors and the source Step13: Examining Legal Detector Pairs To see which detectors are valid, can look at the legal pairs map
Python Code: %pylab inline from astropy.coordinates import SkyCoord import astropy.coordinates as coord import astropy.units as u from gbmgeometry import * Explanation: GBM Geometry Demo J. Michael Burgess gbmeometry is a module with routines for handling GBM geometry. It performs a few tasks: * creates and astropy coordinate frame for Fermi GBM given a quarternion * allows for coordinate transforms from Fermi frame to in astropy frame (J2000, etc.) * plots the GBM NaI detectors at a given time for a given FOV * determines if a astropy SkyCoord location is within an NaI's FOV * creates interpolations over GBM quarternions and SC coordinates End of explanation data = GetGBMData("080916009") data.set_destination("") # You can enter a folder here. If you want the CWD, you do not have to set data.get_trigdat() Explanation: Making an interpolation from TRIGDAT Getting the data We can use GetGBMData to download the data End of explanation data.select_detectors('n1','n2','b0') data.get_rsp_cspec() Explanation: By default, GetGBMData will grad data from all detectors. However, one can set a subset to retrieve different data types End of explanation interp = PositionInterpolator(trigdat="glg_trigdat_all_bn080916009_v02.fit") # In trigger times print "Quaternions" print interp.quaternion(0) print interp.quaternion(10) print print "SC XYZ" print interp.sc_pos(0) print interp.sc_pos(10) Explanation: Interpolating First let's create an interpolating object for a given TRIGDAT file (POSHIST files are also readable) End of explanation na = NaIA(interp.quaternion(0)) print na.get_center() print na.get_center().icrs #J2000 print na.get_center().galactic # Galactic print print "Changing in time" na.set_quaternion(interp.quaternion(100)) print na.get_center() print na.get_center().icrs #J2000 print na.get_center().galactic # Galactic Explanation: Single detector One can look at a single detector which knows about it's orientation in the Fermi SC coordinates End of explanation center_j2000 = na.get_center().icrs center_j2000 center_j2000.transform_to(GBMFrame(quaternion=interp.quaternion(100.))) Explanation: We can also go back into the GBMFrame End of explanation na = NaIA(interp.quaternion(0),interp.sc_pos(0)*u.km) na.get_center() Explanation: Earth Centered Coordinates The sc_pos are Earth centered coordinates (in km for trigdat and m for poshist) and can also be passed. It is a good idea to specify the units! End of explanation myGBM = GBM(interp.quaternion(0),sc_pos=interp.sc_pos(0)*u.km) myGBM.get_centers() [x.icrs for x in myGBM.get_centers()] Explanation: Working with the GBM class Ideally, we want to know about many detectors. The GBM class performs operations on all detectors for ease of use. 
It also has plotting capabilities End of explanation myGBM.detector_plot(radius=60) myGBM.detector_plot(radius=10,projection='ortho',lon_0=40) myGBM.detector_plot(radius=10,projection='ortho',lon_0=0,lat_0=40,fignum=2) Explanation: Plotting We can look at the NaI view on the sky for a given FOV End of explanation myGBM.detector_plot(radius=60,fermi_frame=True) myGBM.detector_plot(radius=10,projection='ortho',lon_0=20,fermi_frame=True) myGBM.detector_plot(radius=10,projection='ortho',lon_0=200,fermi_frame=True,fignum=3) myGBM.detector_plot(radius=10,projection='ortho',lon_0=0,lat_0=40,fignum=2,fermi_frame=True) Explanation: In Fermi GBM coodinates We can also plot in Fermi GBM spacecraft coordinates End of explanation grb = SkyCoord(ra=130.,dec=-45 ,frame='icrs', unit='deg') myGBM.detector_plot(radius=60, projection='moll', good=True, # only plot NaIs that see the GRB point=grb, lon_0=110,lat_0=-0) myGBM.detector_plot(radius=60, projection='ortho', good=True, # only plot NaIs that see the GRB point=grb, lon_0=180,lat_0=-40,fignum=2) Explanation: Capturing points on the sky We can even see which detector's FOVs contain a point on the sky. We create a mock GRB SKycoord first. End of explanation myGBM.detector_plot(radius=10,show_earth=True,lon_0=90) myGBM.detector_plot(radius=10,lon_0=100,show_earth=True,projection='ortho') myGBM.detector_plot(radius=10,show_earth=True,lon_0=120,lat_0=-30,fermi_frame=True,projection='ortho') Explanation: Looking at Earth occulted points on the sky We can plot the points occulted by the Earth (assuming points 68.5 degrees form the Earth anti-zenith are hidden) End of explanation seps = myGBM.get_separation(grb) seps.sort("Separation") seps Explanation: Source/Detector Separation We can even look at the separation angles for the detectors and the source End of explanation get_legal_pairs() Explanation: Examining Legal Detector Pairs To see which detectors are valid, can look at the legal pairs map End of explanation
6,410
Given the following text description, write Python code to implement the functionality described below step by step Description: Circuito RLC paralelo sem fonte Jupyter Notebook desenvolvido por Gustavo S.S. Circuitos RLC em paralelo têm diversas aplicações, como em projetos de filtros e redes de comunicação. Suponha que a corrente inicial I0 no indutor e a tensão inicial V0 no capacitor sejam Step1: Problema Prático 8.5 Na Figura 8.13, seja R = 2 Ω, L = 0,4 H, C = 25 mF, v(0) = 0, e i(0) = 50 mA. Determine v(t) para t > 0. Step2: Exemplo 8.6 Determine v(t) para t > 0 no circuito RLC da Figura 8.15.
Python Code: print("Exemplo 8.5") from sympy import * m = 10**(-3) #definicao de mili L = 1 C = 10*m v0 = 5 i0 = 0 A1 = symbols('A1') A2 = symbols('A2') t = symbols('t') def sqrt(x, root = 2): #definir funcao para raiz y = x**(1/root) return y print("\n--------------\n") ## PARA R = 1.923 R = 1.923 print("Para R = ", R) def resolve_rlc(R,L,C): alpha = 1/(2*R*C) omega = 1/sqrt(L*C) print("Alpha:",alpha) print("Omega:",omega) s1 = -alpha + sqrt(alpha**2 - omega**2) s2 = -alpha - sqrt(alpha**2 - omega**2) def rlc(alpha,omega): #funcao para verificar tipo de amortecimento resposta = "" if alpha > omega: resposta = "superamortecimento" v = A1*exp(s1*t) + A2*exp(s2*t) elif alpha == omega: resposta = "amortecimento critico" v = (A1 + A2*t)*exp(-alpha*t) else: resposta = "subamortecimento" v = exp(-alpha*t)*(A1*cos(omega_d*t) + A2*sin(omega_d*t)) return resposta,v resposta,v = rlc(alpha,omega) print("Tipo de resposta:",resposta) print("Resposta v(t):",v) print("v(0):",v.subs(t,0)) print("dv(0)/dt:",v.diff(t).subs(t,0)) return alpha,omega,s1,s2,resposta,v alpha,omega,s1,s2,resposta,v = resolve_rlc(R,L,C) #v(0) = 5 = A1 + A2 -> A2 = 5 - A1 #dv(0)/dt = -2A1 - 50A2 #C*dv(0)/dt + i(0) + v(0)/R = 0 #0.01*(-2A1 - 50A2) + 0 + 5/1.923 = 0 #(-2A1 -50(5 - A1)) = -5/(1.923*0.01) #48A1 = 250 - 5/(1.923*0.01) A1 = (250 - 5/(1.923*0.01))/48 print("Constante A1:",A1) A2 = 5 - A1 print("Constante A2:",A2) v = A1*exp(s1*t) + A2*exp(s2*t) print("Resposta v(t):",v,"V") print("\n--------------\n") ## PARA R = 5 R = 5 A1 = symbols('A1') A2 = symbols('A2') print("Para R = ", R) alpha,omega,s1,s2,resposta,v = resolve_rlc(R,L,C) #v(t) = (A1 + A2t)e^(-alpha*t) #v(0) = A1 = 5 A1 = 5 #C*dv(0)/dt + i(0) + v(0)/R = 0 #0.01(-10A1 + A2) + 0 + 5/5 = 0 #0.01A2 = -1 + 0.5 A2 = (-1 + 0.5)/0.01 print("Constante A1:",A1) print("Constante A2:",A2) v = (A1 + A2*t)*exp(-alpha*t) print("Resposta v(t):",v,"V") print("\n--------------\n") ## PARA R = 6.25 R = 6.25 A1 = symbols('A1') A2 = symbols('A2') print("Para R = ", R) omega_d = sqrt(omega**2 - alpha**2) alpha,omega,s1,s2,resposta,v = resolve_rlc(R,L,C) #v(t) = e^-(alpha*t)*(A1cos(wd*t) + A2sen(wd*t)) #v(0) = A1 = 5 A1 = 5 #C*dv(0)/dt + i(0) + v(0)/R = 0 #0.01*(-8A1 + 6A2) + 0 + 5/6.25 = 0 #-0.4 + 0.06A2 = -5/6.25 A2 = (-5/6.25 + 0.4)/0.06 print("Constante A1:",A1) print("Constante A2:",A2) v = exp(-alpha*t)*(A1*cos(omega_d*t) + A2*sin(omega_d*t)) print("Resposta v(t):",v,"V") Explanation: Circuito RLC paralelo sem fonte Jupyter Notebook desenvolvido por Gustavo S.S. Circuitos RLC em paralelo têm diversas aplicações, como em projetos de filtros e redes de comunicação. 
Suppose the initial current I0 in the inductor and the initial voltage V0 across the capacitor are:
\begin{align}
{\Large i(0) = I_0 = \frac{1}{L} \int_{-\infty}^{0} v(t) dt}
\end{align}
\begin{align}
{\Large v(0) = V_0}
\end{align}
Applying KCL at the top node therefore gives:
\begin{align}
{\Large \frac{v}{R} + \frac{1}{L} \int_{-\infty}^{t} v(\tau) d\tau + C \frac{dv}{dt} = 0}
\end{align}
Taking the derivative with respect to t and dividing by C yields:
\begin{align}
{\Large \frac{d^2v}{dt^2} + \frac{1}{RC} \frac{dv}{dt} + \frac{1}{LC} v = 0}
\end{align}
We obtain the characteristic equation by replacing the first derivative with s and the second with s^2:
\begin{align}
{\Large s^2 + \frac{1}{RC} s + \frac{1}{LC} = 0}
\end{align}
The roots of the characteristic equation are therefore:
\begin{align}
{\Large s_{1,2} = -\alpha \pm \sqrt{\alpha^2 - \omega_0^2}}
\end{align}
where:
\begin{align}
{\Large \alpha = \frac{1}{2RC}, \quad \omega_0 = \frac{1}{\sqrt{LC}} }
\end{align}
Supercritical damping / Overdamping (α > ω0)
When α > ω0, the roots of the characteristic equation are real and negative. The response is:
\begin{align}
{\Large v(t) = A_1 e^{s_1 t} + A_2 e^{s_2 t} }
\end{align}
Critical damping (α = ω0)
When α = ω0, the roots of the characteristic equation are real and equal, so the response is:
\begin{align}
{\Large v(t) = (A_1 + A_2 t)e^{-\alpha t}}
\end{align}
Underdamping (α < ω0)
When α < ω0, the roots are complex and can be expressed as:
\begin{align}
{\Large s_{1,2} = -\alpha \pm j\omega_d} \\
{\Large \omega_d = \sqrt{\omega_0^2 - \alpha^2}}
\end{align}
\begin{align}
{\Large v(t) = e^{-\alpha t}(A_1 \cos(\omega_d t) + A_2 \sin(\omega_d t))}
\end{align}
The constants A1 and A2 in each case can be determined from the initial conditions. We need v(0) and dv(0)/dt.
Example 8.5
In the parallel circuit of Figure 8.13, determine v(t) for t > 0, assuming v(0) = 5 V, i(0) = 0, L = 1 H and C = 10 mF. Consider the following cases: R = 1.923 Ω, R = 5 Ω and R = 6.25 Ω.
End of explanation
print("Practice Problem 8.5")

R = 2
L = 0.4
C = 25*m
v0 = 0
i0 = 50*m

A1 = symbols('A1')
A2 = symbols('A2')

alpha,omega,s1,s2,resposta,v = resolve_rlc(R,L,C)

#C*dv(0)/dt + i(0) + v(0)/R = 0
#C*(-10A1 + A2) + i0 + v(0)/2 = 0
#v(0) = 0 = A1
#C*A2 = -i0
A2 = -i0/C
A1 = 0
print("Constant A1:",A1)
print("Constant A2:",A2)

v = (A1 + A2*t)*exp(-10.0*t)
print("Response v(t):",v,"V")

Explanation: Practice Problem 8.5
In Figure 8.13, let R = 2 Ω, L = 0.4 H, C = 25 mF, v(0) = 0, and i(0) = 50 mA. Determine v(t) for t > 0.
End of explanation
print("Example 8.6")

u = 10**(-6) # definition of micro
Vs = 40
L = 0.4
C = 20*u

A1 = symbols('A1')
A2 = symbols('A2')

#For t < 0
v0 = Vs*50/(50 + 30)
i0 = -Vs/(50 + 30)
print("V0:",v0,"V")
print("i0:",i0,"A")

#For t > 0
#C*dv(0)/dt + i(0) + v(0)/50 = 0
#20u*dv(0)/dt - 0.5 + 0.5 = 0
#dv(0)/dt = 0
R = 50
alpha,omega,s1,s2,resposta,v = resolve_rlc(R,L,C)

#v(0) = 25 = A1 + A2
#A1 = 25 - A2
#dv(0)/dt = -146A1 - 854A2 = 0
#-146(25 - A2) - 854A2 = 0
#146A2 - 854A2 = 3650
#-708A2 = 3650
A2 = -3650/708
A1 = 25 - A2
print("Constant A1:",A1)
print("Constant A2:",A2)

v = A1*exp(s1*t) + A2*exp(s2*t)
print("Response v(t):",v,"V")

Explanation: Example 8.6
Determine v(t) for t > 0 in the RLC circuit of Figure 8.15.
End of explanation
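As a quick numerical sanity check of the classification used throughout this section (not part of the original notebook), the same α = 1/(2RC) and ω0 = 1/√(LC) formulas can be evaluated directly for the three resistances of Example 8.5:

```python
# Cross-check of the damping regime for R = 1.923, 5 and 6.25 ohm with
# L = 1 H and C = 10 mF, using only the formulas quoted above.
import math

L, C = 1.0, 10e-3
for R in (1.923, 5.0, 6.25):
    alpha = 1 / (2 * R * C)
    omega0 = 1 / math.sqrt(L * C)
    if math.isclose(alpha, omega0):
        regime = "critically damped"
    elif alpha > omega0:
        regime = "overdamped"
    else:
        regime = "underdamped"
    print(f"R = {R:5.3f} ohm: alpha = {alpha:6.2f}, omega0 = {omega0:5.2f} -> {regime}")
```

This reproduces the overdamped / critically damped / underdamped split used for the three cases above.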
6,411
Given the following text description, write Python code to implement the functionality described below step by step
Description: Today Melanie led the meeting with a session on the ArcGIS software and how we can use Python to automatise the geospatial data processing. The slides are available below. We started with a brief introduction to the types of data and analysis you can do in ArcGIS. Then Melanie demonstrated how to produce a 3D terrain model using the ArcScene toolbox.
Presentation
Step1: We all agreed that ArcGIS has a lot to offer to geoscientists. But what makes this software even more appealing is that you can work in a command-line interface using Python (ArcPy module). So we looked at how to run processes using the Python window command-by-command and how you might integrate ArcGIS processes within a longer script. This was exemplified by Melanie's script that she used to analyse vegetation regrowth after a volcanic eruption. The script takes two vegetation photos in GeoTIFF format retrieved by Landsat as input and calculates the Normalised Difference Vegetation Index (NDVI) for each of them. We can then compare the output to see how vegetation has changed over the time period. <div class="alert alert-warning", style="font-size
Python Code: # embed pdf into an automatically resized window (requires imagemagick) w_h_str = !identify -format "%w %h" ../pdfs/arcgis-intro.pdf[0] HTML('<iframe src=../pdfs/arcgis-intro.pdf width={0[0]} height={0[1]}></iframe>'.format([int(i)*0.8 for i in w_h_str[0].split()])) Explanation: Today Melanie lead the meeting with a session on the ArcGIS software and how we can use Python to automatise the geospatial data processing. The slides are available below. We started with a brief introduction to the types of data and analysis you can do in ArcGIS. Then Melanie demonstrated how to produce a 3D terrain model using the ArcScene toolbox. Presentation End of explanation HTML(html) Explanation: We all agreed that ArcGIS has a lot to offer to geoscientists. But what makes this software even more appealing is that you can work in a command-line interface using Python (ArcPy module). So we looked at how to run processes using the Python window command-by-command and how you might integrate ArcGIS processes within a longer script. This was exemplified by Melanie's script that she used to analyse vegetation regrowth after a volcanic eruption. The script takes two vegetation photos in GeoTIFF format retrieved by Landsat as input and calculates the Normalised Difference Vegetation Index (NDVI) for each of them. We can then compare the output to see how vegetation has changed over the time period. <div class="alert alert-warning", style="font-size: 100%"> <li>Standard ArcGIS uses Python 2.7 (Python 3 is available in ArcGIS Pro) <li>The commands below require ArcGIS installed, and hence are not in executable cells in this notebook. </div> ArcPy script example: NDVI of the two geotiff images Import modules import arcpy, string, arcpy.sa from arcpy import env Check out extension and set overwrite outputs arcpy.CheckOutExtension("spatial") arcpy.env.overwriteOutput = True Stop outputs being added to the map arcpy.env.addOutputsToMap = "FALSE" Set workspace and declare variations env.workspace = ("/path/to/demo/demo1") print(arcpy.env.workspace) Load the data rasterb3 = arcpy.Raster("p046r28_5t900922_nn3.tif") rasterb4 = arcpy.Raster("p046r28_5t900922_nn4.tif") Describe variables desc = arcpy.Describe(rasterb4) print(desc.dataType) print(desc.meanCellHeight) Calculate the NDVI Num = arcpy.sa.Float(rasterb4-rasterb3) Denom = arcpy.sa.Float(rasterb4 + rasterb3) NDVI1990 = arcpy.sa.Divide(Num, Denom) Save the result as another .tif image NDVI1990.save("/path/to/demo/demo1/NDVI1990.tif") Do the same calculation for the images from a later year ``` rasterb3a = arcpy.Raster("L71046028_02820050721_B30.TIF") rasterb4a = arcpy.Raster("L71046028_02820050721_B40.TIF") Num = arcpy.sa.Float(rasterb4a-rasterb3a) Denom = arcpy.sa.Float(rasterb4a + rasterb3a) NDVI2005 = arcpy.sa.Divide(Num, Denom) ``` And after saving the second result, calculate the NDVI difference ``` NDVI2005.save("/path/to/demo/demo1/NDVI2005.tif") NDVIdiff = NDVI2005 - NDVI1990 NDVIdiff.save("/path/to/demo/demo1/NDVIdiff.tif") ``` The result is shown in the slide 5. End of explanation
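For readers without an ArcGIS licence, the same NDVI difference can be sketched with open-source tools. The snippet below is an assumption-laden illustration, not part of Melanie's original workflow: it presumes rasterio and numpy are available and that the two scenes are already co-registered on the same grid (something the arcpy environment handles for you).

```python
# Open-source sketch of the NDVI difference (NDVI = (NIR - Red) / (NIR + Red)).
import numpy as np
import rasterio

def ndvi(red_path, nir_path):
    with rasterio.open(red_path) as red_src, rasterio.open(nir_path) as nir_src:
        red = red_src.read(1).astype("float32")
        nir = nir_src.read(1).astype("float32")
        profile = red_src.profile
    with np.errstate(divide="ignore", invalid="ignore"):
        index = np.where((nir + red) == 0, 0.0, (nir - red) / (nir + red))
    return index, profile

ndvi_1990, profile = ndvi("p046r28_5t900922_nn3.tif", "p046r28_5t900922_nn4.tif")
ndvi_2005, _ = ndvi("L71046028_02820050721_B30.TIF", "L71046028_02820050721_B40.TIF")

profile.update(dtype="float32", count=1)
with rasterio.open("NDVIdiff.tif", "w", **profile) as dst:
    dst.write(ndvi_2005 - ndvi_1990, 1)   # assumes identical shapes/grids
```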
6,412
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Ocean MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Seawater Properties 3. Key Properties --&gt; Bathymetry 4. Key Properties --&gt; Nonoceanic Waters 5. Key Properties --&gt; Software Properties 6. Key Properties --&gt; Resolution 7. Key Properties --&gt; Tuning Applied 8. Key Properties --&gt; Conservation 9. Grid 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Discretisation --&gt; Horizontal 12. Timestepping Framework 13. Timestepping Framework --&gt; Tracers 14. Timestepping Framework --&gt; Baroclinic Dynamics 15. Timestepping Framework --&gt; Barotropic 16. Timestepping Framework --&gt; Vertical Physics 17. Advection 18. Advection --&gt; Momentum 19. Advection --&gt; Lateral Tracers 20. Advection --&gt; Vertical Tracers 21. Lateral Physics 22. Lateral Physics --&gt; Momentum --&gt; Operator 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff 24. Lateral Physics --&gt; Tracers 25. Lateral Physics --&gt; Tracers --&gt; Operator 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity 28. Vertical Physics 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum 32. Vertical Physics --&gt; Interior Mixing --&gt; Details 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum 35. Uplow Boundaries --&gt; Free Surface 36. Uplow Boundaries --&gt; Bottom Boundary Layer 37. Boundary Forcing 38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing 1. Key Properties Ocean key properties 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Model Family Is Required Step7: 1.4. Basic Approximations Is Required Step8: 1.5. Prognostic Variables Is Required Step9: 2. Key Properties --&gt; Seawater Properties Physical properties of seawater in ocean 2.1. Eos Type Is Required Step10: 2.2. Eos Functional Temp Is Required Step11: 2.3. Eos Functional Salt Is Required Step12: 2.4. Eos Functional Depth Is Required Step13: 2.5. Ocean Freezing Point Is Required Step14: 2.6. Ocean Specific Heat Is Required Step15: 2.7. Ocean Reference Density Is Required Step16: 3. Key Properties --&gt; Bathymetry Properties of bathymetry in ocean 3.1. Reference Dates Is Required Step17: 3.2. Type Is Required Step18: 3.3. Ocean Smoothing Is Required Step19: 3.4. Source Is Required Step20: 4. Key Properties --&gt; Nonoceanic Waters Non oceanic waters treatement in ocean 4.1. Isolated Seas Is Required Step21: 4.2. River Mouth Is Required Step22: 5. Key Properties --&gt; Software Properties Software properties of ocean code 5.1. Repository Is Required Step23: 5.2. Code Version Is Required Step24: 5.3. Code Languages Is Required Step25: 6. Key Properties --&gt; Resolution Resolution in the ocean grid 6.1. Name Is Required Step26: 6.2. 
Canonical Horizontal Resolution Is Required Step27: 6.3. Range Horizontal Resolution Is Required Step28: 6.4. Number Of Horizontal Gridpoints Is Required Step29: 6.5. Number Of Vertical Levels Is Required Step30: 6.6. Is Adaptive Grid Is Required Step31: 6.7. Thickness Level 1 Is Required Step32: 7. Key Properties --&gt; Tuning Applied Tuning methodology for ocean component 7.1. Description Is Required Step33: 7.2. Global Mean Metrics Used Is Required Step34: 7.3. Regional Metrics Used Is Required Step35: 7.4. Trend Metrics Used Is Required Step36: 8. Key Properties --&gt; Conservation Conservation in the ocean component 8.1. Description Is Required Step37: 8.2. Scheme Is Required Step38: 8.3. Consistency Properties Is Required Step39: 8.4. Corrected Conserved Prognostic Variables Is Required Step40: 8.5. Was Flux Correction Used Is Required Step41: 9. Grid Ocean grid 9.1. Overview Is Required Step42: 10. Grid --&gt; Discretisation --&gt; Vertical Properties of vertical discretisation in ocean 10.1. Coordinates Is Required Step43: 10.2. Partial Steps Is Required Step44: 11. Grid --&gt; Discretisation --&gt; Horizontal Type of horizontal discretisation scheme in ocean 11.1. Type Is Required Step45: 11.2. Staggering Is Required Step46: 11.3. Scheme Is Required Step47: 12. Timestepping Framework Ocean Timestepping Framework 12.1. Overview Is Required Step48: 12.2. Diurnal Cycle Is Required Step49: 13. Timestepping Framework --&gt; Tracers Properties of tracers time stepping in ocean 13.1. Scheme Is Required Step50: 13.2. Time Step Is Required Step51: 14. Timestepping Framework --&gt; Baroclinic Dynamics Baroclinic dynamics in ocean 14.1. Type Is Required Step52: 14.2. Scheme Is Required Step53: 14.3. Time Step Is Required Step54: 15. Timestepping Framework --&gt; Barotropic Barotropic time stepping in ocean 15.1. Splitting Is Required Step55: 15.2. Time Step Is Required Step56: 16. Timestepping Framework --&gt; Vertical Physics Vertical physics time stepping in ocean 16.1. Method Is Required Step57: 17. Advection Ocean advection 17.1. Overview Is Required Step58: 18. Advection --&gt; Momentum Properties of lateral momemtum advection scheme in ocean 18.1. Type Is Required Step59: 18.2. Scheme Name Is Required Step60: 18.3. ALE Is Required Step61: 19. Advection --&gt; Lateral Tracers Properties of lateral tracer advection scheme in ocean 19.1. Order Is Required Step62: 19.2. Flux Limiter Is Required Step63: 19.3. Effective Order Is Required Step64: 19.4. Name Is Required Step65: 19.5. Passive Tracers Is Required Step66: 19.6. Passive Tracers Advection Is Required Step67: 20. Advection --&gt; Vertical Tracers Properties of vertical tracer advection scheme in ocean 20.1. Name Is Required Step68: 20.2. Flux Limiter Is Required Step69: 21. Lateral Physics Ocean lateral physics 21.1. Overview Is Required Step70: 21.2. Scheme Is Required Step71: 22. Lateral Physics --&gt; Momentum --&gt; Operator Properties of lateral physics operator for momentum in ocean 22.1. Direction Is Required Step72: 22.2. Order Is Required Step73: 22.3. Discretisation Is Required Step74: 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean 23.1. Type Is Required Step75: 23.2. Constant Coefficient Is Required Step76: 23.3. Variable Coefficient Is Required Step77: 23.4. Coeff Background Is Required Step78: 23.5. Coeff Backscatter Is Required Step79: 24. 
Lateral Physics --&gt; Tracers Properties of lateral physics for tracers in ocean 24.1. Mesoscale Closure Is Required Step80: 24.2. Submesoscale Mixing Is Required Step81: 25. Lateral Physics --&gt; Tracers --&gt; Operator Properties of lateral physics operator for tracers in ocean 25.1. Direction Is Required Step82: 25.2. Order Is Required Step83: 25.3. Discretisation Is Required Step84: 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean 26.1. Type Is Required Step85: 26.2. Constant Coefficient Is Required Step86: 26.3. Variable Coefficient Is Required Step87: 26.4. Coeff Background Is Required Step88: 26.5. Coeff Backscatter Is Required Step89: 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean 27.1. Type Is Required Step90: 27.2. Constant Val Is Required Step91: 27.3. Flux Type Is Required Step92: 27.4. Added Diffusivity Is Required Step93: 28. Vertical Physics Ocean Vertical Physics 28.1. Overview Is Required Step94: 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details Properties of vertical physics in ocean 29.1. Langmuir Cells Mixing Is Required Step95: 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers *Properties of boundary layer (BL) mixing on tracers in the ocean * 30.1. Type Is Required Step96: 30.2. Closure Order Is Required Step97: 30.3. Constant Is Required Step98: 30.4. Background Is Required Step99: 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum *Properties of boundary layer (BL) mixing on momentum in the ocean * 31.1. Type Is Required Step100: 31.2. Closure Order Is Required Step101: 31.3. Constant Is Required Step102: 31.4. Background Is Required Step103: 32. Vertical Physics --&gt; Interior Mixing --&gt; Details *Properties of interior mixing in the ocean * 32.1. Convection Type Is Required Step104: 32.2. Tide Induced Mixing Is Required Step105: 32.3. Double Diffusion Is Required Step106: 32.4. Shear Mixing Is Required Step107: 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers *Properties of interior mixing on tracers in the ocean * 33.1. Type Is Required Step108: 33.2. Constant Is Required Step109: 33.3. Profile Is Required Step110: 33.4. Background Is Required Step111: 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum *Properties of interior mixing on momentum in the ocean * 34.1. Type Is Required Step112: 34.2. Constant Is Required Step113: 34.3. Profile Is Required Step114: 34.4. Background Is Required Step115: 35. Uplow Boundaries --&gt; Free Surface Properties of free surface in ocean 35.1. Overview Is Required Step116: 35.2. Scheme Is Required Step117: 35.3. Embeded Seaice Is Required Step118: 36. Uplow Boundaries --&gt; Bottom Boundary Layer Properties of bottom boundary layer in ocean 36.1. Overview Is Required Step119: 36.2. Type Of Bbl Is Required Step120: 36.3. Lateral Mixing Coef Is Required Step121: 36.4. Sill Overflow Is Required Step122: 37. Boundary Forcing Ocean boundary forcing 37.1. Overview Is Required Step123: 37.2. Surface Pressure Is Required Step124: 37.3. Momentum Flux Correction Is Required Step125: 37.4. Tracers Flux Correction Is Required Step126: 37.5. Wave Effects Is Required Step127: 37.6. River Runoff Budget Is Required Step128: 37.7. Geothermal Heating Is Required Step129: 38. 
Boundary Forcing --&gt; Momentum --&gt; Bottom Friction Properties of momentum bottom friction in ocean 38.1. Type Is Required Step130: 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction Properties of momentum lateral friction in ocean 39.1. Type Is Required Step131: 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration Properties of sunlight penetration scheme in ocean 40.1. Scheme Is Required Step132: 40.2. Ocean Colour Is Required Step133: 40.3. Extinction Depth Is Required Step134: 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing Properties of surface fresh water forcing in ocean 41.1. From Atmopshere Is Required Step135: 41.2. From Sea Ice Is Required Step136: 41.3. Forced Mode Restoring Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'mohc', 'sandbox-1', 'ocean') Explanation: ES-DOC CMIP6 Model Properties - Ocean MIP Era: CMIP6 Institute: MOHC Source ID: SANDBOX-1 Topic: Ocean Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing. Properties: 133 (101 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:15 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Seawater Properties 3. Key Properties --&gt; Bathymetry 4. Key Properties --&gt; Nonoceanic Waters 5. Key Properties --&gt; Software Properties 6. Key Properties --&gt; Resolution 7. Key Properties --&gt; Tuning Applied 8. Key Properties --&gt; Conservation 9. Grid 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Discretisation --&gt; Horizontal 12. Timestepping Framework 13. Timestepping Framework --&gt; Tracers 14. Timestepping Framework --&gt; Baroclinic Dynamics 15. Timestepping Framework --&gt; Barotropic 16. Timestepping Framework --&gt; Vertical Physics 17. Advection 18. Advection --&gt; Momentum 19. Advection --&gt; Lateral Tracers 20. Advection --&gt; Vertical Tracers 21. Lateral Physics 22. Lateral Physics --&gt; Momentum --&gt; Operator 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff 24. Lateral Physics --&gt; Tracers 25. Lateral Physics --&gt; Tracers --&gt; Operator 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity 28. Vertical Physics 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum 32. Vertical Physics --&gt; Interior Mixing --&gt; Details 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum 35. Uplow Boundaries --&gt; Free Surface 36. Uplow Boundaries --&gt; Bottom Boundary Layer 37. Boundary Forcing 38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing 1. Key Properties Ocean key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of ocean model. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean model code (NEMO 3.6, MOM 5.0,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.model_family') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OGCM" # "slab ocean" # "mixed layer ocean" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.3. Model Family Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of ocean model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.basic_approximations') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Primitive equations" # "Non-hydrostatic" # "Boussinesq" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.4. Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Basic approximations made in the ocean. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Potential temperature" # "Conservative temperature" # "Salinity" # "U-velocity" # "V-velocity" # "W-velocity" # "SSH" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.5. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of prognostic variables in the ocean component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear" # "Wright, 1997" # "Mc Dougall et al." # "Jackett et al. 2006" # "TEOS 2010" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Seawater Properties Physical properties of seawater in ocean 2.1. Eos Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EOS for sea water End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Potential temperature" # "Conservative temperature" # TODO - please enter value(s) Explanation: 2.2. Eos Functional Temp Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Temperature used in EOS for sea water End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Practical salinity Sp" # "Absolute salinity Sa" # TODO - please enter value(s) Explanation: 2.3. Eos Functional Salt Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Salinity used in EOS for sea water End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pressure (dbars)" # "Depth (meters)" # TODO - please enter value(s) Explanation: 2.4. Eos Functional Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Depth or pressure used in EOS for sea water ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "TEOS 2010" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 2.5. Ocean Freezing Point Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 2.6. Ocean Specific Heat Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specific heat in ocean (cpocean) in J/(kg K) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 2.7. Ocean Reference Density Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Boussinesq reference density (rhozero) in kg / m3 End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Present day" # "21000 years BP" # "6000 years BP" # "LGM" # "Pliocene" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Bathymetry Properties of bathymetry in ocean 3.1. Reference Dates Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Reference date of bathymetry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.type') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 3.2. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the bathymetry fixed in time in the ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.3. Ocean Smoothing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe any smoothing or hand editing of bathymetry in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.source') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.4. Source Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe source of bathymetry in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Nonoceanic Waters Non oceanic waters treatement in ocean 4.1. Isolated Seas Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how isolated seas is performed End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.2. River Mouth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how river mouth mixing or estuaries specific treatment is performed End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Key Properties --&gt; Software Properties Software properties of ocean code 5.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Key Properties --&gt; Resolution Resolution in the ocean grid 6.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.2. Canonical Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.3. Range Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 6.4. Number Of Horizontal Gridpoints Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 6.5. Number Of Vertical Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of vertical levels resolved on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.6. Is Adaptive Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Default is False. Set true if grid resolution changes during execution. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 6.7. Thickness Level 1 Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Thickness of first surface ocean level (in meters) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Key Properties --&gt; Tuning Applied Tuning methodology for ocean component 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics of the global mean state used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Key Properties --&gt; Conservation Conservation in the ocean component 8.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Brief description of conservation methodology End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.scheme') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Energy" # "Enstrophy" # "Salt" # "Volume of ocean" # "Momentum" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 8.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Properties conserved in the ocean by the numerical schemes End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.3. Consistency Properties Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.4. Corrected Conserved Prognostic Variables Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Set of variables which are conserved by more than the numerical scheme alone. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 8.5. Was Flux Correction Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Does conservation involve flux correction ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Grid Ocean grid 9.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of grid in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Z-coordinate" # "Z*-coordinate" # "S-coordinate" # "Isopycnic - sigma 0" # "Isopycnic - sigma 2" # "Isopycnic - sigma 4" # "Isopycnic - other" # "Hybrid / Z+S" # "Hybrid / Z+isopycnic" # "Hybrid / other" # "Pressure referenced (P)" # "P*" # "Z**" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10. 
Grid --&gt; Discretisation --&gt; Vertical Properties of vertical discretisation in ocean 10.1. Coordinates Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of vertical coordinates in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 10.2. Partial Steps Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Using partial steps with Z or Z vertical coordinate in ocean ?* End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Lat-lon" # "Rotated north pole" # "Two north poles (ORCA-style)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11. Grid --&gt; Discretisation --&gt; Horizontal Type of horizontal discretisation scheme in ocean 11.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal grid type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Arakawa B-grid" # "Arakawa C-grid" # "Arakawa E-grid" # "N/a" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.2. Staggering Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal grid staggering type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Finite difference" # "Finite volumes" # "Finite elements" # "Unstructured grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.3. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12. Timestepping Framework Ocean Timestepping Framework 12.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of time stepping in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Via coupling" # "Specific treatment" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12.2. Diurnal Cycle Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Diurnal cycle type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Leap-frog + Asselin filter" # "Leap-frog + Periodic Euler" # "Predictor-corrector" # "Runge-Kutta 2" # "AM3-LF" # "Forward-backward" # "Forward operator" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13. Timestepping Framework --&gt; Tracers Properties of tracers time stepping in ocean 13.1. 
Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracers time stepping scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracers time step (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Preconditioned conjugate gradient" # "Sub cyling" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14. Timestepping Framework --&gt; Baroclinic Dynamics Baroclinic dynamics in ocean 14.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Baroclinic dynamics type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Leap-frog + Asselin filter" # "Leap-frog + Periodic Euler" # "Predictor-corrector" # "Runge-Kutta 2" # "AM3-LF" # "Forward-backward" # "Forward operator" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Baroclinic dynamics scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 14.3. Time Step Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Baroclinic time step (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "split explicit" # "implicit" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15. Timestepping Framework --&gt; Barotropic Barotropic time stepping in ocean 15.1. Splitting Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time splitting method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 15.2. Time Step Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Barotropic time step (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 16. Timestepping Framework --&gt; Vertical Physics Vertical physics time stepping in ocean 16.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Details of vertical time stepping in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17. Advection Ocean advection 17.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of advection in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Flux form" # "Vector form" # TODO - please enter value(s) Explanation: 18. Advection --&gt; Momentum Properties of lateral momemtum advection scheme in ocean 18.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of lateral momemtum advection scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 18.2. Scheme Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean momemtum advection scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.ALE') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 18.3. ALE Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Using ALE for vertical advection ? (if vertical coordinates are sigma) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 19. Advection --&gt; Lateral Tracers Properties of lateral tracer advection scheme in ocean 19.1. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral tracer advection scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 19.2. Flux Limiter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Monotonic flux limiter for lateral tracer advection scheme in ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 19.3. Effective Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Effective order of limited lateral tracer advection scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.4. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Ideal age" # "CFC 11" # "CFC 12" # "SF6" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 19.5. 
Passive Tracers Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Passive tracers advected End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.6. Passive Tracers Advection Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Is advection of passive tracers different than active ? if so, describe. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.vertical_tracers.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 20. Advection --&gt; Vertical Tracers Properties of vertical tracer advection scheme in ocean 20.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 20.2. Flux Limiter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Monotonic flux limiter for vertical tracer advection scheme in ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 21. Lateral Physics Ocean lateral physics 21.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of lateral physics in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Eddy active" # "Eddy admitting" # TODO - please enter value(s) Explanation: 21.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of transient eddy representation in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Horizontal" # "Isopycnal" # "Isoneutral" # "Geopotential" # "Iso-level" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22. Lateral Physics --&gt; Momentum --&gt; Operator Properties of lateral physics operator for momentum in ocean 22.1. Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Direction of lateral physics momemtum scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Harmonic" # "Bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22.2. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral physics momemtum scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Second order" # "Higher order" # "Flux limiter" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22.3. Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Discretisation of lateral physics momemtum scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Space varying" # "Time + space varying (Smagorinsky)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean 23.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Lateral physics momemtum eddy viscosity coeff type in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 23.2. Constant Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant, value of eddy viscosity coeff in lateral physics momemtum scheme (in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 23.3. Variable Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If space-varying, describe variations of eddy viscosity coeff in lateral physics momemtum scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 23.4. Coeff Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe background eddy viscosity coeff in lateral physics momemtum scheme (give values in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 23.5. Coeff Backscatter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there backscatter in eddy viscosity coeff in lateral physics momemtum scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 24. Lateral Physics --&gt; Tracers Properties of lateral physics for tracers in ocean 24.1. Mesoscale Closure Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there a mesoscale closure in the lateral physics tracers scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 24.2. Submesoscale Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Horizontal" # "Isopycnal" # "Isoneutral" # "Geopotential" # "Iso-level" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25. Lateral Physics --&gt; Tracers --&gt; Operator Properties of lateral physics operator for tracers in ocean 25.1. Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Direction of lateral physics tracers scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Harmonic" # "Bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.2. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral physics tracers scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Second order" # "Higher order" # "Flux limiter" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.3. Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Discretisation of lateral physics tracers scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Space varying" # "Time + space varying (Smagorinsky)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean 26.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Lateral physics tracers eddy diffusity coeff type in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 26.2. Constant Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.3. 
Variable Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 26.4. Coeff Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 26.5. Coeff Backscatter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "GM" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean 27.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV in lateral physics tracers in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 27.2. Constant Val Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If EIV scheme for tracers is constant, specify coefficient value (M2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.3. Flux Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV flux (advective or skew) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.4. Added Diffusivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV added diffusivity (constant, flow dependent or none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28. Vertical Physics Ocean Vertical Physics 28.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of vertical physics in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details Properties of vertical physics in ocean 29.1. Langmuir Cells Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there Langmuir cells mixing in upper ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure - TKE" # "Turbulent closure - KPP" # "Turbulent closure - Mellor-Yamada" # "Turbulent closure - Bulk Mixed Layer" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers *Properties of boundary layer (BL) mixing on tracers in the ocean * 30.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of boundary layer mixing for tracers in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.2. Closure Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.3. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant BL mixing of tracers, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background BL mixing of tracers coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure - TKE" # "Turbulent closure - KPP" # "Turbulent closure - Mellor-Yamada" # "Turbulent closure - Bulk Mixed Layer" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum *Properties of boundary layer (BL) mixing on momentum in the ocean * 31.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of boundary layer mixing for momentum in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 31.2. Closure Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 31.3. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant BL mixing of momentum, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 31.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background BL mixing of momentum coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Non-penetrative convective adjustment" # "Enhanced vertical diffusion" # "Included in turbulence closure" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 32. Vertical Physics --&gt; Interior Mixing --&gt; Details *Properties of interior mixing in the ocean * 32.1. Convection Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of vertical convection in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 32.2. Tide Induced Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how tide induced mixing is modelled (barotropic, baroclinic, none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 32.3. Double Diffusion Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there double diffusion End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 32.4. Shear Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there interior shear mixing End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure / TKE" # "Turbulent closure - Mellor-Yamada" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers *Properties of interior mixing on tracers in the ocean * 33.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of interior mixing for tracers in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 33.2. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant interior mixing of tracers, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 33.3. Profile Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 33.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background interior mixing of tracers coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure / TKE" # "Turbulent closure - Mellor-Yamada" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum *Properties of interior mixing on momentum in the ocean * 34.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of interior mixing for momentum in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 34.2. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant interior mixing of momentum, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 34.3. Profile Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ? 
End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 34.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background interior mixing of momentum coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 35. Uplow Boundaries --&gt; Free Surface Properties of free surface in ocean 35.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of free surface in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear implicit" # "Linear filtered" # "Linear semi-explicit" # "Non-linear implicit" # "Non-linear filtered" # "Non-linear semi-explicit" # "Fully explicit" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 35.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Free surface scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 35.3. Embeded Seaice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the sea-ice embeded in the ocean model (instead of levitating) ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 36. Uplow Boundaries --&gt; Bottom Boundary Layer Properties of bottom boundary layer in ocean 36.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of bottom boundary layer in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Diffusive" # "Acvective" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 36.2. Type Of Bbl Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of bottom boundary layer in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 36.3. Lateral Mixing Coef Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 36.4. 
Sill Overflow Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe any specific treatment of sill overflows End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37. Boundary Forcing Ocean boundary forcing 37.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of boundary forcing in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.2. Surface Pressure Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.3. Momentum Flux Correction Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.4. Tracers Flux Correction Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.5. Wave Effects Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how wave effects are modelled at ocean surface. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.6. River Runoff Budget Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how river runoff from land surface is routed to ocean and any global adjustment done. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.7. Geothermal Heating Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how geothermal heating is present at ocean bottom. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear" # "Non-linear" # "Non-linear (drag function of speed of tides)" # "Constant drag coefficient" # "None" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 38. 
Boundary Forcing --&gt; Momentum --&gt; Bottom Friction Properties of momentum bottom friction in ocean 38.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of momentum bottom friction in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Free-slip" # "No-slip" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction Properties of momentum lateral friction in ocean 39.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of momentum lateral friction in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "1 extinction depth" # "2 extinction depth" # "3 extinction depth" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration Properties of sunlight penetration scheme in ocean 40.1. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of sunlight penetration scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 40.2. Ocean Colour Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the ocean sunlight penetration scheme ocean colour dependent ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 40.3. Extinction Depth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe and list extinctions depths for sunlight penetration scheme (if applicable). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Freshwater flux" # "Virtual salt flux" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing Properties of surface fresh water forcing in ocean 41.1. From Atmopshere Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface fresh water forcing from atmos in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Freshwater flux" # "Virtual salt flux" # "Real salt flux" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 41.2. From Sea Ice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface fresh water forcing from sea-ice in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 41.3. Forced Mode Restoring Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface salinity restoring in forced mode (OMIP) End of explanation
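The ES-DOC cells above all repeat the same two-call pattern, so a short worked sketch may help. This is illustrative only: the property identifiers and the candidate choices are copied from the cells above, DOC is assumed to be the notebook's document handle created in its setup cell, and the values filled in are placeholder assumptions rather than a real model description.

```python
# Hedged example of completing a few of the properties documented above.

# ENUM property: pick exactly one of the listed valid choices.
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
DOC.set_value("Isoneutral")

# BOOLEAN property: pass True or False unquoted.
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
DOC.set_value(True)

# Free-text STRING property: a short prose description.
DOC.set_id('cmip6.ocean.lateral_physics.overview')
DOC.set_value("Harmonic isoneutral diffusion of tracers with an eddy-induced "
              "advection term; placeholder text for illustration.")
```

Optional properties (cardinality 0.1 or 0.N) can simply be left unset; required (1.1) properties need exactly one DOC.set_value call after the matching DOC.set_id.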
6,413
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: Fully-Connected Neural Nets In the previous homework you implemented a fully-connected two-layer neural network on CIFAR-10. The implementation was simple but not very modular since the loss and gradient were computed in a single monolithic function. This is manageable for a simple two-layer network, but would become impractical as we move to bigger models. Ideally we want to build networks using a more modular design so that we can implement different layer types in isolation and then snap them together into models with different architectures. In this exercise we will implement fully-connected networks using a more modular approach. For each layer we will implement a forward and a backward function. The forward function will receive inputs, weights, and other parameters and will return both an output and a cache object storing data needed for the backward pass, like this Step4: Affine layer Step5: Affine layer Step6: ReLU layer Step7: ReLU layer Step8: "Sandwich" layers There are some common patterns of layers that are frequently used in neural nets. For example, affine layers are frequently followed by a ReLU nonlinearity. To make these common patterns easy, we define several convenience layers in the file cs231n/layer_utils.py. For now take a look at the affine_relu_forward and affine_relu_backward functions, and run the following to numerically gradient check the backward pass Step9: Loss layers Step10: Two-layer network In the previous assignment you implemented a two-layer neural network in a single monolithic class. Now that you have implemented modular versions of the necessary layers, you will reimplement the two layer network using these modular implementations. Open the file cs231n/classifiers/fc_net.py and complete the implementation of the TwoLayerNet class. This class will serve as a model for the other networks you will implement in this assignment, so read through it to make sure you understand the API. You can run the cell below to test your implementation. Step11: Solver In the previous assignment, the logic for training models was coupled to the models themselves. Following a more modular design, for this assignment we have split the logic for training models into a separate class. Open the file cs231n/solver.py and read through it to familiarize yourself with the API. After doing so, use a Solver instance to train a TwoLayerNet that achieves at least 50% accuracy on the validation set. Step12: Multilayer network Next you will implement a fully-connected network with an arbitrary number of hidden layers. Read through the FullyConnectedNet class in the file cs231n/classifiers/fc_net.py. Implement the initialization, the forward pass, and the backward pass. For the moment don't worry about implementing dropout or batch normalization; we will add those features soon. Initial loss and gradient check As a sanity check, run the following to check the initial loss and to gradient check the network both with and without regularization. Do the initial losses seem reasonable? For gradient checking, you should expect to see errors around 1e-6 or less. Step13: As another sanity check, make sure you can overfit a small dataset of 50 images. First we will try a three-layer network with 100 units in each hidden layer. 
You will need to tweak the learning rate and initialization scale, but you should be able to overfit and achieve 100% training accuracy within 20 epochs. Step14: Now try to use a five-layer network with 100 units on each layer to overfit 50 training examples. Again you will have to adjust the learning rate and weight initialization, but you should be able to achieve 100% training accuracy within 20 epochs. Step15: Inline question Step16: Once you have done so, run the following to train a six-layer network with both SGD and SGD+momentum. You should see the SGD+momentum update rule converge faster. Step17: RMSProp and Adam RMSProp [1] and Adam [2] are update rules that set per-parameter learning rates by using a running average of the second moments of gradients. In the file cs231n/optim.py, implement the RMSProp update rule in the rmsprop function and implement the Adam update rule in the adam function, and check your implementations using the tests below. [1] Tijmen Tieleman and Geoffrey Hinton. "Lecture 6.5-rmsprop Step18: Once you have debugged your RMSProp and Adam implementations, run the following to train a pair of deep networks using these new update rules Step19: Train a good model! Train the best fully-connected model that you can on CIFAR-10, storing your best model in the best_model variable. We require you to get at least 50% accuracy on the validation set using a fully-connected net. If you are careful it should be possible to get accuracies above 55%, but we don't require it for this part and won't assign extra credit for doing so. Later in the assignment we will ask you to train the best convolutional network that you can on CIFAR-10, and we would prefer that you spend your effort working on convolutional nets rather than fully-connected nets. You might find it useful to complete the BatchNormalization.ipynb and Dropout.ipynb notebooks before completing this part, since those techniques can help you train powerful models. Step20: Test you model Run your best model on the validation and test sets. You should achieve above 50% accuracy on the validation set.
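The modular layer API described above comes down to returning (out, cache) on the forward pass and unpacking that cache on the backward pass. As a concrete illustration before the code below, here is one common way the affine layer can be written; it is a sketch rather than the assignment's official solution, and it assumes inputs of shape (N, d_1, ..., d_k) that are flattened to rows of length D, with weights of shape (D, M).

```python
import numpy as np

def affine_forward(x, w, b):
    # Flatten each example to a row, then compute out = x_flat.dot(w) + b.
    N = x.shape[0]
    out = x.reshape(N, -1).dot(w) + b
    cache = (x, w, b)
    return out, cache

def affine_backward(dout, cache):
    # Reverse the flattening when sending the gradient back to x.
    x, w, b = cache
    N = x.shape[0]
    dx = dout.dot(w.T).reshape(x.shape)
    dw = x.reshape(N, -1).T.dot(dout)
    db = dout.sum(axis=0)
    return dx, dw, db
```

The ReLU layer follows the same pattern with an even smaller cache (just x), and the "sandwich" layers simply chain the two caches together.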
Python Code: # As usual, a bit of setup import time import numpy as np import matplotlib.pyplot as plt from cs231n.classifiers.fc_net import * from cs231n.data_utils import get_CIFAR10_data from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array from cs231n.solver import Solver %matplotlib inline plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # for auto-reloading external modules # see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython %load_ext autoreload %autoreload 2 def rel_error(x, y): returns relative error return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y)))) # Load the (preprocessed) CIFAR10 data. data = get_CIFAR10_data() for k, v in data.iteritems(): print '%s: ' % k, v.shape Explanation: Fully-Connected Neural Nets In the previous homework you implemented a fully-connected two-layer neural network on CIFAR-10. The implementation was simple but not very modular since the loss and gradient were computed in a single monolithic function. This is manageable for a simple two-layer network, but would become impractical as we move to bigger models. Ideally we want to build networks using a more modular design so that we can implement different layer types in isolation and then snap them together into models with different architectures. In this exercise we will implement fully-connected networks using a more modular approach. For each layer we will implement a forward and a backward function. The forward function will receive inputs, weights, and other parameters and will return both an output and a cache object storing data needed for the backward pass, like this: ```python def layer_forward(x, w): Receive inputs x and weights w # Do some computations ... z = # ... some intermediate value # Do some more computations ... out = # the output cache = (x, w, z, out) # Values we need to compute gradients return out, cache ``` The backward pass will receive upstream derivatives and the cache object, and will return gradients with respect to the inputs and weights, like this: ```python def layer_backward(dout, cache): Receive derivative of loss with respect to outputs and cache, and compute derivative with respect to inputs. # Unpack cache values x, w, z, out = cache # Use values in cache to compute derivatives dx = # Derivative of loss with respect to x dw = # Derivative of loss with respect to w return dx, dw ``` After implementing a bunch of layers this way, we will be able to easily combine them to build classifiers with different architectures. In addition to implementing fully-connected networks of arbitrary depth, we will also explore different update rules for optimization, and introduce Dropout as a regularizer and Batch Normalization as a tool to more efficiently optimize deep networks. End of explanation # Test the affine_forward function num_inputs = 2 input_shape = (4, 5, 6) output_dim = 3 input_size = num_inputs * np.prod(input_shape) weight_size = output_dim * np.prod(input_shape) x = np.linspace(-0.1, 0.5, num=input_size).reshape(num_inputs, *input_shape) w = np.linspace(-0.2, 0.3, num=weight_size).reshape(np.prod(input_shape), output_dim) b = np.linspace(-0.3, 0.1, num=output_dim) out, _ = affine_forward(x, w, b) correct_out = np.array([[ 1.49834967, 1.70660132, 1.91485297], [ 3.25553199, 3.5141327, 3.77273342]]) # Compare your output with ours. The error should be around 1e-9. 
print 'Testing affine_forward function:' print 'difference: ', rel_error(out, correct_out) Explanation: Affine layer: foward Open the file cs231n/layers.py and implement the affine_forward function. Once you are done you can test your implementaion by running the following: End of explanation # Test the affine_backward function x = np.random.randn(10, 2, 3) w = np.random.randn(6, 5) b = np.random.randn(5) dout = np.random.randn(10, 5) dx_num = eval_numerical_gradient_array(lambda x: affine_forward(x, w, b)[0], x, dout) dw_num = eval_numerical_gradient_array(lambda w: affine_forward(x, w, b)[0], w, dout) db_num = eval_numerical_gradient_array(lambda b: affine_forward(x, w, b)[0], b, dout) _, cache = affine_forward(x, w, b) dx, dw, db = affine_backward(dout, cache) # The error should be around 1e-10 print 'Testing affine_backward function:' print 'dx error: ', rel_error(dx_num, dx) print 'dw error: ', rel_error(dw_num, dw) print 'db error: ', rel_error(db_num, db) Explanation: Affine layer: backward Now implement the affine_backward function and test your implementation using numeric gradient checking. End of explanation # Test the relu_forward function x = np.linspace(-0.5, 0.5, num=12).reshape(3, 4) out, _ = relu_forward(x) correct_out = np.array([[ 0., 0., 0., 0., ], [ 0., 0., 0.04545455, 0.13636364,], [ 0.22727273, 0.31818182, 0.40909091, 0.5, ]]) # Compare your output with ours. The error should be around 1e-8 print 'Testing relu_forward function:' print 'difference: ', rel_error(out, correct_out) Explanation: ReLU layer: forward Implement the forward pass for the ReLU activation function in the relu_forward function and test your implementation using the following: End of explanation x = np.random.randn(10, 10) dout = np.random.randn(*x.shape) dx_num = eval_numerical_gradient_array(lambda x: relu_forward(x)[0], x, dout) _, cache = relu_forward(x) dx = relu_backward(dout, cache) # The error should be around 1e-12 print 'Testing relu_backward function:' print 'dx error: ', rel_error(dx_num, dx) Explanation: ReLU layer: backward Now implement the backward pass for the ReLU activation function in the relu_backward function and test your implementation using numeric gradient checking: End of explanation from cs231n.layer_utils import affine_relu_forward, affine_relu_backward x = np.random.randn(2, 3, 4) w = np.random.randn(12, 10) b = np.random.randn(10) dout = np.random.randn(2, 10) out, cache = affine_relu_forward(x, w, b) dx, dw, db = affine_relu_backward(dout, cache) dx_num = eval_numerical_gradient_array(lambda x: affine_relu_forward(x, w, b)[0], x, dout) dw_num = eval_numerical_gradient_array(lambda w: affine_relu_forward(x, w, b)[0], w, dout) db_num = eval_numerical_gradient_array(lambda b: affine_relu_forward(x, w, b)[0], b, dout) print 'Testing affine_relu_forward:' print 'dx error: ', rel_error(dx_num, dx) print 'dw error: ', rel_error(dw_num, dw) print 'db error: ', rel_error(db_num, db) Explanation: "Sandwich" layers There are some common patterns of layers that are frequently used in neural nets. For example, affine layers are frequently followed by a ReLU nonlinearity. To make these common patterns easy, we define several convenience layers in the file cs231n/layer_utils.py. 
For now take a look at the affine_relu_forward and affine_relu_backward functions, and run the following to numerically gradient check the backward pass: End of explanation num_classes, num_inputs = 10, 50 x = 0.001 * np.random.randn(num_inputs, num_classes) y = np.random.randint(num_classes, size=num_inputs) dx_num = eval_numerical_gradient(lambda x: svm_loss(x, y)[0], x, verbose=False) loss, dx = svm_loss(x, y) # Test svm_loss function. Loss should be around 9 and dx error should be 1e-9 print 'Testing svm_loss:' print 'loss: ', loss print 'dx error: ', rel_error(dx_num, dx) dx_num = eval_numerical_gradient(lambda x: softmax_loss(x, y)[0], x, verbose=False) loss, dx = softmax_loss(x, y) # Test softmax_loss function. Loss should be 2.3 and dx error should be 1e-8 print '\nTesting softmax_loss:' print 'loss: ', loss print 'dx error: ', rel_error(dx_num, dx) Explanation: Loss layers: Softmax and SVM You implemented these loss functions in the last assignment, so we'll give them to you for free here. You should still make sure you understand how they work by looking at the implementations in cs231n/layers.py. You can make sure that the implementations are correct by running the following: End of explanation N, D, H, C = 3, 5, 50, 7 X = np.random.randn(N, D) y = np.random.randint(C, size=N) std = 1e-2 model = TwoLayerNet(input_dim=D, hidden_dim=H, num_classes=C, weight_scale=std) print 'Testing initialization ... ' W1_std = abs(model.params['W1'].std() - std) b1 = model.params['b1'] W2_std = abs(model.params['W2'].std() - std) b2 = model.params['b2'] assert W1_std < std / 10, 'First layer weights do not seem right' assert np.all(b1 == 0), 'First layer biases do not seem right' assert W2_std < std / 10, 'Second layer weights do not seem right' assert np.all(b2 == 0), 'Second layer biases do not seem right' print 'Testing test-time forward pass ... ' model.params['W1'] = np.linspace(-0.7, 0.3, num=D*H).reshape(D, H) model.params['b1'] = np.linspace(-0.1, 0.9, num=H) model.params['W2'] = np.linspace(-0.3, 0.4, num=H*C).reshape(H, C) model.params['b2'] = np.linspace(-0.9, 0.1, num=C) X = np.linspace(-5.5, 4.5, num=N*D).reshape(D, N).T scores = model.loss(X) correct_scores = np.asarray( [[11.53165108, 12.2917344, 13.05181771, 13.81190102, 14.57198434, 15.33206765, 16.09215096], [12.05769098, 12.74614105, 13.43459113, 14.1230412, 14.81149128, 15.49994135, 16.18839143], [12.58373087, 13.20054771, 13.81736455, 14.43418138, 15.05099822, 15.66781506, 16.2846319 ]]) scores_diff = np.abs(scores - correct_scores).sum() assert scores_diff < 1e-6, 'Problem with test-time forward pass' print 'Testing training loss (no regularization)' y = np.asarray([0, 5, 1]) loss, grads = model.loss(X, y) correct_loss = 3.4702243556 assert abs(loss - correct_loss) < 1e-10, 'Problem with training-time loss' model.reg = 1.0 loss, grads = model.loss(X, y) correct_loss = 26.5948426952 assert abs(loss - correct_loss) < 1e-10, 'Problem with regularization loss' for reg in [0.0, 0.7]: print 'Running numeric gradient check with reg = ', reg model.reg = reg loss, grads = model.loss(X, y) for name in sorted(grads): f = lambda _: model.loss(X, y)[0] grad_num = eval_numerical_gradient(f, model.params[name], verbose=False) print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])) Explanation: Two-layer network In the previous assignment you implemented a two-layer neural network in a single monolithic class. 
Now that you have implemented modular versions of the necessary layers, you will reimplement the two layer network using these modular implementations. Open the file cs231n/classifiers/fc_net.py and complete the implementation of the TwoLayerNet class. This class will serve as a model for the other networks you will implement in this assignment, so read through it to make sure you understand the API. You can run the cell below to test your implementation. End of explanation model = TwoLayerNet() solver = None ############################################################################## # TODO: Use a Solver instance to train a TwoLayerNet that achieves at least # # 50% accuracy on the validation set. # ############################################################################## pass ############################################################################## # END OF YOUR CODE # ############################################################################## # Run this cell to visualize training loss and train / val accuracy plt.subplot(2, 1, 1) plt.title('Training loss') plt.plot(solver.loss_history, 'o') plt.xlabel('Iteration') plt.subplot(2, 1, 2) plt.title('Accuracy') plt.plot(solver.train_acc_history, '-o', label='train') plt.plot(solver.val_acc_history, '-o', label='val') plt.plot([0.5] * len(solver.val_acc_history), 'k--') plt.xlabel('Epoch') plt.legend(loc='lower right') plt.gcf().set_size_inches(15, 12) plt.show() Explanation: Solver In the previous assignment, the logic for training models was coupled to the models themselves. Following a more modular design, for this assignment we have split the logic for training models into a separate class. Open the file cs231n/solver.py and read through it to familiarize yourself with the API. After doing so, use a Solver instance to train a TwoLayerNet that achieves at least 50% accuracy on the validation set. End of explanation N, D, H1, H2, C = 2, 15, 20, 30, 10 X = np.random.randn(N, D) y = np.random.randint(C, size=(N,)) for reg in [0, 3.14]: print 'Running check with reg = ', reg model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C, reg=reg, weight_scale=5e-2, dtype=np.float64) loss, grads = model.loss(X, y) print 'Initial loss: ', loss for name in sorted(grads): f = lambda _: model.loss(X, y)[0] grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5) print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])) Explanation: Multilayer network Next you will implement a fully-connected network with an arbitrary number of hidden layers. Read through the FullyConnectedNet class in the file cs231n/classifiers/fc_net.py. Implement the initialization, the forward pass, and the backward pass. For the moment don't worry about implementing dropout or batch normalization; we will add those features soon. Initial loss and gradient check As a sanity check, run the following to check the initial loss and to gradient check the network both with and without regularization. Do the initial losses seem reasonable? For gradient checking, you should expect to see errors around 1e-6 or less. End of explanation # TODO: Use a three-layer Net to overfit 50 training examples. 
num_train = 50 small_data = { 'X_train': data['X_train'][:num_train], 'y_train': data['y_train'][:num_train], 'X_val': data['X_val'], 'y_val': data['y_val'], } weight_scale = 1e-2 learning_rate = 1e-4 model = FullyConnectedNet([100, 100], weight_scale=weight_scale, dtype=np.float64) solver = Solver(model, small_data, print_every=10, num_epochs=20, batch_size=25, update_rule='sgd', optim_config={ 'learning_rate': learning_rate, } ) solver.train() plt.plot(solver.loss_history, 'o') plt.title('Training loss history') plt.xlabel('Iteration') plt.ylabel('Training loss') plt.show() Explanation: As another sanity check, make sure you can overfit a small dataset of 50 images. First we will try a three-layer network with 100 units in each hidden layer. You will need to tweak the learning rate and initialization scale, but you should be able to overfit and achieve 100% training accuracy within 20 epochs. End of explanation # TODO: Use a five-layer Net to overfit 50 training examples. num_train = 50 small_data = { 'X_train': data['X_train'][:num_train], 'y_train': data['y_train'][:num_train], 'X_val': data['X_val'], 'y_val': data['y_val'], } learning_rate = 1e-3 weight_scale = 1e-5 model = FullyConnectedNet([100, 100, 100, 100], weight_scale=weight_scale, dtype=np.float64) solver = Solver(model, small_data, print_every=10, num_epochs=20, batch_size=25, update_rule='sgd', optim_config={ 'learning_rate': learning_rate, } ) solver.train() plt.plot(solver.loss_history, 'o') plt.title('Training loss history') plt.xlabel('Iteration') plt.ylabel('Training loss') plt.show() Explanation: Now try to use a five-layer network with 100 units on each layer to overfit 50 training examples. Again you will have to adjust the learning rate and weight initialization, but you should be able to achieve 100% training accuracy within 20 epochs. End of explanation from cs231n.optim import sgd_momentum N, D = 4, 5 w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D) dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D) v = np.linspace(0.6, 0.9, num=N*D).reshape(N, D) config = {'learning_rate': 1e-3, 'velocity': v} next_w, _ = sgd_momentum(w, dw, config=config) expected_next_w = np.asarray([ [ 0.1406, 0.20738947, 0.27417895, 0.34096842, 0.40775789], [ 0.47454737, 0.54133684, 0.60812632, 0.67491579, 0.74170526], [ 0.80849474, 0.87528421, 0.94207368, 1.00886316, 1.07565263], [ 1.14244211, 1.20923158, 1.27602105, 1.34281053, 1.4096 ]]) expected_velocity = np.asarray([ [ 0.5406, 0.55475789, 0.56891579, 0.58307368, 0.59723158], [ 0.61138947, 0.62554737, 0.63970526, 0.65386316, 0.66802105], [ 0.68217895, 0.69633684, 0.71049474, 0.72465263, 0.73881053], [ 0.75296842, 0.76712632, 0.78128421, 0.79544211, 0.8096 ]]) print 'next_w error: ', rel_error(next_w, expected_next_w) print 'velocity error: ', rel_error(expected_velocity, config['velocity']) Explanation: Inline question: Did you notice anything about the comparative difficulty of training the three-layer net vs training the five layer net? Answer: [FILL THIS IN] Update rules So far we have used vanilla stochastic gradient descent (SGD) as our update rule. More sophisticated update rules can make it easier to train deep networks. We will implement a few of the most commonly used update rules and compare them to vanilla SGD. SGD+Momentum Stochastic gradient descent with momentum is a widely used update rule that tends to make deep networks converge faster than vanilla stochstic gradient descent. 
Open the file cs231n/optim.py and read the documentation at the top of the file to make sure you understand the API. Implement the SGD+momentum update rule in the function sgd_momentum and run the following to check your implementation. You should see errors less than 1e-8. End of explanation num_train = 4000 small_data = { 'X_train': data['X_train'][:num_train], 'y_train': data['y_train'][:num_train], 'X_val': data['X_val'], 'y_val': data['y_val'], } solvers = {} for update_rule in ['sgd', 'sgd_momentum']: print 'running with ', update_rule model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2) solver = Solver(model, small_data, num_epochs=5, batch_size=100, update_rule=update_rule, optim_config={ 'learning_rate': 1e-2, }, verbose=True) solvers[update_rule] = solver solver.train() print plt.subplot(3, 1, 1) plt.title('Training loss') plt.xlabel('Iteration') plt.subplot(3, 1, 2) plt.title('Training accuracy') plt.xlabel('Epoch') plt.subplot(3, 1, 3) plt.title('Validation accuracy') plt.xlabel('Epoch') for update_rule, solver in solvers.iteritems(): plt.subplot(3, 1, 1) plt.plot(solver.loss_history, 'o', label=update_rule) plt.subplot(3, 1, 2) plt.plot(solver.train_acc_history, '-o', label=update_rule) plt.subplot(3, 1, 3) plt.plot(solver.val_acc_history, '-o', label=update_rule) for i in [1, 2, 3]: plt.subplot(3, 1, i) plt.legend(loc='upper center', ncol=4) plt.gcf().set_size_inches(15, 15) plt.show() Explanation: Once you have done so, run the following to train a six-layer network with both SGD and SGD+momentum. You should see the SGD+momentum update rule converge faster. End of explanation # Test RMSProp implementation; you should see errors less than 1e-7 from cs231n.optim import rmsprop N, D = 4, 5 w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D) dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D) cache = np.linspace(0.6, 0.9, num=N*D).reshape(N, D) config = {'learning_rate': 1e-2, 'cache': cache} next_w, _ = rmsprop(w, dw, config=config) expected_next_w = np.asarray([ [-0.39223849, -0.34037513, -0.28849239, -0.23659121, -0.18467247], [-0.132737, -0.08078555, -0.02881884, 0.02316247, 0.07515774], [ 0.12716641, 0.17918792, 0.23122175, 0.28326742, 0.33532447], [ 0.38739248, 0.43947102, 0.49155973, 0.54365823, 0.59576619]]) expected_cache = np.asarray([ [ 0.5976, 0.6126277, 0.6277108, 0.64284931, 0.65804321], [ 0.67329252, 0.68859723, 0.70395734, 0.71937285, 0.73484377], [ 0.75037008, 0.7659518, 0.78158892, 0.79728144, 0.81302936], [ 0.82883269, 0.84469141, 0.86060554, 0.87657507, 0.8926 ]]) print 'next_w error: ', rel_error(expected_next_w, next_w) print 'cache error: ', rel_error(expected_cache, config['cache']) # Test Adam implementation; you should see errors around 1e-7 or less from cs231n.optim import adam N, D = 4, 5 w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D) dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D) m = np.linspace(0.6, 0.9, num=N*D).reshape(N, D) v = np.linspace(0.7, 0.5, num=N*D).reshape(N, D) config = {'learning_rate': 1e-2, 'm': m, 'v': v, 't': 5} next_w, _ = adam(w, dw, config=config) expected_next_w = np.asarray([ [-0.40094747, -0.34836187, -0.29577703, -0.24319299, -0.19060977], [-0.1380274, -0.08544591, -0.03286534, 0.01971428, 0.0722929], [ 0.1248705, 0.17744702, 0.23002243, 0.28259667, 0.33516969], [ 0.38774145, 0.44031188, 0.49288093, 0.54544852, 0.59801459]]) expected_v = np.asarray([ [ 0.69966, 0.68908382, 0.67851319, 0.66794809, 0.65738853,], [ 0.64683452, 0.63628604, 0.6257431, 0.61520571, 0.60467385,], [ 0.59414753, 0.58362676, 
0.57311152, 0.56260183, 0.55209767,], [ 0.54159906, 0.53110598, 0.52061845, 0.51013645, 0.49966, ]]) expected_m = np.asarray([ [ 0.48, 0.49947368, 0.51894737, 0.53842105, 0.55789474], [ 0.57736842, 0.59684211, 0.61631579, 0.63578947, 0.65526316], [ 0.67473684, 0.69421053, 0.71368421, 0.73315789, 0.75263158], [ 0.77210526, 0.79157895, 0.81105263, 0.83052632, 0.85 ]]) print 'next_w error: ', rel_error(expected_next_w, next_w) print 'v error: ', rel_error(expected_v, config['v']) print 'm error: ', rel_error(expected_m, config['m']) Explanation: RMSProp and Adam RMSProp [1] and Adam [2] are update rules that set per-parameter learning rates by using a running average of the second moments of gradients. In the file cs231n/optim.py, implement the RMSProp update rule in the rmsprop function and implement the Adam update rule in the adam function, and check your implementations using the tests below. [1] Tijmen Tieleman and Geoffrey Hinton. "Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude." COURSERA: Neural Networks for Machine Learning 4 (2012). [2] Diederik Kingma and Jimmy Ba, "Adam: A Method for Stochastic Optimization", ICLR 2015. End of explanation learning_rates = {'rmsprop': 1e-4, 'adam': 1e-3} for update_rule in ['adam', 'rmsprop']: print 'running with ', update_rule model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2) solver = Solver(model, small_data, num_epochs=5, batch_size=100, update_rule=update_rule, optim_config={ 'learning_rate': learning_rates[update_rule] }, verbose=True) solvers[update_rule] = solver solver.train() print plt.subplot(3, 1, 1) plt.title('Training loss') plt.xlabel('Iteration') plt.subplot(3, 1, 2) plt.title('Training accuracy') plt.xlabel('Epoch') plt.subplot(3, 1, 3) plt.title('Validation accuracy') plt.xlabel('Epoch') for update_rule, solver in solvers.iteritems(): plt.subplot(3, 1, 1) plt.plot(solver.loss_history, 'o', label=update_rule) plt.subplot(3, 1, 2) plt.plot(solver.train_acc_history, '-o', label=update_rule) plt.subplot(3, 1, 3) plt.plot(solver.val_acc_history, '-o', label=update_rule) for i in [1, 2, 3]: plt.subplot(3, 1, i) plt.legend(loc='upper center', ncol=4) plt.gcf().set_size_inches(15, 15) plt.show() Explanation: Once you have debugged your RMSProp and Adam implementations, run the following to train a pair of deep networks using these new update rules: End of explanation best_model = None ################################################################################ # TODO: Train the best FullyConnectedNet that you can on CIFAR-10. You might # # batch normalization and dropout useful. Store your best model in the # # best_model variable. # ################################################################################ pass ################################################################################ # END OF YOUR CODE # ################################################################################ Explanation: Train a good model! Train the best fully-connected model that you can on CIFAR-10, storing your best model in the best_model variable. We require you to get at least 50% accuracy on the validation set using a fully-connected net. If you are careful it should be possible to get accuracies above 55%, but we don't require it for this part and won't assign extra credit for doing so. 
Later in the assignment we will ask you to train the best convolutional network that you can on CIFAR-10, and we would prefer that you spend your effort working on convolutional nets rather than fully-connected nets. You might find it useful to complete the BatchNormalization.ipynb and Dropout.ipynb notebooks before completing this part, since those techniques can help you train powerful models. End of explanation y_test_pred = np.argmax(best_model.loss(X_test), axis=1) y_val_pred = np.argmax(best_model.loss(X_val), axis=1) print 'Validation set accuracy: ', (y_val_pred == y_val).mean() print 'Test set accuracy: ', (y_test_pred == y_test).mean() Explanation: Test your model Run your best model on the validation and test sets. You should achieve above 50% accuracy on the validation set. End of explanation
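For reference, the SGD+momentum rule exercised at the top of this section is commonly written as a velocity update followed by a weight step; the sketch below follows the same (w, dw, config) -> (next_w, config) convention as the rmsprop and adam calls shown above, but it is an assumed formulation rather than the assignment's official optim.py code, and the default momentum value is a guess.
import numpy as np

def sgd_momentum_sketch(w, dw, config=None):
    # Keep a running velocity that accumulates a decaying sum of past gradients,
    # then move the weights along that velocity.
    if config is None:
        config = {}
    config.setdefault('learning_rate', 1e-2)
    config.setdefault('momentum', 0.9)
    v = config.get('velocity', np.zeros_like(w))
    v = config['momentum'] * v - config['learning_rate'] * dw
    next_w = w + v
    config['velocity'] = v
    return next_w, config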
6,414
Given the following text description, write Python code to implement the functionality described below step by step Description: US EPA ChemView web services The documentation lists several ways of accessing data in ChemView. Step1: Getting 'chemicals' data from ChemView As a start... this downloads data for all chemicals. Let's see what we get. Step2: Data wrangling Step3: How many unique CASRNs, PMN numbers? Step4: What's in 'synonyms'? Step5: How many 'synonyms' for each entry? Step6: Do the data objects in synonyms all have the same attributes? Step7: All of the synonyms fields contain a variable number of objects with a uniform set of fields. Tell me more about those items with PMN numbers... Step8: Are there any that have CASRN too? ... No.
Python Code: URIBASE = 'http://java.epa.gov/chemview/' Explanation: US EPA ChemView web services The documentation lists several ways of accessing data in ChemView. End of explanation uri = URIBASE + 'chemicals' r = requests.get(uri, headers = {'Accept': 'application/json, */*'}) j = json.loads(r.text) print(len(j)) df = DataFrame(j) df.tail() # Save this dataset so that I don't have to re-request it again later. df.to_pickle('../data/chemicals.pickle') df = pd.read_pickle('../data/chemicals.pickle') Explanation: Getting 'chemicals' data from ChemView As a start... this downloads data for all chemicals. Let's see what we get. End of explanation # want to interpret 'None' as NaN def scrub_None(x): s = str(x).strip() if s == 'None' or s == '': return np.nan else: return s for c in list(df.columns)[:-1]: df[c] = df[c].apply(scrub_None) df.tail() Explanation: Data wrangling End of explanation # CASRNS len(df.casNo.value_counts()) # PMN numbers len(df.pmnNo.value_counts()) Explanation: How many unique CASRNs, PMN numbers? End of explanation DataFrame(df.loc[4,'synonyms']) Explanation: What's in 'synonyms'? End of explanation df.synonyms.apply(len).describe() Explanation: How many 'synonyms' for each entry? End of explanation def getfields(x): k = set() for d in x: j = set(d.keys()) k = k | j return ','.join(sorted(k)) df.synonyms.apply(getfields).head() len(df.synonyms.apply(getfields).value_counts()) Explanation: Do the data objects in synonyms all have the same attributes? End of explanation pmns = df.loc[df.pmnNo.notnull()] pmns.head() Explanation: All of the synonyms fields contain a variable number of objects with a uniform set of fields. Tell me more about those items with PMN numbers... End of explanation len(pmns.casNo.dropna()) Explanation: Are there any that have CASRN too? ... No. End of explanation
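A small follow-up that is not in the original notebook: since each 'synonyms' entry is a list of objects with a uniform set of fields, they can be flattened into one long table for easier counting and filtering. The sketch below only assumes the df built above and does not assume any particular field names inside the synonym objects.
rows = []
for _, rec in df.iterrows():
    for syn in rec['synonyms']:
        entry = dict(syn)                # keep whatever fields the service returned
        entry['_casNo'] = rec['casNo']   # remember which chemical the synonym came from
        rows.append(entry)
synonyms_long = DataFrame(rows)
print(len(synonyms_long))
synonyms_long.head()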
6,415
Given the following text description, write Python code to implement the functionality described below step by step Description: 3. Imagined movement In this tutorial we will look at imagined movement. Our movement is controlled in the motor cortex where there is an increased level of mu activity (8–12 Hz) when we perform movements. This is accompanied by a reduction of this mu activity in specific regions that deal with the limb that is currently moving. This decrease is called Event Related Desynchronization (ERD). By measuring the amount of mu activity at different locations on the motor cortex, we can determine which limb the subject is moving. Through mirror neurons, this effect also occurs when the subject is not actually moving his limbs, but merely imagining it. Credits The CSP code was originally written by Boris Reuderink of the Donders Institute for Brain, Cognition and Behavior. It is part of his Python EEG toolbox Step1: Executing the code cell below will load the data Step2: Now we have the data in the following python variables Step3: This is a large recording Step4: Since the feature we're looking for (a decrease in $\mu$-activity) is a frequency feature, lets plot the PSD of the trials in a similar manner as with the SSVEP data. The code below defines a function that computes the PSD for each trial (we're going to need it again later on) Step5: The function below plots the PSDs that are calculated with the above function. Since plotting it for 118 channels will clutter the display, it takes the indices of the desired channels as input, as well as some metadata to decorate the plot. Step6: Lets put the plot_psd() function to use and plot three channels Step7: A spike of mu activity can be seen on each channel for both classes. At the right hemisphere, the mu for the left hand movement is lower than for the right hand movement due to the ERD. At the left electrode, the mu for the right hand movement is reduced and at the central electrode the mu activity is about equal for both classes. This is in line with the theory that the left hand is controlled by the right hemisphere and the feet are controlled centrally. Classifying the data We will use a machine learning algorithm to construct a model that can distinguish between the right hand and foot movement of this subject. In order to do this we need to Step8: Plotting the PSD of the resulting trials_filt shows the suppression of frequencies outside the passband of the filter Step9: As a feature for the classifier, we will use the logarithm of the variance of each channel. The function below calculates this Step10: Below is a function to visualize the logvar of each channel as a bar chart Step11: We see that most channels show a small difference in the log-var of the signal between the two classes. The next step is to go from 118 channels to only a few channel mixtures. The CSP algorithm calculates mixtures of channels that are designed to maximize the difference in variation between two classes. These mixures are called spatial filters. Step12: To see the result of the CSP algorithm, we plot the log-var like we did before Step13: Instead of 118 channels, we now have 118 mixtures of channels, called components. They are the result of 118 spatial filters applied to the data. The first filters maximize the variation of the first class, while minimizing the variation of the second. The last filters maximize the variation of the second class, while minimizing the variation of the first. This is also visible in a PSD plot. 
The code below plots the PSD for the first and last components as well as one in the middle Step14: In order to see how well we can differentiate between the two classes, a scatter plot is a useful tool. Here both classes are plotted on a 2-dimensional plane Step15: We will apply a linear classifier to this data. A linear classifier can be thought of as drawing a line in the above plot to separate the two classes. To determine the class for a new trial, we just check on which side of the line the trial would be if plotted as above. The data is split into a train and a test set. The classifier will fit a model (in this case, a straight line) on the training set and use this model to make predictions about the test set (see on which side of the line each trial in the test set falls). Note that the CSP algorithm is part of the model, so for fairness sake it should be calculated using only the training data. Step16: For a classifier the Linear Discriminant Analysis (LDA) algorithm will be used. It fits a gaussian distribution to each class, characterized by the mean and covariance, and determines an optimal separating plane to divide the two. This plane is defined as $r = W_0 \cdot X_0 + W_1 \cdot X_1 + \ldots + W_n \cdot X_n - b$, where $r$ is the classifier output, $W$ are called the feature weights, $X$ are the features of the trial, $n$ is the dimensionality of the data and $b$ is called the offset. In our case we have 2 dimensional data, so the separating plane will be a line Step17: Training the LDA using the training data gives us $W$ and $b$ Step18: It can be informative to recreate the scatter plot and overlay the decision boundary as determined by the LDA classifier. The decision boundary is the line for which the classifier output is exactly 0. The scatterplot used $X_0$ as $x$-axis and $X_1$ as $y$-axis. To find the function $y = f(x)$ describing the decision boundary, we set $r$ to 0 and solve for $y$ in the equation of the separating plane Step19: The code below plots the boundary with the test data on which we will apply the classifier. You will see the classifier is going to make some mistakes. Step20: Now the LDA is constructed and fitted to the training data. We can now apply it to the test data. The results are presented as a confusion matrix
Python Code: %pylab inline Explanation: 3. Imagined movement In this tutorial we will look at imagined movement. Our movement is controlled in the motor cortex where there is an increased level of mu activity (8–12 Hz) when we perform movements. This is accompanied by a reduction of this mu activity in specific regions that deal with the limb that is currently moving. This decrease is called Event Related Desynchronization (ERD). By measuring the amount of mu activity at different locations on the motor cortex, we can determine which limb the subject is moving. Through mirror neurons, this effect also occurs when the subject is not actually moving his limbs, but merely imagining it. Credits The CSP code was originally written by Boris Reuderink of the Donders Institute for Brain, Cognition and Behavior. It is part of his Python EEG toolbox: https://github.com/breuderink/eegtools Inspiration for this tutorial also came from the excellent code example given in the book chapter: Arnaud Delorme, Christian Kothe, Andrey Vankov, Nima Bigdely-Shamlo, Robert Oostenveld, Thorsten Zander, and Scott Makeig. MATLAB-Based Tools for BCI Research, In (B+H)CI: The Human in Brain-Computer Interfaces and the Brain in Human-Computer Interaction. Desney S. Tan and Anton Nijholt (eds.), 2009, 241-259, http://dx.doi.org/10.1007/978-1-84996-272-8 About the data The dataset for this tutorial is provided by the fourth BCI competition. Before you continue with this tutorial, please, go to http://www.bbci.de/competition/iv/#download and fill in your name and email address. You don't need to actually download the data, since it is installed in the virtual server already, but this way, the wonderful organizers of the competition get notified that their data is still being used and by whom. Description of the data End of explanation import numpy as np import scipy.io m = scipy.io.loadmat('data/BCICIV_calib_ds1d.mat', struct_as_record=True) # SciPy.io.loadmat does not deal well with Matlab structures, resulting in lots of # extra dimensions in the arrays. 
This makes the code a bit more cluttered sample_rate = m['nfo']['fs'][0][0][0][0] EEG = m['cnt'].T nchannels, nsamples = EEG.shape channel_names = [s[0] for s in m['nfo']['clab'][0][0][0]] event_onsets = m['mrk'][0][0][0] event_codes = m['mrk'][0][0][1] labels = np.zeros((1, nsamples), int) labels[0, event_onsets] = event_codes cl_lab = [s[0] for s in m['nfo']['classes'][0][0][0]] cl1 = cl_lab[0] cl2 = cl_lab[1] nclasses = len(cl_lab) nevents = len(event_onsets) Explanation: Executing the code cell below will load the data: End of explanation # Print some information print('Shape of EEG:', EEG.shape) print('Sample rate:', sample_rate) print('Number of channels:', nchannels) print('Channel names:', channel_names) print('Number of events:', len(event_onsets)) print('Event codes:', np.unique(event_codes)) print('Class labels:', cl_lab) print('Number of classes:', nclasses) Explanation: Now we have the data in the following python variables: End of explanation # Dictionary to store the trials in, each class gets an entry trials = {} # The time window (in samples) to extract for each trial, here 0.5 -- 2.5 seconds win = np.arange(int(0.5*sample_rate), int(2.5*sample_rate)) # Length of the time window nsamples = len(win) # Loop over the classes (right, foot) for cl, code in zip(cl_lab, np.unique(event_codes)): # Extract the onsets for the class cl_onsets = event_onsets[event_codes == code] # Allocate memory for the trials trials[cl] = np.zeros((nchannels, nsamples, len(cl_onsets))) # Extract each trial for i, onset in enumerate(cl_onsets): trials[cl][:,:,i] = EEG[:, win+onset] # Some information about the dimensionality of the data (channels x time x trials) print('Shape of trials[cl1]:', trials[cl1].shape) print('Shape of trials[cl2]:', trials[cl2].shape) Explanation: This is a large recording: 59 electrodes where used, spread across the entire scalp. The subject was given a cue and then imagined either right hand movement or the movement of his feet. As can be seen from the Homunculus, foot movement is controlled at the center of the motor cortex (which makes it hard to distinguish left from right foot), while hand movement is controlled more lateral. Plotting the data The code below cuts trials for the two classes and should look familiar if you've completed the previous tutorials. Trials are cut in the interval [0.5–2.5 s] after the onset of the cue. End of explanation from matplotlib import mlab def psd(trials): ''' Calculates for each trial the Power Spectral Density (PSD). Parameters ---------- trials : 3d-array (channels x samples x trials) The EEG signal Returns ------- trial_PSD : 3d-array (channels x PSD x trials) the PSD for each trial. freqs : list of floats Yhe frequencies for which the PSD was computed (useful for plotting later) ''' ntrials = trials.shape[2] trials_PSD = np.zeros((nchannels, 101, ntrials)) # Iterate over trials and channels for trial in range(ntrials): for ch in range(nchannels): # Calculate the PSD (PSD, freqs) = mlab.psd(trials[ch,:,trial], NFFT=int(nsamples), Fs=sample_rate) trials_PSD[ch, :, trial] = PSD.ravel() return trials_PSD, freqs # Apply the function psd_r, freqs = psd(trials[cl1]) psd_f, freqs = psd(trials[cl2]) trials_PSD = {cl1: psd_r, cl2: psd_f} Explanation: Since the feature we're looking for (a decrease in $\mu$-activity) is a frequency feature, lets plot the PSD of the trials in a similar manner as with the SSVEP data. 
The code below defines a function that computes the PSD for each trial (we're going to need it again later on): End of explanation import matplotlib.pyplot as plt def plot_psd(trials_PSD, freqs, chan_ind, chan_lab=None, maxy=None): ''' Plots PSD data calculated with psd(). Parameters ---------- trials : 3d-array The PSD data, as returned by psd() freqs : list of floats The frequencies for which the PSD is defined, as returned by psd() chan_ind : list of integers The indices of the channels to plot chan_lab : list of strings (optional) List of names for each channel maxy : float (optional) Limit the y-axis to this value ''' plt.figure(figsize=(12,5)) nchans = len(chan_ind) # Maximum of 3 plots per row nrows = int(np.ceil(nchans / 3)) ncols = min(3, nchans) # Enumerate over the channels for i,ch in enumerate(chan_ind): # Figure out which subplot to draw to plt.subplot(nrows,ncols,i+1) # Plot the PSD for each class for cl in trials.keys(): plt.plot(freqs, np.mean(trials_PSD[cl][ch,:,:], axis=1), label=cl) # All plot decoration below... plt.xlim(1,30) if maxy != None: plt.ylim(0,maxy) plt.grid() plt.xlabel('Frequency (Hz)') if chan_lab == None: plt.title('Channel %d' % (ch+1)) else: plt.title(chan_lab[i]) plt.legend() plt.tight_layout() Explanation: The function below plots the PSDs that are calculated with the above function. Since plotting it for 118 channels will clutter the display, it takes the indices of the desired channels as input, as well as some metadata to decorate the plot. End of explanation plot_psd( trials_PSD, freqs, [channel_names.index(ch) for ch in ['C3', 'Cz', 'C4']], chan_lab=['left', 'center', 'right'], maxy=500 ) Explanation: Lets put the plot_psd() function to use and plot three channels: C3: Central, left Cz: Central, central C4: Central, right End of explanation import scipy.signal def bandpass(trials, lo, hi, sample_rate): ''' Designs and applies a bandpass filter to the signal. Parameters ---------- trials : 3d-array (channels x samples x trials) The EEGsignal lo : float Lower frequency bound (in Hz) hi : float Upper frequency bound (in Hz) sample_rate : float Sample rate of the signal (in Hz) Returns ------- trials_filt : 3d-array (channels x samples x trials) The bandpassed signal ''' # The iirfilter() function takes the filter order: higher numbers mean a sharper frequency cutoff, # but the resulting signal might be shifted in time, lower numbers mean a soft frequency cutoff, # but the resulting signal less distorted in time. It also takes the lower and upper frequency bounds # to pass, divided by the niquist frequency, which is the sample rate divided by 2: a, b = scipy.signal.iirfilter(6, [lo/(sample_rate/2.0), hi/(sample_rate/2.0)]) # Applying the filter to each trial ntrials = trials.shape[2] trials_filt = np.zeros((nchannels, nsamples, ntrials)) for i in range(ntrials): trials_filt[:,:,i] = scipy.signal.filtfilt(a, b, trials[:,:,i], axis=1) return trials_filt # Apply the function trials_filt = {cl1: bandpass(trials[cl1], 8, 15, sample_rate), cl2: bandpass(trials[cl2], 8, 15, sample_rate)} Explanation: A spike of mu activity can be seen on each channel for both classes. At the right hemisphere, the mu for the left hand movement is lower than for the right hand movement due to the ERD. At the left electrode, the mu for the right hand movement is reduced and at the central electrode the mu activity is about equal for both classes. This is in line with the theory that the left hand is controlled by the right hemisphere and the feet are controlled centrally. 
Classifying the data We will use a machine learning algorithm to construct a model that can distinguish between the right hand and foot movement of this subject. In order to do this we need to: find a way to quantify the amount of mu activity present in a trial make a model that describes expected values of mu activity for each class finally test this model on some unseen data to see if it can predict the correct class label We will follow a classic BCI design by Blankertz et al. [1] where they use the logarithm of the variance of the signal in a certain frequency band as a feature for the classifier. [1] Blankertz, B., Dornhege, G., Krauledat, M., Müller, K.-R., & Curio, G. (2007). The non-invasive Berlin Brain-Computer Interface: fast acquisition of effective performance in untrained subjects. NeuroImage, 37(2), 539–550. doi:10.1016/j.neuroimage.2007.01.051 The script below designs a band pass filter using scipy.signal.irrfilter that will strip away frequencies outside the 8--15Hz window. The filter is applied to all trials: End of explanation psd_r, freqs = psd(trials_filt[cl1]) psd_f, freqs = psd(trials_filt[cl2]) trials_PSD = {cl1: psd_r, cl2: psd_f} plot_psd( trials_PSD, freqs, [channel_names.index(ch) for ch in ['C3', 'Cz', 'C4']], chan_lab=['left', 'center', 'right'], maxy=300 ) Explanation: Plotting the PSD of the resulting trials_filt shows the suppression of frequencies outside the passband of the filter: End of explanation # Calculate the log(var) of the trials def logvar(trials): ''' Calculate the log-var of each channel. Parameters ---------- trials : 3d-array (channels x samples x trials) The EEG signal. Returns ------- logvar - 2d-array (channels x trials) For each channel the logvar of the signal ''' return np.log(np.var(trials, axis=1)) # Apply the function trials_logvar = {cl1: logvar(trials_filt[cl1]), cl2: logvar(trials_filt[cl2])} Explanation: As a feature for the classifier, we will use the logarithm of the variance of each channel. The function below calculates this: End of explanation def plot_logvar(trials): ''' Plots the log-var of each channel/component. arguments: trials - Dictionary containing the trials (log-vars x trials) for 2 classes. ''' plt.figure(figsize=(12,5)) x0 = np.arange(nchannels) x1 = np.arange(nchannels) + 0.4 y0 = np.mean(trials[cl1], axis=1) y1 = np.mean(trials[cl2], axis=1) plt.bar(x0, y0, width=0.5, color='b') plt.bar(x1, y1, width=0.4, color='r') plt.xlim(-0.5, nchannels+0.5) plt.gca().yaxis.grid(True) plt.title('log-var of each channel/component') plt.xlabel('channels/components') plt.ylabel('log-var') plt.legend(cl_lab) # Plot the log-vars plot_logvar(trials_logvar) Explanation: Below is a function to visualize the logvar of each channel as a bar chart: End of explanation from numpy import linalg def cov(trials): ''' Calculate the covariance for each trial and return their average ''' ntrials = trials.shape[2] covs = [ trials[:,:,i].dot(trials[:,:,i].T) / nsamples for i in range(ntrials) ] return np.mean(covs, axis=0) def whitening(sigma): ''' Calculate a whitening matrix for covariance matrix sigma. ''' U, l, _ = linalg.svd(sigma) return U.dot( np.diag(l ** -0.5) ) def csp(trials_r, trials_f): ''' Calculate the CSP transformation matrix W. 
arguments: trials_r - Array (channels x samples x trials) containing right hand movement trials trials_f - Array (channels x samples x trials) containing foot movement trials returns: Mixing matrix W ''' cov_r = cov(trials_r) cov_f = cov(trials_f) P = whitening(cov_r + cov_f) B, _, _ = linalg.svd( P.T.dot(cov_f).dot(P) ) W = P.dot(B) return W def apply_mix(W, trials): ''' Apply a mixing matrix to each trial (basically multiply W with the EEG signal matrix)''' ntrials = trials.shape[2] trials_csp = np.zeros((nchannels, nsamples, ntrials)) for i in range(ntrials): trials_csp[:,:,i] = W.T.dot(trials[:,:,i]) return trials_csp # Apply the functions W = csp(trials_filt[cl1], trials_filt[cl2]) trials_csp = {cl1: apply_mix(W, trials_filt[cl1]), cl2: apply_mix(W, trials_filt[cl2])} Explanation: We see that most channels show a small difference in the log-var of the signal between the two classes. The next step is to go from 118 channels to only a few channel mixtures. The CSP algorithm calculates mixtures of channels that are designed to maximize the difference in variation between two classes. These mixures are called spatial filters. End of explanation trials_logvar = {cl1: logvar(trials_csp[cl1]), cl2: logvar(trials_csp[cl2])} plot_logvar(trials_logvar) Explanation: To see the result of the CSP algorithm, we plot the log-var like we did before: End of explanation psd_r, freqs = psd(trials_csp[cl1]) psd_f, freqs = psd(trials_csp[cl2]) trials_PSD = {cl1: psd_r, cl2: psd_f} plot_psd(trials_PSD, freqs, [0,28,-1], chan_lab=['first component', 'middle component', 'last component'], maxy=0.75 ) Explanation: Instead of 118 channels, we now have 118 mixtures of channels, called components. They are the result of 118 spatial filters applied to the data. The first filters maximize the variation of the first class, while minimizing the variation of the second. The last filters maximize the variation of the second class, while minimizing the variation of the first. This is also visible in a PSD plot. The code below plots the PSD for the first and last components as well as one in the middle: End of explanation def plot_scatter(left, foot): plt.figure() plt.scatter(left[0,:], left[-1,:], color='b') plt.scatter(foot[0,:], foot[-1,:], color='r') plt.xlabel('Last component') plt.ylabel('First component') plt.legend(cl_lab) plot_scatter(trials_logvar[cl1], trials_logvar[cl2]) Explanation: In order to see how well we can differentiate between the two classes, a scatter plot is a useful tool. Here both classes are plotted on a 2-dimensional plane: the x-axis is the first CSP component, the y-axis is the last. 
End of explanation # Percentage of trials to use for training (50-50 split here) train_percentage = 0.5 # Calculate the number of trials for each class the above percentage boils down to ntrain_r = int(trials_filt[cl1].shape[2] * train_percentage) ntrain_f = int(trials_filt[cl2].shape[2] * train_percentage) ntest_r = trials_filt[cl1].shape[2] - ntrain_r ntest_f = trials_filt[cl2].shape[2] - ntrain_f # Splitting the frequency filtered signal into a train and test set train = {cl1: trials_filt[cl1][:,:,:ntrain_r], cl2: trials_filt[cl2][:,:,:ntrain_f]} test = {cl1: trials_filt[cl1][:,:,ntrain_r:], cl2: trials_filt[cl2][:,:,ntrain_f:]} # Train the CSP on the training set only W = csp(train[cl1], train[cl2]) # Apply the CSP on both the training and test set train[cl1] = apply_mix(W, train[cl1]) train[cl2] = apply_mix(W, train[cl2]) test[cl1] = apply_mix(W, test[cl1]) test[cl2] = apply_mix(W, test[cl2]) # Select only the first and last components for classification comp = np.array([0,-1]) train[cl1] = train[cl1][comp,:,:] train[cl2] = train[cl2][comp,:,:] test[cl1] = test[cl1][comp,:,:] test[cl2] = test[cl2][comp,:,:] # Calculate the log-var train[cl1] = logvar(train[cl1]) train[cl2] = logvar(train[cl2]) test[cl1] = logvar(test[cl1]) test[cl2] = logvar(test[cl2]) Explanation: We will apply a linear classifier to this data. A linear classifier can be thought of as drawing a line in the above plot to separate the two classes. To determine the class for a new trial, we just check on which side of the line the trial would be if plotted as above. The data is split into a train and a test set. The classifier will fit a model (in this case, a straight line) on the training set and use this model to make predictions about the test set (see on which side of the line each trial in the test set falls). Note that the CSP algorithm is part of the model, so for fairness sake it should be calculated using only the training data. End of explanation def train_lda(class1, class2): ''' Trains the LDA algorithm. arguments: class1 - An array (observations x features) for class 1 class2 - An array (observations x features) for class 2 returns: The projection matrix W The offset b ''' nclasses = 2 nclass1 = class1.shape[0] nclass2 = class2.shape[0] # Class priors: in this case, we have an equal number of training # examples for each class, so both priors are 0.5 prior1 = nclass1 / float(nclass1 + nclass2) prior2 = nclass2 / float(nclass1 + nclass1) mean1 = np.mean(class1, axis=0) mean2 = np.mean(class2, axis=0) class1_centered = class1 - mean1 class2_centered = class2 - mean2 # Calculate the covariance between the features cov1 = class1_centered.T.dot(class1_centered) / (nclass1 - nclasses) cov2 = class2_centered.T.dot(class2_centered) / (nclass2 - nclasses) W = (mean2 - mean1).dot(np.linalg.pinv(prior1*cov1 + prior2*cov2)) b = (prior1*mean1 + prior2*mean2).dot(W) return (W,b) def apply_lda(test, W, b): ''' Applies a previously trained LDA to new data. 
arguments: test - An array (features x trials) containing the data W - The project matrix W as calculated by train_lda() b - The offsets b as calculated by train_lda() returns: A list containing a classlabel for each trial ''' ntrials = test.shape[1] prediction = [] for i in range(ntrials): # The line below is a generalization for: # result = W[0] * test[0,i] + W[1] * test[1,i] - b result = W.dot(test[:,i]) - b if result <= 0: prediction.append(1) else: prediction.append(2) return np.array(prediction) Explanation: For a classifier the Linear Discriminant Analysis (LDA) algorithm will be used. It fits a gaussian distribution to each class, characterized by the mean and covariance, and determines an optimal separating plane to divide the two. This plane is defined as $r = W_0 \cdot X_0 + W_1 \cdot X_1 + \ldots + W_n \cdot X_n - b$, where $r$ is the classifier output, $W$ are called the feature weights, $X$ are the features of the trial, $n$ is the dimensionality of the data and $b$ is called the offset. In our case we have 2 dimensional data, so the separating plane will be a line: $r = W_0 \cdot X_0 + W_1 \cdot X_1 - b$. To determine a class label for an unseen trial, we can calculate whether the result is positive or negative. End of explanation W,b = train_lda(train[cl1].T, train[cl2].T) print('W:', W) print('b:', b) Explanation: Training the LDA using the training data gives us $W$ and $b$: End of explanation # Scatterplot like before plot_scatter(train[cl1], train[cl2]) title('Training data') # Calculate decision boundary (x,y) x = np.arange(-5, 1, 0.1) y = (b - W[0]*x) / W[1] # Plot the decision boundary plt.plot(x,y, linestyle='--', linewidth=2, color='k') plt.xlim(-5, 1) plt.ylim(-2.2, 1) Explanation: It can be informative to recreate the scatter plot and overlay the decision boundary as determined by the LDA classifier. The decision boundary is the line for which the classifier output is exactly 0. The scatterplot used $X_0$ as $x$-axis and $X_1$ as $y$-axis. To find the function $y = f(x)$ describing the decision boundary, we set $r$ to 0 and solve for $y$ in the equation of the separating plane: <div style="width:600px"> $$\begin{align} W_0 \cdot X_0 + W_1 \cdot X_1 - b &= r &&\text{the original equation} \\\ W_0 \cdot x + W_1 \cdot y - b &= 0 &&\text{filling in $X_0=x$, $X_1=y$ and $r=0$} \\\ W_0 \cdot x + W_1 \cdot y &= b &&\text{solving for $y$}\\\ W_1 \cdot y &= b - W_0 \cdot x \\\ \\\ y &= \frac{b - W_0 \cdot x}{W_1} \end{align}$$ </div> We first plot the decision boundary with the training data used to calculate it: End of explanation plot_scatter(test[cl1], test[cl2]) title('Test data') plt.plot(x,y, linestyle='--', linewidth=2, color='k') plt.xlim(-5, 1) plt.ylim(-2.2, 1) Explanation: The code below plots the boundary with the test data on which we will apply the classifier. You will see the classifier is going to make some mistakes. End of explanation # Print confusion matrix conf = np.array([ [(apply_lda(test[cl1], W, b) == 1).sum(), (apply_lda(test[cl2], W, b) == 1).sum()], [(apply_lda(test[cl1], W, b) == 2).sum(), (apply_lda(test[cl2], W, b) == 2).sum()], ]) print('Confusion matrix:') print(conf) print() print('Accuracy: %.3f' % (np.sum(np.diag(conf)) / float(np.sum(conf)))) Explanation: Now the LDA is constructed and fitted to the training data. We can now apply it to the test data. 
The results are presented as a confusion matrix: <table> <tr><td></td><td colspan='2' style="font-weight:bold">True labels →</td></tr> <tr><td style="font-weight:bold">↓ Predicted labels</td><td>Right</td><td>Foot</td></tr> <tr><td>Right</td><td></td><td></td></tr> <tr><td>Foot</td><td></td><td></td></tr> </table> The number at the diagonal will be trials that were correctly classified, any trials incorrectly classified (either a false positive or false negative) will be in the corners. End of explanation
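To make the whole chain explicit, the helper below strings together the functions defined in this notebook (bandpass filtering, the CSP mixing matrix, log-variance of the two outer components, and the trained LDA). It is a sketch: because this notebook reuses the name W for both the CSP matrix and the LDA weights, the helper takes them as separate, explicitly named arguments (W_csp, W_lda and b_lda are assumed names, not variables defined above).
def classify_trials(raw_trials, W_csp, W_lda, b_lda):
    # raw_trials: channels x samples x trials of unfiltered EEG, cut like the trials above
    filt = bandpass(raw_trials, 8, 15, sample_rate)   # same 8-15 Hz band as before
    mixed = apply_mix(W_csp, filt)                    # CSP spatial filtering
    feats = logvar(mixed[[0, -1], :, :])              # first and last CSP component
    return apply_lda(feats, W_lda, b_lda)             # 1 = first class (cl1), 2 = second class (cl2)

# Usage sketch: keep the two matrices under distinct names when training, e.g.
# W_csp = csp(train[cl1], train[cl2]) and (W_lda, b_lda) = train_lda(...), then call
# classify_trials(new_raw_trials, W_csp, W_lda, b_lda) on freshly cut trials.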
6,416
Given the following text description, write Python code to implement the functionality described below step by step Description: Chapter 6 Temporal-Difference Learning DP, TD, and Monte Carlo methods all use some variation of generalized policy iteration Step1: 6.2 Advantages of TD Prediction Methods TD Step2: 6.5 Q-learning Step3: 6.6 Expected Sarsa use expeteced value, how likely each action is under the current policy. \begin{align} Q(S_t, A_t) & \gets Q(S_t, A_t) + \alpha \left [ R_{t+1} + \gamma \color{blue}{\mathbb{E}[Q(S_{t+1}, A_{t+1}) \mid S_{t+1}]} - Q(S_t,, A_t) \right ] \ & \gets Q(S_t, A_t) + \alpha \left [ R_{t+1} + \gamma \color{blue}{\sum_a \pi(a \mid S_{t+1}) Q(S_{t+1}, a)} - Q(S_t,, A_t) \right ] \end{align} con
Python Code: Image('./res/fig6_1.png') Image('./res/TD_0.png') Explanation: Chapter 6 Temporal-Difference Learning DP, TD, and Monte Carlo methods all use some variation of generalized policy iteration: primarily differences in their approaches to the prediction problem. 6.1 TD Prediction constant-$\alpha$ MC: $V(S_t) \gets V(S_t) + \alpha \underbrace{\left [ G_t - V(S_t) \right ]}{= \sum{k=t}^{T-1} \gamma^{k-1} \theta_k}$ \begin{align} v_\pi(s) &\doteq \mathbb{E}\pi [ G_t \mid S_s = s] \qquad \text{Monte Carlo} \ &= \mathbb{E}\pi [ R_{t+1} + \gamma \color{blue}{v_\pi(S_{t+1})} \mid S_t = s ] \quad \text{DP} \end{align} one-step TD, or TD(0): $V(S_t) \gets V(S_t) + \alpha \left [ \underbrace{R_{t+1} + \gamma \color{blue}{V(S_{t+1})} - V(S_t)}_{\text{TD error: } \theta_t} \right ]$ TD: samples the expected values and uses the current estimate $V$ instead of the true $v_\pi$. End of explanation Image('./res/sarsa.png') Explanation: 6.2 Advantages of TD Prediction Methods TD: they learn a guess from a guess - they boostrap. advantage: over DP: TD do not require a model of the environment, of its reward and next-state probability distributions. over Monte Carlo: TD are naturally implemented in an online, fully incremental fashion. TD: guarantee convergence. In practice, TD methods have usually been found that converge faster than constant-$\alpha$ MC methods on stochastic tasks. 6.3 Optimality of TD(0) batch updating: updates are made only after processing each complete batch of training data until the value function converges. Batch Monte Carlo methods: always find the estimates that minimize mean-squared error on the training set. Batch TD(0): always find the estimates that would be exactly correct for the maximum-likelihood model of the Markov process. 6.4 Sarsa: On-policy TD Control $Q(S_t, A_t) \gets Q(S_t, A_t) + \alpha \left [ R_{t+1} + \gamma Q(S_{t+1}, A_{t+1}) - Q(S_t,, A_t) \right ]$ End of explanation Image('./res/q_learn_off_policy.png') Explanation: 6.5 Q-learning: Off-policy TD Control $Q(S_t, A_t) \gets Q(S_t, A_t) + \alpha \left [ R_{t+1} + \gamma \color{blue}{\max_a Q(S_{t+1}, a)} - Q(S_t,, A_t) \right ]$ End of explanation Image('./res/double_learn.png') Explanation: 6.6 Expected Sarsa use expeteced value, how likely each action is under the current policy. \begin{align} Q(S_t, A_t) & \gets Q(S_t, A_t) + \alpha \left [ R_{t+1} + \gamma \color{blue}{\mathbb{E}[Q(S_{t+1}, A_{t+1}) \mid S_{t+1}]} - Q(S_t,, A_t) \right ] \ & \gets Q(S_t, A_t) + \alpha \left [ R_{t+1} + \gamma \color{blue}{\sum_a \pi(a \mid S_{t+1}) Q(S_{t+1}, a)} - Q(S_t,, A_t) \right ] \end{align} con: additional computational cost. pro: eliminate the variance due to the random seleciton of $A_{t+1}$. 6.7 Maximization Bias and Double Learning maximization bias: + a maximum over estimated values => an estimate of the maximum value => significant positive bias. root of problem: using the same samples (plays) both to determine the maximizing action and to estimate its value. => divide the plays in two sets ($Q_1, Q_2$) and use them to learn two indepedent estimates. (double learning) $Q_1(S_t, A_t) \gets Q_1(S_t, A_t) + \alpha \left [ R_{t+1} + \gamma Q_2 \left( S_{t+1}, \operatorname{argmax}a Q_1(S{t+1}, a) \right) - Q_1(S_t,, A_t) \right ]$ End of explanation
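The chapter above is summarized with equations and figures only; as a concrete, intentionally generic illustration of the Q-learning rule in section 6.5, here is a minimal tabular update. The integer state/action encoding and the toy table sizes are assumptions for illustration, not tied to any particular environment.
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    # Off-policy TD control: bootstrap from the greedy value of the next state.
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q

Q = np.zeros((4, 2))                                  # toy table: 4 states x 2 actions
Q = q_learning_update(Q, s=0, a=1, r=1.0, s_next=2)
print(Q)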
6,417
Given the following text description, write Python code to implement the functionality described below step by step Description: Embedding [embeddings.Embedding.0] input_dim 5, output_dim 3, input_length=7, mask_zero=False Step1: [embeddings.Embedding.1] input_dim 20, output_dim 5, input_length=10, mask_zero=True Step2: [embeddings.Embedding.2] input_dim 33, output_dim 2, input_length=5, mask_zero=False Step3: export for Keras.js tests
Python Code: input_dim = 5 output_dim = 3 input_length = 7 data_in_shape = (input_length,) emb = Embedding(input_dim, output_dim, input_length=input_length, mask_zero=False) layer_0 = Input(shape=data_in_shape) layer_1 = emb(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) weights = [] for i, w in enumerate(model.get_weights()): np.random.seed(1200 + i) weights.append(2 * np.random.random(w.shape) - 1) model.set_weights(weights) print('W shape:', weights[0].shape) print('W:', format_decimal(weights[0].ravel().tolist())) arr_in = np.random.randint(0, input_dim - 1, data_in_shape) data_in = arr_in.ravel().tolist() print('') print('in shape:', data_in_shape) print('in:', data_in) result = model.predict(np.array([arr_in])) data_out_shape = result[0].shape data_out = format_decimal(result[0].ravel().tolist()) print('out shape:', data_out_shape) print('out:', data_out) DATA['embeddings.Embedding.0'] = { 'input': {'data': data_in, 'shape': data_in_shape}, 'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights], 'expected': {'data': data_out, 'shape': data_out_shape} } Explanation: Embedding [embeddings.Embedding.0] input_dim 5, output_dim 3, input_length=7, mask_zero=False End of explanation input_dim = 20 output_dim = 5 input_length = 10 data_in_shape = (input_length,) emb = Embedding(input_dim, output_dim, input_length=input_length, mask_zero=True) layer_0 = Input(shape=data_in_shape) layer_1 = emb(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) weights = [] for i, w in enumerate(model.get_weights()): np.random.seed(1210 + i) weights.append(2 * np.random.random(w.shape) - 1) model.set_weights(weights) print('W shape:', weights[0].shape) print('W:', format_decimal(weights[0].ravel().tolist())) arr_in = np.random.randint(0, input_dim - 1, data_in_shape) data_in = arr_in.ravel().tolist() print('') print('in shape:', data_in_shape) print('in:', data_in) result = model.predict(np.array([arr_in])) data_out_shape = result[0].shape data_out = format_decimal(result[0].ravel().tolist()) print('out shape:', data_out_shape) print('out:', data_out) DATA['embeddings.Embedding.1'] = { 'input': {'data': data_in, 'shape': data_in_shape}, 'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights], 'expected': {'data': data_out, 'shape': data_out_shape} } Explanation: [embeddings.Embedding.1] input_dim 20, output_dim 5, input_length=10, mask_zero=True End of explanation input_dim = 33 output_dim = 2 input_length = 5 data_in_shape = (input_length,) emb = Embedding(input_dim, output_dim, input_length=input_length, mask_zero=False) layer_0 = Input(shape=data_in_shape) layer_1 = emb(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) weights = [] for i, w in enumerate(model.get_weights()): np.random.seed(1220 + i) weights.append(2 * np.random.random(w.shape) - 1) model.set_weights(weights) print('W shape:', weights[0].shape) print('W:', format_decimal(weights[0].ravel().tolist())) arr_in = np.random.randint(0, input_dim - 1, data_in_shape) data_in = arr_in.ravel().tolist() print('') print('in shape:', data_in_shape) print('in:', data_in) result = model.predict(np.array([arr_in])) data_out_shape = result[0].shape data_out = format_decimal(result[0].ravel().tolist()) print('out shape:', data_out_shape) print('out:', data_out) DATA['embeddings.Embedding.2'] = { 'input': 
{'data': data_in, 'shape': data_in_shape}, 'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights], 'expected': {'data': data_out, 'shape': data_out_shape} } Explanation: [embeddings.Embedding.2] input_dim 33, output_dim 2, input_length=5, mask_zero=False End of explanation import os filename = '../../../test/data/layers/embeddings/Embedding.json' if not os.path.exists(os.path.dirname(filename)): os.makedirs(os.path.dirname(filename)) with open(filename, 'w') as f: json.dump(DATA, f) print(json.dumps(DATA)) Explanation: export for Keras.js tests End of explanation
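As an extra sanity check (not part of the exported fixtures above), an Embedding layer with fixed weights is just a row lookup, so the Keras output from the last block can be reproduced with plain NumPy; weights, arr_in and result refer to the variables left in scope by the embeddings.Embedding.2 cell.
expected_lookup = weights[0][arr_in]            # one weight row per input index -> (input_length, output_dim)
print(np.allclose(expected_lookup, result[0]))  # should print True (up to float tolerance)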
6,418
Given the following text description, write Python code to implement the functionality described below step by step Description: Access TTree in Python using PyROOT <hr style="border-top-width Step1: Open a file which is located on the web. No type is to be specified for "f". Step2: Loop over the TTree called "events" in the file. It is accessed with the dot operator. Same holds for the access to the branches
Python Code: import ROOT Explanation: Access TTree in Python using PyROOT <hr style="border-top-width: 4px; border-top-color: #34609b;"> End of explanation f = ROOT.TFile.Open("https://root.cern.ch/files/summer_student_tutorial_tracks.root") Explanation: Open a file which is located on the web. No type is to be specified for "f". End of explanation maxPt=-1 for event in f.events: maxPt=-1 for track in event.tracks: pt = track.Pt() if pt > maxPt: maxPt = pt if event.evtNum % 100 == 0: print "Processing event number %i" %event.evtNum print "Max pt is %f" %maxPt Explanation: Loop over the TTree called "events" in the file. It is accessed with the dot operator. Same holds for the access to the branches: no need to set them up - they are just accessed by name, again with the dot operator. End of explanation
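A small extension of the loop above, not in the original notebook: accumulate the per-event maximum track pT into a histogram while iterating. Only standard ROOT classes (TH1F, TCanvas) are used; the binning and axis range are arbitrary choices.
h_maxpt = ROOT.TH1F("h_maxpt", "Max track p_{T} per event;p_{T};events", 100, 0.0, 100.0)
for event in f.events:
    maxPt = -1
    for track in event.tracks:
        maxPt = max(maxPt, track.Pt())
    if maxPt > 0:
        h_maxpt.Fill(maxPt)
c = ROOT.TCanvas()
h_maxpt.Draw()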
6,419
Given the following text description, write Python code to implement the functionality described below step by step Description: Experiment 6 Step1: Data Loading Step2: Plotting
Python Code: # import the modules import GPy import csv import numpy as np import cPickle as pickle import scipy.stats as stats import sklearn.metrics as metrics import GPy.plotting.Tango as Tango from matplotlib import pyplot as plt %matplotlib notebook Explanation: Experiment 6: TRo Journal In this experiment, the generalization of cloth models to unseen postures of the mannequin is verified. The evaluation is performed using RMSE, NRMSE, Pearson correlation as the parameters. In this notebook, the generalizability to unseen postures is evaluated through plots. End of explanation kinectExt = 'C' kinectDim = 7500 kinectKey = 'Cloud' mocapDim = 8 mocapExt = 'T' mocapKey = 'TopCoord' dataTypes = ['train','test'] parameters = ['rmse','nrmse','corr'] nShr = 4 nPos = 6 nTypes = 2 nParam = 3 dims = [kinectDim,mocapDim] keys = [kinectKey,mocapKey] metricRes = np.zeros((nTypes,nParam,nShr,nPos)) for dT,dataType in enumerate(dataTypes): for sInd in range(nShr): for pInd in range(nPos): res = pickle.load(open('../Results/Exp6/MRDRes%d%d.p' % (sInd+1,pInd+1),'rb')) for p,parameter in zip(range(len(parameters)),parameters): metricRes[dT,p,sInd,pInd] = res[dataType][parameter].mean() pickle.dump(metricRes,open('Result/metricResults.p','wb')) Explanation: Data Loading End of explanation def plotScales(train, test, options, a=0, n=6, legendFlag=True, legendLoc=2): fSize = 20 fig = plt.figure() ax = fig.add_subplot(111) x1 = np.arange(0.75,n+0.75) x2 = np.arange(1.25,n+1.25) c1 = Tango.colorsHex['mediumBlue'] c2 = Tango.colorsHex['mediumRed'] ax.bar(x1, height=train.mean(axis=a), width=0.4, align='center', color=c1, edgecolor='k', linewidth=1.2) ax.bar(x2, height=test.mean(axis=a), width=0.4, align='center', color=c2, edgecolor='k', linewidth=1.2) # setting the bar plot parameters ax.set_xlim(0.0, n+1.0) ax.set_ylim(options['ylimits']) ax.tick_params(axis='both', labelsize=fSize) ax.set_xticks(xrange(1, n+1)) ax.set_title(options['title'], fontsize=fSize) ax.set_ylabel(options['ylabel'], fontsize=fSize) ax.set_xlabel(options['xlabel'], fontsize=fSize) if legendFlag: ax.legend(options['legend'], fontsize=fSize-5, loc=legendLoc) plt.tight_layout() return ax def plotScales1(test, options, a=0, n=6): fSize = 20 fig = plt.figure() ax = fig.add_subplot(111) x = np.arange(1,n+1) c = Tango.colorsHex['mediumRed'] ax.bar(x, height=test.mean(axis=a), width=0.4, align='center', color=c, edgecolor='k', linewidth=1.2) # setting the bar plot parameters ax.set_xlim(0.0, n+1.0) ax.set_ylim(options['ylimits']) ax.tick_params(axis='both', labelsize=fSize) ax.set_xticks(xrange(1, n+1)) ax.set_title(options['title'], fontsize=fSize) ax.set_ylabel(options['ylabel'], fontsize=fSize) ax.set_xlabel(options['xlabel'], fontsize=fSize) #ax.legend(options['legend'], fontsize=fSize, loc=2) plt.tight_layout() return ax options = {'title':'','ylabel':'RMSE','xlabel':'Posture Index', 'legend':['Train','Test'], 'ylimits':[0.0,0.8]} plotScales(metricRes[0,0,:,:], metricRes[1,0,:,:], options) plt.savefig('Result/posRMSE.pdf', format='pdf') options = {'title':'','ylabel':'NRMSE','xlabel':'Posture Index', 'legend':['Train','Test'], 'ylimits':[0.0,0.21]} plotScales(metricRes[0,1,:,:], metricRes[1,1,:,:], options) plt.savefig('Result/posNRMSE.pdf', format='pdf') options = {'title':'','ylabel':'Pearson Corr.','xlabel':'Posture Index', 'legend':['Train','Test'], 'ylimits':[0.85,1.0]} plotScales(metricRes[0,2,:,:], metricRes[1,2,:,:], options, legendFlag=False) plt.savefig('Result/posCorr.pdf', format='pdf') Explanation: Plotting End of 
explanation
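The 'rmse', 'nrmse' and 'corr' values plotted above arrive precomputed inside the pickled result files; purely for clarity, they would typically be obtained along the lines of the sketch below. y_true and y_pred are placeholders for ground-truth and predicted test trajectories (the arrays actually used to build the pickles are not shown in this notebook), and dividing RMSE by the range of y_true is just one common normalization choice.
def compute_metrics(y_true, y_pred):
    # metrics and stats are the sklearn.metrics and scipy.stats modules imported above
    rmse = np.sqrt(metrics.mean_squared_error(y_true, y_pred))
    nrmse = rmse / (y_true.max() - y_true.min())
    corr, _ = stats.pearsonr(y_true.ravel(), y_pred.ravel())
    return {'rmse': rmse, 'nrmse': nrmse, 'corr': corr}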
6,420
Given the following text description, write Python code to implement the functionality described below step by step Description: Linear time series analysis - AR/MA models Lorenzo Biasi (3529646), Julius Vernie (3502879) Task 1. AR(p) models. 1.1 Step1: We can see that simulating the data as an AR(1) model is not effective in giving us anything similar the aquired data. This is due to the fact that we made the wrong assumptions when we computed the coefficients of our data. Our data is in fact clearly not a stationary process and in particular cannot be from an AR(1) model alone, as there is a linear trend in time. The meaning of the slope that we computed shows that successive data points are strongly correlated. Step2: 1.2 Before estimating the coefficients of the AR(1) model we remove the linear trend in time, thus making it resemble more closely the model with which we are trying to analyze it. Step3: This time we obtain different coefficients, that we can use to simulate the data and see if they give us a similar result the real data. Step4: In the next plot we can see that our predicted values have an error that decays exponentially the further we try to make a prediction. By the time it arrives to 5 time steps of distance it equal to the variance. Step5: 1.4 By plotting the data we can already see that this cannot be a simple AR model. The data seems divided in 2 parts with very few data points in the middle. Step6: We tried to simulate the data with these coefficients but it is clearly uneffective Step7: By plotting the return plot we can better understand what is going on. The data can be divided in two parts. We can see that successive data is always around one of this two poles. If it were a real AR model we would expect something like the return plots shown below this one. Step8: We can see that in the autocorelation plot the trend is exponential, which is what we would expect, but it is taking too long to decay for being a an AR model with small value of $p$ Step9: Task 2. Autocorrelation and partial autocorrelation. 2.1 Step10: For computing the $\hat p$ for the AR model we predicted the parameters $a_i$ for various AR(5). We find that for p = 6 we do not have any correlation between previous values and future values. Step11: For the MA $\hat q$ could be around 4-6
Python Code: import numpy as np import matplotlib.pyplot as plt import scipy.io as sio from sklearn import datasets, linear_model %matplotlib inline def set_data(p, x): temp = x.flatten() n = len(temp[p:]) x_T = temp[p:].reshape((n, 1)) X_p = np.ones((n, p + 1)) for i in range(1, p + 1): X_p[:, i] = temp[i - 1: i - 1 + n] return X_p, x_T def AR(coeff, init, T): offset = coeff[0] mult_coef = np.flip(coeff, 0)[:-1] series = np.zeros(T) for k, x_i in enumerate(init): series[k] = x_i for i in range(k + 1, T): series[i] = np.sum(mult_coef * series[i - k - 1:i]) + np.random.normal() + offset return series def estimated_autocorrelation(x): n = len(x) mu, sigma2 = np.mean(x), np.var(x) r = np.correlate(x - mu, x - mu, mode = 'full')[-n:] result = r/(sigma2 * (np.arange(n, 0, -1))) return result def test_AR(x, coef, N): x = x.flatten() offset = coef[0] slope = coef[1] ave_err = np.empty((len(x) - N, N)) x_temp = np.empty(N) for i in range(len(x) - N): x_temp[0] = x[i] * slope + offset for j in range(N -1): x_temp[j + 1] = x_temp[j] * slope + offset ave_err[i, :] = (x_temp - x[i:i+N])**2 return ave_err x = sio.loadmat('Tut2_file1.mat')['x'].flatten() plt.plot(x * 2, ',') plt.xlabel('time') plt.ylabel('x') X_p, x_T = set_data(1, x) model = linear_model.LinearRegression() model.fit(X_p, x_T) model.coef_ Explanation: Linear time series analysis - AR/MA models Lorenzo Biasi (3529646), Julius Vernie (3502879) Task 1. AR(p) models. 1.1 End of explanation x_1 = AR(np.append(model.coef_, 0), [0, x[0]], 50001) plt.plot(x_1[1:], ',') plt.xlabel('time') plt.ylabel('x') Explanation: We can see that simulating the data as an AR(1) model is not effective in giving us anything similar the aquired data. This is due to the fact that we made the wrong assumptions when we computed the coefficients of our data. Our data is in fact clearly not a stationary process and in particular cannot be from an AR(1) model alone, as there is a linear trend in time. The meaning of the slope that we computed shows that successive data points are strongly correlated. End of explanation rgr = linear_model.LinearRegression() x = x.reshape((len(x)), 1) t = np.arange(len(x)).reshape(x.shape) rgr.fit(t, x) x_star= x - rgr.predict(t) plt.plot(x_star.flatten(), ',') plt.xlabel('time') plt.ylabel('x') Explanation: 1.2 Before estimating the coefficients of the AR(1) model we remove the linear trend in time, thus making it resemble more closely the model with which we are trying to analyze it. End of explanation X_p, x_T = set_data(1, x_star) model.fit(X_p, x_T) model.coef_ x_1 = AR(np.append(model.coef_[0], 0), [0, x_star[0]], 50000) plt.plot(x_1, ',') plt.xlabel('time') plt.ylabel('x') plt.plot(x_star[1:], x_star[:-1], ',') plt.xlabel(r'x$_{t - 1}$') plt.ylabel(r'x$_{t}$') Explanation: This time we obtain different coefficients, that we can use to simulate the data and see if they give us a similar result the real data. End of explanation err = test_AR(x_star, model.coef_[0], 10) np.sum(err, axis=0) / err.shape[0] plt.plot(np.sum(err, axis=0) / err.shape[0], 'o', label='Error') plt.plot([0, 10.], np.ones(2)* np.var(x_star), 'r', label='Variance') plt.grid(linestyle='dotted') plt.xlabel(r'$\Delta t$') plt.ylabel('Error') Explanation: In the next plot we can see that our predicted values have an error that decays exponentially the further we try to make a prediction. By the time it arrives to 5 time steps of distance it equal to the variance. 
End of explanation x = sio.loadmat('Tut2_file2.mat')['x'].flatten() plt.plot(x, ',') plt.xlabel('time') plt.ylabel('x') np.mean(x) X_p, x_T = set_data(1, x) model = linear_model.LinearRegression() model.fit(X_p, x_T) model.coef_ Explanation: 1.4 By plotting the data we can already see that this cannot be a simple AR model. The data seems divided in 2 parts with very few data points in the middle. End of explanation x_1 = AR(model.coef_[0], x[:1], 50001) plt.plot(x_1[1:], ',') plt.xlabel('time') plt.ylabel('x') Explanation: We tried to simulate the data with these coefficients but it is clearly uneffective End of explanation plt.plot(x[1:], x[:-1], ',') plt.xlabel(r'x$_{t - 1}$') plt.ylabel(r'x$_{t}$') plt.plot(x_star[1:], x_star[:-1], ',') plt.xlabel(r'x$_{t - 1}$') plt.ylabel(r'x$_{t}$') Explanation: By plotting the return plot we can better understand what is going on. The data can be divided in two parts. We can see that successive data is always around one of this two poles. If it were a real AR model we would expect something like the return plots shown below this one. End of explanation plt.plot(estimated_autocorrelation(x)[:200]) plt.xlabel(r'$\Delta$t') plt.ylabel(r'$\rho$') plt.plot(estimated_autocorrelation(x_1.flatten())[:20]) plt.xlabel(r'$\Delta$t') plt.ylabel(r'$\rho$') Explanation: We can see that in the autocorelation plot the trend is exponential, which is what we would expect, but it is taking too long to decay for being a an AR model with small value of $p$ End of explanation data = sio.loadmat('Tut2_file3.mat') x_AR = data['x_AR'].flatten() x_MA = data['x_MA'].flatten() Explanation: Task 2. Autocorrelation and partial autocorrelation. 2.1 End of explanation for i in range(3,6): X_p, x_T = set_data(i, x_AR) model = linear_model.LinearRegression() model.fit(X_p, x_T) plt.plot(estimated_autocorrelation((x_T - model.predict(X_p)).flatten())[:20], \ label='AR(' + str(i) + ')') plt.xlabel(r'$\Delta$t') plt.ylabel(r'$\rho$') plt.legend() Explanation: For computing the $\hat p$ for the AR model we predicted the parameters $a_i$ for various AR(5). We find that for p = 6 we do not have any correlation between previous values and future values. End of explanation plt.plot(estimated_autocorrelation(x_MA)[:20]) plt.xlabel(r'$\Delta$t') plt.ylabel(r'$\rho$') Explanation: For the MA $\hat q$ could be around 4-6 End of explanation
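For completeness, here is a sketch of an MA(q) simulator analogous to the AR() helper defined at the top of this notebook; it was not part of the original submission. The convention assumed is x_t = mu + eps_t + sum_i b_i * eps_{t-i}, stated explicitly because sign and ordering conventions differ between texts.
def MA(b, mu, T):
    # b: array of coefficients b_1..b_q applied to the q most recent past noise terms
    q = len(b)
    eps = np.random.normal(size=T + q)
    series = np.empty(T)
    for t in range(T):
        # eps[t + q] is the current shock; eps[t + q - 1] ... eps[t] are the q previous ones
        series[t] = mu + eps[t + q] + np.sum(b * eps[t:t + q][::-1])
    return series

# e.g. x_sim = MA(np.array([0.8, 0.4]), 0.0, 5000) draws from an MA(2) process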
6,421
Given the following text description, write Python code to implement the functionality described below step by step Description: Permutation T-test on sensor data One tests if the signal significantly deviates from 0 during a fixed time window of interest. Here computation is performed on MNE sample dataset between 40 and 60 ms. Step1: Set parameters Step2: View location of significantly active sensors
Python Code: # Authors: Alexandre Gramfort <[email protected]> # # License: BSD (3-clause) import numpy as np import mne from mne import io from mne.stats import permutation_t_test from mne.datasets import sample print(__doc__) Explanation: Permutation T-test on sensor data One tests if the signal significantly deviates from 0 during a fixed time window of interest. Here computation is performed on MNE sample dataset between 40 and 60 ms. End of explanation data_path = sample.data_path() raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif' event_id = 1 tmin = -0.2 tmax = 0.5 # Setup for reading the raw data raw = io.read_raw_fif(raw_fname) events = mne.read_events(event_fname) # pick MEG Gradiometers picks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=False, eog=True, exclude='bads') epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks, baseline=(None, 0), reject=dict(grad=4000e-13, eog=150e-6)) data = epochs.get_data() times = epochs.times temporal_mask = np.logical_and(0.04 <= times, times <= 0.06) data = np.mean(data[:, :, temporal_mask], axis=2) n_permutations = 50000 T0, p_values, H0 = permutation_t_test(data, n_permutations, n_jobs=1) significant_sensors = picks[p_values <= 0.05] significant_sensors_names = [raw.ch_names[k] for k in significant_sensors] print("Number of significant sensors : %d" % len(significant_sensors)) print("Sensors names : %s" % significant_sensors_names) Explanation: Set parameters End of explanation evoked = mne.EvokedArray(-np.log10(p_values)[:, np.newaxis], epochs.info, tmin=0.) # Extract mask and indices of active sensors in the layout stats_picks = mne.pick_channels(evoked.ch_names, significant_sensors_names) mask = p_values[:, np.newaxis] <= 0.05 evoked.plot_topomap(ch_type='grad', times=[0], scalings=1, time_format=None, cmap='Reds', vmin=0., vmax=np.max, units='-log10(p)', cbar_fmt='-%0.1f', mask=mask, size=3, show_names=lambda x: x[4:] + ' ' * 20, time_unit='s') Explanation: View location of significantly active sensors End of explanation
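As a rough, NumPy-only illustration of what the permutation_t_test call above is doing conceptually (sign-flip permutations of a one-sample t-test with a max-statistic correction across sensors), here is a simplified sketch; it is not the MNE implementation and omits details such as exact tail handling and the +1 correction some formulations add to the p-value.
import numpy as np

def signflip_perm_ttest(X, n_permutations=1000, seed=0):
    # X: observations x sensors; test whether each sensor's mean differs from 0.
    rng = np.random.RandomState(seed)
    n_obs = X.shape[0]

    def tstats(data):
        return data.mean(0) / (data.std(0, ddof=1) / np.sqrt(n_obs))

    t_obs = tstats(X)
    max_null = np.empty(n_permutations)
    for i in range(n_permutations):
        flips = rng.choice([-1.0, 1.0], size=(n_obs, 1))   # random sign per observation
        max_null[i] = np.abs(tstats(X * flips)).max()      # max over sensors gives FWER control
    p_values = (np.abs(t_obs)[None, :] <= max_null[:, None]).mean(0)
    return t_obs, p_values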
6,422
Given the following text description, write Python code to implement the functionality described below step by step Description: Introduction In group efforts, there is sometimes the impression that there are those who work, and those who talk. A naive question to ask is whether or not the people that tend to talk a lot actually get any work done. This is an obviously and purposefully obtuse question with an interesting answer. We can use BigBang's newest feature, git data collection, to compare all of the contributors to a project, in this case Scipy, based on their email and git commit activity. The hypothesis in this case was that people who commit a lot will also tend to email a lot, and vice versa, since their involvement in a project would usually require them to do both. This hypothesis was proven to be correct. However, the data reveals many more interesting phenomenon. Step1: Entity Resolution Git and Email data comes from two different datatables. To observe a single person's git and email data, we need a way to identify that person across the two different datatables. To solve this problem, I wrote an entity resolution client that will parse a Pandas dataframe and add a new column to it called "Person-ID" which gives each row an ID that represents one unique contributor. A person may go by many names ("Robert Smith, Rob B. Smith, Bob S., etc.) and use many different emails. However, this client will read through these data tables in one pass and consolidate these identities based on a few strategies. Step2: After we've run entity resolution on our dataframes, we split the dataframe into slices based on time. So for the entire life-span of the project, we will have NUM_SLICES different segments to analyze. We will be able to look at the git and email data up until that certain date, which can let us analyze these changes over time. Step3: Merging Data Tables Now we want to merge these two tables based on their Person-ID values. Basically, we first count how many emails / commits a certain contributor had in a certain slice. We then join all the rows with the same Person-ID to each other, so that we have the number of emails and the number of commits of each person in one row per person in one consolidated dataframe. We then delete all the rows where both of these values aren't defined. These represent people for whom we have git data but not mail data, or vice versa. Step4: Coloring We now assign a float value [0 --> 1] to each person. This isn't neccesary, but can let us graph these changes in a scatter plot and give each contributor a unique color to differentiate them. This will help us track an individual as their dot travels over time. Step5: Here we graph our data. Each dot represents a unique contributor's number of emails and commits. As you'll notice, the graph is on a log-log scale. Step6: Animations Below this point, you'll find the code for generating animations. This can take a long time (~30 mins) for a large number of slices. However, the pre-generated videos are below. The first video just shows all the contributors over time without unique colors. The second video has a color for each contributor, but also contains a Matplotlib bug where the minimum x and y values for the axes is not followed. There is a lot to observe. As to our hypothesis, it's clear that people who email more commit more. In our static graph, we could see many contributors on the x-axis -- people who only email -- but this dynamic graph allows us to see the truth. 
While it may seem that these are people who only email, the video shows that even these contributors eventually start committing. Most committers don't really get past 10 commits without starting to email the rest of the project, for pretty clear reasons. However, the emailers can "get away with" exclusively emailing for longer, but eventually they too start to commit. In general, not only is there a positive correlation, but there is also a general trend of everyone edging toward a stable and relatively equal ratio of commits to emails.
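Editor's aside: the correlation claim above can be put into a number with a couple of lines of pandas/numpy. This is only a sketch; the merged per-person table of "Emails" and "Commits" counts built in the steps below is assumed, and the small DataFrame here is a stand-in for it.
import numpy as np
import pandas as pd

merged = pd.DataFrame({"Emails": [3, 40, 7, 120, 15], "Commits": [1, 22, 4, 80, 9]})  # placeholder data
r = np.corrcoef(np.log1p(merged["Emails"]), np.log1p(merged["Commits"]))[0, 1]
print("log-scale Pearson correlation: %.2f" % r)   # values near 1 support the hypothesis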
Python Code: # Load the raw email and git data url = "http://mail.python.org/pipermail/scipy-dev/" arx = Archive(url,archive_dir="../archives") mailInfo = arx.data repo = repo_loader.get_repo("bigbang") gitInfo = repo.commit_data; Explanation: Introduction In group efforts, there is sometimes the impression that there are those who work, and those who talk. A naive question to ask is whether or not the people that tend to talk a lot actually get any work done. This is an obviously and purposefully obtuse question with an interesting answer. We can use BigBang's newest feature, git data collection, to compare all of the contributors to a project, in this case Scipy, based on their email and git commit activity. The hypothesis in this case was that people who commit a lot will also tend to email a lot, and vice versa, since their involvement in a project would usually require them to do both. This hypothesis was proven to be correct. However, the data reveals many more interesting phenomenon. End of explanation entityResolve = bigbang.entity_resolution.entityResolve mailAct = mailInfo.apply(entityResolve, axis=1, args =("From",None)) gitAct = gitInfo.apply(entityResolve, axis=1, args =("Committer Email","Committer Name")) Explanation: Entity Resolution Git and Email data comes from two different datatables. To observe a single person's git and email data, we need a way to identify that person across the two different datatables. To solve this problem, I wrote an entity resolution client that will parse a Pandas dataframe and add a new column to it called "Person-ID" which gives each row an ID that represents one unique contributor. A person may go by many names ("Robert Smith, Rob B. Smith, Bob S., etc.) and use many different emails. However, this client will read through these data tables in one pass and consolidate these identities based on a few strategies. End of explanation NUM_SLICES = 1500 # Number of animation frames. More means more loading time mailAct.sort("Date") gitAct.sort("Time") def getSlices(df, numSlices): sliceSize = len(df)/numSlices slices = [] for i in range(1, numSlices + 1): start = 0 next = (i)*sliceSize; next = min(next, len(df)-1) # make sure we don't go out of bounds slice = df.iloc[start:next] slices.append(slice) return slices mailSlices = getSlices(mailAct, NUM_SLICES) gitSlices = getSlices(gitAct, NUM_SLICES) Explanation: After we've run entity resolution on our dataframes, we split the dataframe into slices based on time. So for the entire life-span of the project, we will have NUM_SLICES different segments to analyze. We will be able to look at the git and email data up until that certain date, which can let us analyze these changes over time. End of explanation def processSlices(slices) : for i in range(len(slices)): slice = slices[i] slice = slice.groupby("Person-ID").size() slice.sort() slices[i] = slice def concatSlices(slicesA, slicesB) : # assumes they have the same number of slices # First is emails, second is commits ansSlices = [] for i in range(len(slicesA)): sliceA = slicesA[i] sliceB = slicesB[i] ans = pd.concat({"Emails" : sliceA, "Commits": sliceB}, axis = 1) ans = ans[pd.notnull(ans["Emails"])] ans = ans[pd.notnull(ans["Commits"])] ansSlices.append(ans); return ansSlices processSlices(mailSlices) processSlices(gitSlices) finalSlices = concatSlices(mailSlices, gitSlices) Explanation: Merging Data Tables Now we want to merge these two tables based on their Person-ID values. 
Basically, we first count how many emails / commits a certain contributor had in a certain slice. We then join all the rows with the same Person-ID to each other, so that we have the number of emails and the number of commits of each person in one row per person in one consolidated dataframe. We then delete all the rows where both of these values aren't defined. These represent people for whom we have git data but not mail data, or vice versa. End of explanation def idToFloat(id): return id*1.0/400.0; for i in range(len(finalSlices)): slice = finalSlices[i] toSet = [] for i in slice.index.values: i = idToFloat(i) toSet.append(i) slice["color"] = toSet Explanation: Coloring We now assign a float value [0 --> 1] to each person. This isn't neccesary, but can let us graph these changes in a scatter plot and give each contributor a unique color to differentiate them. This will help us track an individual as their dot travels over time. End of explanation data = finalSlices[len(finalSlices)-1] # Will break if there are 0 slices fig = plt.figure(figsize=(8, 8)) d = data x = d["Emails"] y = d["Commits"] c = d["color"] ax = plt.axes(xscale='log', yscale = 'log') plt.scatter(x, y, c=c, s=75) plt.ylim(0, 10000) plt.xlim(0, 10000) ax.set_xlabel("Emails") ax.set_ylabel("Commits") plt.plot([0, 1000],[0, 1000], linewidth=5) plt.show() Explanation: Here we graph our data. Each dot represents a unique contributor's number of emails and commits. As you'll notice, the graph is on a log-log scale. End of explanation from IPython.display import YouTubeVideo display(YouTubeVideo('GCcYJBq1Bcc', width=500, height=500)) display(YouTubeVideo('uP-z4jJqxmI', width=500, height=500)) fig = plt.figure(figsize=(8, 8)) a = finalSlices[0] print type(plt) ax = plt.axes(xscale='log', yscale = 'log') graph, = ax.plot(x ,y, 'o', c='red', alpha=1, markeredgecolor='none') ax.set_xlabel("Emails") ax.set_ylabel("Commits") plt.ylim(0, 10000) plt.xlim(0, 10000) def init(): graph.set_data([],[]); return graph, def animate(i): a = finalSlices[i] x = a["Emails"] y = a["Commits"] graph.set_data(x, y) return graph, anim = animation.FuncAnimation(fig, animate, init_func=init, frames=NUM_SLICES, interval=1, blit=True) anim.save('t1.mp4', fps=15) def main(): data = finalSlices first = finalSlices[0] fig = plt.figure(figsize=(8, 8)) d = data x = d[0]["Emails"] y = d[0]["Commits"] c = d[0]["color"] ax = plt.axes(xscale='log', yscale='log') scat = plt.scatter(x, y, c=c, s=100) plt.ylim(0, 10000) plt.xlim(0, 10000) plt.xscale('log') plt.yscale('log') ani = animation.FuncAnimation(fig, update_plot, frames=NUM_SLICES, fargs=(data, scat), blit=True) ani.save('test.mp4', fps=10) #plt.show() def update_plot(i, d, scat): x = d[i]["Emails"] y = d[i]["Commits"] c = d[i]["color"] plt.cla() ax = plt.axes() ax.set_xscale('log') ax.set_yscale('log') scat = plt.scatter(x, y, c=c, s=100) plt.ylim(0, 10000) plt.xlim(0, 10000) plt.xlabel("Emails") plt.ylabel("Commits") return scat, main() Explanation: Animations Below this point, you'll find the code for generating animations. This can take a long time (~30 mins) for a large number of slices. However, the pre-generated videos are below. The first video just shows all the contributors over time without unique colors. The second video has a color for each contributor, but also contains a Matplotlib bug where the minimum x and y values for the axes is not followed. There is a lot to observe. As to our hypothesis, it's clear that people who email more commit more. 
In our static graph, we could see many contributors on the x-axis -- people who only email -- but this dynamic graph allows us to see the truth. While it may seem that these are people who only email, the video shows that even these contributors eventually start committing. Most committers don't really get past 10 commits without starting to email the rest of the project, for pretty clear reasons. However, the emailers can "get away with" exclusively emailing for longer, but eventually they too start to commit. In general, not only is there a positive correlation, but there is also a general trend of everyone edging toward a stable and relatively equal ratio of commits to emails. End of explanation
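Editor's aside: the closing claim about a roughly constant commit-to-email ratio can be checked directly on the final slice. A sketch, where the small DataFrame stands in for the last element of the finalSlices list built above:
import pandas as pd

last = pd.DataFrame({"Emails": [3, 40, 7, 120], "Commits": [1, 22, 4, 80]})  # stand-in for finalSlices[-1]
ratio = last["Commits"] / last["Emails"]
print(ratio.describe())   # a tight spread would support the "equal ratio" reading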
6,423
Given the following text description, write Python code to implement the functionality described below step by step Description: Minimal Example to Produce a Synthetic Light Curve Setup Let's first make sure we have the latest version of PHOEBE 2.0 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release). Step1: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details. Step2: Adding Datasets Now we'll create an empty lc dataset Step3: Running Compute Now we'll compute synthetics at the times provided using the default options Step4: Plotting Now we can simply plot the resulting synthetic light curve.
Python Code: !pip install -I "phoebe>=2.0,<2.1" %matplotlib inline Explanation: Minimal Example to Produce a Synthetic Light Curve Setup Let's first make sure we have the latest version of PHOEBE 2.0 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release). End of explanation import phoebe from phoebe import u # units import numpy as np import matplotlib.pyplot as plt logger = phoebe.logger() b = phoebe.default_binary() Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details. End of explanation b.add_dataset('lc', times=np.linspace(0,1,201), dataset='mylc') Explanation: Adding Datasets Now we'll create an empty lc dataset: End of explanation b.run_compute(irrad_method='none') Explanation: Running Compute Now we'll compute synthetics at the times provided using the default options End of explanation axs, artists = b['mylc@model'].plot() axs, artists = b['mylc@model'].plot(x='phases') Explanation: Plotting Now we can simply plot the resulting synthetic light curve. End of explanation
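Editor's aside: the plot(x='phases') call at the end folds the time axis by the orbital period; the folding itself is plain modular arithmetic. A sketch with an assumed period and reference epoch (not PHOEBE API):
import numpy as np

times = np.linspace(0, 1, 201)   # same time grid as the dataset above
period, t0 = 1.0, 0.0            # assumed orbital period and zero-point
phases = ((times - t0) / period) % 1.0
print(phases.min(), phases.max())   # phases run over [0, 1)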
6,424
Given the following text description, write Python code to implement the functionality described below step by step Description: Executed Step1: Notebook arguments measurement_id (int) Step2: Selecting a data file Step3: Data load and Burst search Load and process the data Step4: Compute background and burst search Step5: Let's take a look at the photon waiting times histograms and at the fitted background rates Step6: Using dplot exactly in the same way as for the single-spot data has now generated 8 subplots, one for each channel. Let's plot a timetrace for the background to see is there are significat variations during the measurement Step7: We can look at the timetrace of the photon stream (binning) Step8: Burst selection and FRET Step9: Selecting bursts by size Step10: 2-Gaussian peaks Step12: Fit Step13: $$f(x) = \frac{A}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x - \mu)^2}{2 \sigma^2}}$$ $$\log f(x) = \log \frac{A}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x - \mu)^2}{2 \sigma^2}} = \log{A} -\log{\sigma} - \log\sqrt{2\pi} -\frac{(x - \mu)^2}{2 \sigma^2}$$ $$w_1 \; f_1(x) + w_2 \; f_2(x) + w_3 \; f_3(x)$$ $$\log (w_1 \; f_1(x)) = \log{w_1} + \log{f_1(x)}$$ Step14: Kinetics Definitions Step15: Moving-window processing Step16: Burst-data Step17: Population fraction
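Editor's aside: the Gaussian log-identity quoted in Step13 above can be spot-checked numerically before trusting it in the fit; the numbers below are arbitrary.
import numpy as np

A, mu, sigma, xv = 2.0, 0.5, 0.1, 0.63
f = A / (sigma * np.sqrt(2 * np.pi)) * np.exp(-(xv - mu) ** 2 / (2 * sigma ** 2))
lhs = np.log(f)
rhs = np.log(A) - np.log(sigma) - np.log(np.sqrt(2 * np.pi)) - (xv - mu) ** 2 / (2 * sigma ** 2)
print(np.isclose(lhs, rhs))   # True: the expansion of log f(x) holds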
Python Code: measurement_id = 0 windows = (60, 180) # Cell inserted during automated execution. windows = (30, 180) measurement_id = 1 Explanation: Executed: Tue Mar 28 00:43:40 2017 Duration: 41 seconds. End of explanation import time from pathlib import Path import pandas as pd from scipy.stats import linregress from scipy import optimize from IPython.display import display from fretbursts import * sns = init_notebook(fs=14) import lmfit; lmfit.__version__ import phconvert; phconvert.__version__ Explanation: Notebook arguments measurement_id (int): Select the measurement from the list. Valid values: 0 .. 3 1-spot realtime kinetics <p class=lead>This notebook executes the realtime-kinetics analysis.</p> The first cell of this notebook selects which measurement is analyzed. Measurements can be processed one-by-one, by manually running this notebook, or in batch by using the notebook: "1-spot bubble-bubble kinetics - Run-All". Loading the software End of explanation path = Path('./data/') pattern = 'singlespot*.hdf5' filenames = list(str(f) for f in path.glob(pattern)) filenames basenames = list(f.stem for f in path.glob(pattern)) basenames start_times = [600, 900, 900, 600, 600, 600, 600, 600, 600, 600, 600, 600] # time of NTP injection and start of kinetics filename = filenames[measurement_id] start_time = start_times[measurement_id] filename import os assert os.path.exists(filename) Explanation: Selecting a data file End of explanation d = loader.photon_hdf5(filename) plot_alternation_hist(d) loader.alex_apply_period(d) d.time_max Explanation: Data load and Burst search Load and process the data: End of explanation d.calc_bg(bg.exp_fit, time_s=10, tail_min_us='auto', F_bg=1.7) Explanation: Compute background and burst search: End of explanation dplot(d, hist_bg); Explanation: Let's take a look at the photon waiting times histograms and at the fitted background rates: End of explanation dplot(d, timetrace_bg); xlim(start_time - 150, start_time + 150) Explanation: Using dplot exactly in the same way as for the single-spot data has now generated 8 subplots, one for each channel. 
Let's plot a timetrace for the background to see is there are significat variations during the measurement: End of explanation #dplot(d, timetrace) #xlim(2, 3); ylim(-100, 100); Explanation: We can look at the timetrace of the photon stream (binning): End of explanation #%%timeit -n1 -r1 ddc = bext.burst_search_and_gate(d) ds1 = ddc.select_bursts(select_bursts.size, th1=25) ds = ds1.select_bursts(select_bursts.naa, th1=25) Explanation: Burst selection and FRET End of explanation bpl.alex_jointplot(ds) ds0 = ds.select_bursts(select_bursts.time, time_s1=0, time_s2=start_time-10) dplot(ds0, hist_fret, pdf=False); weights = 'size' bext.bursts_fitter(ds0, weights=weights) ds0.E_fitter.fit_histogram(mfit.factory_two_gaussians(p1_center=0.5, p2_center=0.9), verbose=False) dplot(ds0, hist_fret, show_model=True, weights=weights); ds0.E_fitter.params weights = None bext.bursts_fitter(ds0, weights=weights) ds0.E_fitter.fit_histogram(mfit.factory_two_gaussians(p1_center=0.5, p2_center=0.9), verbose=False) dplot(ds0, hist_fret, show_model=True, weights=weights); ds0.E_fitter.params Explanation: Selecting bursts by size End of explanation def gauss2(**params0): peak1 = lmfit.models.GaussianModel(prefix='p1_') peak2 = lmfit.models.GaussianModel(prefix='p2_') model = peak1 + peak2 model.set_param_hint('p1_center', **{'value': 0.6, 'min': 0.3, 'max': 0.8, **params0.get('p1_center', {})}) model.set_param_hint('p2_center', **{'value': 0.9, 'min': 0.8, 'max': 1.0, **params0.get('p2_center', {})}) for sigma in ['p%d_sigma' % i for i in (1, 2)]: model.set_param_hint(sigma, **{'value': 0.02, 'min': 0.01, **params0.get(sigma, {})}) for ampl in ['p%d_amplitude' % i for i in (1, 2)]: model.set_param_hint(ampl, **{'value': 0.5, 'min': 0.01, **params0.get(ampl, {})}) model.name = '3 gauss peaks' return model #%matplotlib notebook #fig, ax = plt.subplots(figsize=(12, 8)) #dplot(dm0, scatter_fret_size, ax=ax) bext.bursts_fitter(ds0, weights=None) ds0.E_fitter.fit_histogram(gauss2(), verbose=False) mfit.plot_mfit(ds0.E_fitter) params_2gauss = ds0.E_fitter.params plt.xlabel('E') plt.ylabel('PDF') plt.title('') params_2gauss ds_final = ds.select_bursts(select_bursts.time, time_s1=start_time+300, time_s2=ds.time_max + 1) ds_final.num_bursts bext.bursts_fitter(ds_final, weights=None) model = gauss2() model.set_param_hint('p2_center', value=params_2gauss.p2_center[0], vary=False) ds_final.E_fitter.fit_histogram(model, verbose=False) fig, ax = plt.subplots(figsize=(12, 6)) mfit.plot_mfit(ds_final.E_fitter, ax=ax) params_2gauss1 = ds_final.E_fitter.params params_2gauss1 #del params_2gauss0 is_runoff = 'runoff' in filename.lower() if 'params_2gauss0' not in locals(): params_2gauss0 = params_2gauss.copy() if is_runoff: params_2gauss0.p2_center = params_2gauss1.p2_center else: params_2gauss0.p1_center = params_2gauss1.p1_center params_2gauss0.p1_amplitude + params_2gauss0.p2_amplitude 'params_2gauss0' in locals() Explanation: 2-Gaussian peaks End of explanation from scipy import optimize params_fixed = dict( mu1=float(params_2gauss0.p1_center), mu2=float(params_2gauss0.p2_center), sig1=float(params_2gauss0.p1_sigma), sig2=float(params_2gauss0.p2_sigma), ) def em_weights_2gauss(x, a2, mu1, mu2, sig1, sig2): Responsibility function for a 2-Gaussian model. Return 2 arrays of size = x.size: the responsibility of each Gaussian population. 
a1 = 1 - a2 assert np.abs(a1 + a2 - 1) < 1e-3 f1 = a1 * gauss_pdf(x, mu1, sig1) f2 = a2 * gauss_pdf(x, mu2, sig2) γ1 = f1 / (f1 + f2) γ2 = f2 / (f1 + f2) return γ1, γ2 def em_fit_2gauss(x, a2_0, params_fixed, print_every=10, max_iter=100, rtol=1e-3): a2_new = a2_0 rel_change = 1 i = 0 while rel_change > rtol and i < max_iter: # E-step γ1, γ2 = em_weights_2gauss(x, a2_new, **params_fixed) assert np.allclose(γ1.sum() + γ2.sum(), x.size) # M-step a2_old = a2_new a2_new = γ2.sum()/γ2.size # Convergence rel_change = np.abs((a2_old - a2_new)/a2_new) i += 1 if (i % print_every) == 0: print(i, a2_new, rel_change) return a2_new, i from matplotlib.pylab import normpdf as gauss_pdf # Model PDF to be maximized def model_pdf(x, a2, mu1, mu2, sig1, sig2): a1 = 1 - a2 #assert np.abs(a1 + a2 + a3 - 1) < 1e-3 return (a1 * gauss_pdf(x, mu1, sig1) + a2 * gauss_pdf(x, mu2, sig2)) def func2min_lmfit(params, x): a2 = params['a2'].value mu1 = params['mu1'].value mu2 = params['mu2'].value sig1 = params['sig1'].value sig2 = params['sig2'].value return -np.sqrt(np.log(model_pdf(x, a2, mu1, mu2, sig1, sig2))) def func2min_scipy(params_fit, params_fixed, x): a2 = params_fit mu1 = params_fixed['mu1'] mu2 = params_fixed['mu2'] sig1 = params_fixed['sig1'] sig2 = params_fixed['sig2'] return -np.log(model_pdf(x, a2, mu1, mu2, sig1, sig2)).sum() # create a set of Parameters params = lmfit.Parameters() params.add('a2', value=0.5, min=0) for k, v in params_fixed.items(): params.add(k, value=v, vary=False) Explanation: Fit End of explanation x = ds0.E_ #x #result = lmfit.minimize(func2min_lmfit, params, args=(x,), method='nelder') #lmfit.report_fit(result.params) #optimize.brute(func2min_scipy, ranges=((0.01, 0.99), (0.01, 0.99)), Ns=101, args=(params, x)) res_em = em_fit_2gauss(x, 0.5, params_fixed) res_em res = optimize.minimize(func2min_scipy, x0=[0.5], args=(params_fixed, x), method='Nelder-Mead') res res = optimize.minimize(func2min_scipy, x0=[0.5], args=(params_fixed, x), bounds=((0,1),), method='SLSQP') res res = optimize.minimize(func2min_scipy, x0=[0.5], args=(params_fixed, x), bounds=((0,1),), method='TNC') res bins = np.arange(-0.1, 1.1, 0.025) plt.hist(x, bins, histtype='step', lw=2, normed=True); xx = np.arange(-0.1, 1.1, 0.005) #plt.plot(xx, model_pdf(xx, params)) plt.plot(xx, model_pdf(xx, a2=res_em[0], **params_fixed)) Explanation: $$f(x) = \frac{A}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x - \mu)^2}{2 \sigma^2}}$$ $$\log f(x) = \log \frac{A}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x - \mu)^2}{2 \sigma^2}} = \log{A} -\log{\sigma} - \log\sqrt{2\pi} -\frac{(x - \mu)^2}{2 \sigma^2}$$ $$w_1 \; f_1(x) + w_2 \; f_2(x) + w_3 \; f_3(x)$$ $$\log (w_1 \; f_1(x)) = \log{w_1} + \log{f_1(x)}$$ End of explanation def _kinetics_fit_em(dx, a2_0, params_fixed, **kwargs): kwargs = {'max_iter': 100, 'print_every': 101, **kwargs} a2, i = em_fit_2gauss(dx.E_, a2_0, params_fixed, **kwargs) return a2, i < kwargs['max_iter'] def _kinetics_fit_ll(dx, a2_0, params_fixed, **kwargs): kwargs = {'method':'Nelder-Mead', **kwargs} res = optimize.minimize(func2min_scipy, x0=[a2_0], args=(params_fixed, dx.E_), **kwargs) return res.x[0], res.success def _kinetics_fit_hist(dx, a2_0, params_fixed): E_fitter = bext.bursts_fitter(dx) model = mfit.factory_two_gaussians() model.set_param_hint('p1_center', value=params_fixed['mu1'], vary=False) model.set_param_hint('p2_center', value=params_fixed['mu2'], vary=False) model.set_param_hint('p1_sigma', value=params_fixed['sig1'], vary=False) model.set_param_hint('p2_sigma', value=params_fixed['sig2'], vary=False) 
E_fitter.fit_histogram(model, verbose=False) return (float(E_fitter.params.p2_amplitude), dx.E_fitter.fit_res[0].success) def kinetics_fit(ds_slices, params_fixed, a2_0=0.5, method='em', **method_kws): fit_func = { 'em': _kinetics_fit_em, 'll': _kinetics_fit_ll, 'hist': _kinetics_fit_hist} fit_list = [] for dx in ds_slices: a2, success = fit_func[method](dx, a2_0, params_fixed, **method_kws) df_i = pd.DataFrame(data=dict(p2_amplitude=a2, p1_center=params_fixed['mu1'], p2_center=params_fixed['mu2'], p1_sigma=params_fixed['sig1'], p2_sigma=params_fixed['sig2'], tstart=dx.slice_tstart, tstop=dx.slice_tstop, tmean=0.5*(dx.slice_tstart + dx.slice_tstop)), index=[0.5*(dx.slice_tstart + dx.slice_tstop)]) if not success: print('* ', end='', flush=True) continue fit_list.append(df_i) print(flush=True) return pd.concat(fit_list) start_time/60 Explanation: Kinetics Definitions End of explanation def print_slices(moving_window_params): msg = ' - Slicing measurement:' for name in ('start', 'stop', 'step', 'window'): msg += ' %s = %.1fs' % (name, moving_window_params[name]) print(msg, flush=True) num_slices = len(bext.moving_window_startstop(**moving_window_params)) print(' Number of slices %d' % num_slices, flush=True) t1 = time.time() time.ctime() ds.calc_max_rate(m=10) ds_high = ds.select_bursts(select_bursts.E, E1=0.85) step = 10 params = {} for window in windows: moving_window_params = dict(start=0, stop=ds.time_max, step=step, window=window) print_slices(moving_window_params) ds_slices = bext.moving_window_chunks(ds, time_zero=start_time, **moving_window_params) for meth in ['em', 'll', 'hist']: print(' >>> Fitting method %s ' % meth, end='', flush=True) p = kinetics_fit(ds_slices, params_fixed, method=meth) print(flush=True) p['kinetics'] = p.p2_amplitude p = p.round(dict(p1_center=3, p1_sigma=4, p2_amplitude=4, p2_center=3, p2_sigma=4, kinetics=4)) params[meth, window, step] = p print('Moving-window processing duration: %d seconds.' 
% (time.time() - t1)) Explanation: Moving-window processing End of explanation #moving_window_params = dict(start=0, stop=dsc.time_max, step=1, window=30) moving_window_params ds_slices_high = bext.moving_window_chunks(ds_high, **moving_window_params) df = bext.moving_window_dataframe(**moving_window_params) - start_time df['size_mean'] = [di.nt_.mean() for di in ds_slices] df['size_max'] = [di.nt_.max() for di in ds_slices] df['num_bursts'] = [di.num_bursts[0] for di in ds_slices] df['burst_width'] = [di.mburst_.width.mean()*di.clk_p*1e3 for di in ds_slices] df['burst_width_high'] = [di.mburst_.width.mean()*di.clk_p*1e3 for di in ds_slices_high] df['phrate_mean'] = [di.max_rate_.mean() for di in ds_slices] df = df.round(dict(tmean=1, tstart=1, tstop=1, size_mean=2, size_max=1, burst_width=2, burst_width_high=2, phrate_mean=1)) df labels = ('num_bursts', 'burst_width', 'size_mean', 'phrate_mean',) fig, axes = plt.subplots(len(labels), 1, figsize=(12, 3*len(labels))) for ax, label in zip(axes, labels): ax.plot('tstart', label, data=df) ax.legend(loc='best') #ax.set_ylim(0) # %%timeit -n1 -r1 # meth = 'em' # print(' >>> Fitting method %s' % meth, flush=True) # p = kinetics_fit(ds_slices, params_fixed, method=meth) # %%timeit -n1 -r1 # meth = 'hist' # print(' >>> Fitting method %s' % meth, flush=True) # p = kinetics_fit(ds_slices, params_fixed, method=meth) # %%timeit -n1 -r1 # meth = 'll' # print(' >>> Fitting method %s' % meth, flush=True) # p = kinetics_fit(ds_slices, params_fixed, method=meth) out_fname = 'results/%s_burst_data_vs_time__window%ds_step%ds.csv' % ( Path(filename).stem, moving_window_params['window'], moving_window_params['step']) out_fname df.to_csv(out_fname) Explanation: Burst-data End of explanation # np.abs((params['em', 30, 1] - params['ll', 30, 1]).p2_amplitude).max() methods = ('em', 'll', 'hist') for meth in methods: plt.figure(figsize=(14, 3)) plt.plot(params['em', windows[0], step].index, params['em', windows[0], step].kinetics, 'h', color='gray', alpha=0.2) plt.plot(params['em', windows[1], step].index, params['em', windows[1], step].kinetics, 'h', alpha=0.3) # (params['em', 5, 1].kinetics - params['ll', 5, 1].kinetics).plot() for window in windows: for meth in methods: out_fname = ('results/' + Path(filename).stem + '_%sfit_ampl_only__window%ds_step%ds.csv' % (meth, window, step)) print('- Saving: ', out_fname) params[meth, window, step].to_csv(out_fname) d Explanation: Population fraction End of explanation
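Editor's aside: the amplitude-only EM iteration used in em_fit_2gauss above reduces to a few lines when written against synthetic data. A sketch; the fixed means and sigmas are made-up numbers, not the notebook's fitted values.
import numpy as np
from scipy.stats import norm

rng = np.random.RandomState(1)
x = np.concatenate([rng.normal(0.6, 0.05, 300), rng.normal(0.9, 0.04, 700)])
mu1, sig1, mu2, sig2 = 0.6, 0.05, 0.9, 0.04   # held fixed, like params_fixed above
a2 = 0.5
for _ in range(100):
    f1 = (1 - a2) * norm.pdf(x, mu1, sig1)
    f2 = a2 * norm.pdf(x, mu2, sig2)
    a2 = (f2 / (f1 + f2)).mean()              # M-step: mean responsibility of population 2
print(round(a2, 3))                           # ~0.7 by construction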
6,425
Given the following text description, write Python code to implement the functionality described below step by step Description: Removing particles from the simulation This tutorial shows the different ways to remove particles from a REBOUND simulation. Let us start by setting up a simple simulation with 10 bodies, and assign them unique hashes so we can keep track of them (see UniquelyIdentifyingParticlesWithHashes.ipynb). Step1: Let us add one more particle, this time with a custom name Step2: Now let us perform a short integration to isolate the particles that interest us for a longer simulation Step3: At this stage, we might be interested in particles that remained within some semimajor axis range, particles that were in resonance with a particular planet, etc. Let's imagine a simple (albeit arbitrary) case where we only want to keep particles that had $x < 0$ at the end of the preliminary integration. Let's first print out the particle hashes and x positions. Step4: Note that 4066125545 is the hash corresponding to the string "Saturn" we added above. We can use the remove() function to filter out particles. As an argument, we pass the corresponding index in the particles array. Step5: By default, the remove() function removes the i-th particle from the particles array, and shifts all particles with higher indices down by 1. This ensures that the original order in the particles array is preserved. Note that this is helpful for example if you use an integrator such as WHFast which uses Jacobi coordinates. By running through the planets in reverse order, we are guaranteed that when a particle with index i gets removed, the particle replacing it doesn't need to also be removed (we already checked it). If you have many particles and many removals (or you don't care about the ordering), you can save the reshuffling of all particles with higher indices with the flag keepSorted=0 Step6: We see that the order of the particles array has changed. Because in general particles can change positions in the particles array, a more robust way of referring to particles (rather than through their index) is through their hash, which won't change. You can pass sim.remove either the hash directly, or if you pass a string, it will be automatically converted to its corresponding hash Step7: If you try to remove a particle with an invalid index or hash, an exception is thrown, which might be caught using the standard Python syntax
Python Code: import math math.sqrt(3) math.sqrt(8) Explanation: Symbolic Computation Symbolic computation deals with symbols, representing them exactly, instead of numerical approximations (floating point). We will start with the following borrowed tutorial to introduce the concepts of SymPy. Devito uses SymPy heavily and builds upon it in its DSL. End of explanation import sympy sympy.sqrt(3) Explanation: $\sqrt(8) = 2\sqrt(2)$, but it's hard to see that here End of explanation sympy.sqrt(8) from sympy import symbols x, y = symbols('x y') expr = x + 2*y expr Explanation: SymPy can even simplify symbolic computations End of explanation expr + 1 expr - x Explanation: Note that simply adding two symbols creates an expression. Now let's play around with it. End of explanation x*expr from sympy import expand, factor expanded_expr = expand(x*expr) expanded_expr factor(expanded_expr) from sympy import diff, sin, exp diff(sin(x)*exp(x), x) from sympy import limit limit(sin(x)/x, x, 0) Explanation: Note that expr - x was not x + 2y -x End of explanation # Type solution here from sympy import solve Explanation: Exercise Solve $x^2 - 2 = 0$ using sympy.solve End of explanation from sympy import init_printing, Integral, sqrt init_printing(use_latex='mathjax') Integral(sqrt(1/x), x) from sympy import latex latex(Integral(sqrt(1/x), x)) Explanation: Pretty printing End of explanation # NBVAL_SKIP # The following piece of code is supposed to fail as it is # The exercise is to fix the code expr2 = x + 2*y +3*z Explanation: More symbols. Exercise: fix the following piece of code End of explanation # Solution here from sympy import solve Explanation: Exercise Solve $x + 2y + 3z$ for $x$ End of explanation x, y = symbols("y z") x y # NBVAL_SKIP # The following code will error until the code in cell 16 above is # fixed z Explanation: Difference between symbol name and python variable name End of explanation crazy = symbols('unrelated') crazy + 1 x = symbols("x") expr = x + 1 x = 2 Explanation: Symbol names can be more than one character long End of explanation print(expr) Explanation: What happens when I print expr now? Does it print 3? End of explanation x = symbols("x") expr = x + 1 expr.subs(x, 2) Explanation: How do we get 3? End of explanation x + 1 == 4 from sympy import Eq Eq(x + 1, 4) Explanation: Equalities End of explanation (x + 1)**2 == x**2 + 2*x + 1 from sympy import simplify a = (x + 1)**2 b = x**2 + 2*x + 1 simplify(a-b) Explanation: Suppose we want to ask whether $(x + 1)^2 = x^2 + 2x + 1$ End of explanation z = symbols("z") expr = x**3 + 4*x*y - z expr.subs([(x, 2), (y, 4), (z, 0)]) from sympy import sympify str_expr = "x**2 + 3*x - 1/2" expr = sympify(str_expr) expr expr.subs(x, 2) expr = sqrt(8) expr expr.evalf() from sympy import pi pi.evalf(100) from sympy import cos expr = cos(2*x) expr.evalf(subs={x: 2.4}) Explanation: Exercise Write a function that takes two expressions as input, and returns a tuple of two booleans. The first if they are equal symbolically, and the second if they are equal mathematically. More operations End of explanation from IPython.core.display import Image Image(filename='figures/comic.png') Explanation: Exercise End of explanation from sympy import Function f, g = symbols('f g', cls=Function) f(x) f(x).diff() diffeq = Eq(f(x).diff(x, x) - 2*f(x).diff(x) + f(x), sin(x)) diffeq from sympy import dsolve dsolve(diffeq, f(x)) Explanation: Write a function that takes a symbolic expression (like pi), and determines the first place where 789 appears. 
Tip: Use the string representation of the number. Python starts counting at 0, but the decimal point offsets this Solving an ODE End of explanation f = Function('f') dfdx = f(x).diff(x) dfdx.as_finite_difference() from sympy import Symbol d2fdx2 = f(x).diff(x, 2) h = Symbol('h') d2fdx2.as_finite_difference(h) Explanation: Finite Differences End of explanation
6,426
Given the following text description, write Python code to implement the functionality described below step by step Description: Removing particles from the simulation This tutorial shows the differnet ways to remove particles from a REBOUND simulation. Let us start by setting up a simple simulation with 10 bodies, and assign them unique hashes so we can keep track of them (see UniquelyIdentifyingParticlesWithHashes.ipynb). Step1: Let us add one more particle, this time with a custom name Step2: Now let us run perform a short integration to isolate the particles that interest us for a longer simulation Step3: At this stage, we might be interested in particles that remained within some semimajor axis range, particles that were in resonance with a particular planet, etc. Let's imagine a simple (albeit arbitrary) case where we only want to keep particles that had $x < 0$ at the end of the preliminary integration. Let's first print out the particle hashes and x positions. Step4: Note that 4066125545 is the hash corresponding to the string "Saturn" we added above. We can use the remove() function to filter out particles. As an argument, we pass the corresponding index in the particles array. Step5: By default, the remove() function removes the i-th particle from the particles array, and shifts all particles with higher indices down by 1. This ensures that the original order in the particles array is preserved. Note that this is helpful for example if you use an integrator such as WHFast which uses Jacobi coordinates. By running through the planets in reverse order, we are guaranteed that when a particle with index i gets removed, the particle replacing it doesn't need to also be removed (we already checked it). If you have many particles and many removals (or you don't care about the ordering), you can save the reshuffling of all particles with higher indices with the flag keepSorted=0 Step6: We see that the order of the particles array has changed. Because in general particles can change positions in the particles array, a more robust way of referring to particles (rather than through their index) is through their hash, which won't change. You can pass sim.remove either the hash directly, or if you pass a string, it will be automatically converted to its corresponding hash Step7: If you try to remove a particle with an invalid index or hash, an exception is thrown, which might be catch using the standard python syntax
Python Code: import rebound import numpy as np sim = rebound.Simulation() sim.add(m=1., hash=0) for i in range(1,10): sim.add(a=i, hash=i) sim.move_to_com() print("Particle hashes:{0}".format([sim.particles[i].hash for i in range(sim.N)])) Explanation: Removing particles from the simulation This tutorial shows the differnet ways to remove particles from a REBOUND simulation. Let us start by setting up a simple simulation with 10 bodies, and assign them unique hashes so we can keep track of them (see UniquelyIdentifyingParticlesWithHashes.ipynb). End of explanation sim.add(a=10, hash="Saturn") print("Particle hashes:{0}".format([sim.particles[i].hash for i in range(sim.N)])) Explanation: Let us add one more particle, this time with a custom name: End of explanation Noutputs = 1000 xs = np.zeros((sim.N, Noutputs)) ys = np.zeros((sim.N, Noutputs)) times = np.linspace(0.,50*2.*np.pi, Noutputs, endpoint=False) for i, time in enumerate(times): sim.integrate(time) xs[:,i] = [sim.particles[j].x for j in range(sim.N)] ys[:,i] = [sim.particles[j].y for j in range(sim.N)] %matplotlib inline import matplotlib.pyplot as plt fig,ax = plt.subplots(figsize=(15,5)) for i in range(sim.N): plt.plot(xs[i,:], ys[i,:]) ax.set_aspect('equal') Explanation: Now let us run perform a short integration to isolate the particles that interest us for a longer simulation: End of explanation print("Hash\t\tx") for i in range(sim.N): print("{0}\t{1}".format(sim.particles[i].hash, xs[i,-1])) Explanation: At this stage, we might be interested in particles that remained within some semimajor axis range, particles that were in resonance with a particular planet, etc. Let's imagine a simple (albeit arbitrary) case where we only want to keep particles that had $x < 0$ at the end of the preliminary integration. Let's first print out the particle hashes and x positions. End of explanation for i in reversed(range(1,sim.N)): if xs[i,-1] > 0: sim.remove(i) print("Number of particles after cut = {0}".format(sim.N)) print("Hashes of remaining particles = {0}".format([p.hash for p in sim.particles])) Explanation: Note that 4066125545 is the hash corresponding to the string "Saturn" we added above. We can use the remove() function to filter out particles. As an argument, we pass the corresponding index in the particles array. End of explanation sim.remove(2, keepSorted=0) print("Number of particles after cut = {0}".format(sim.N)) print("Hashes of remaining particles = {0}".format([p.hash for p in sim.particles])) Explanation: By default, the remove() function removes the i-th particle from the particles array, and shifts all particles with higher indices down by 1. This ensures that the original order in the particles array is preserved. Note that this is helpful for example if you use an integrator such as WHFast which uses Jacobi coordinates. By running through the planets in reverse order, we are guaranteed that when a particle with index i gets removed, the particle replacing it doesn't need to also be removed (we already checked it). If you have many particles and many removals (or you don't care about the ordering), you can save the reshuffling of all particles with higher indices with the flag keepSorted=0: End of explanation sim.remove(hash="Saturn") print("Number of particles after cut = {0}".format(sim.N)) print("Hashes of remaining particles = {0}".format([p.hash for p in sim.particles])) Explanation: We see that the order of the particles array has changed. 
Because in general particles can change positions in the particles array, a more robust way of referring to particles (rather than through their index) is through their hash, which won't change. You can pass sim.remove either the hash directly, or if you pass a string, it will be automatically converted to its corresponding hash: End of explanation try: sim.remove(hash="Planet 9") except RuntimeError as e: print("A runtime error occured: {0}".format(e)) Explanation: If you try to remove a particle with an invalid index or hash, an exception is thrown, which might be catch using the standard python syntax: End of explanation
6,427
Given the following text description, write Python code to implement the functionality described below step by step Description: IST256 Lesson 03 Conditionals Zybook Ch3 P4E Ch3 Links Participation Step1: Python’s Relational Operators <table style="font-size Step2: A. 4 B. 5 C. 6 D. 7 Vote Now Step3: A. 4 B. 5 C. 6 D. 7 Vote Now Step4: elif versus a series of if statements Step5: Check Yourself Step6: A. One B. Two C. Three D. Four Vote Now Step7: Watch Me Code 2 The need for an exception handling
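Editor's aside: before the full set of compartment and population models below, a minimal logistic-growth run may help fix the solve_ivp calling pattern used throughout; r, K and the initial condition are arbitrary.
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

r, K = 1.0, 1000.0
f = lambda t, y: r * (1 - y / K) * y     # logistic right-hand side
t = np.linspace(0, 10, 100)
sol = solve_ivp(f, [0, 10], [50.0], t_eval=t)
plt.plot(sol.t, sol.y[0])
plt.show()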
Python Code: if boolean-expression: statements-when-true else: statemrnts-when-false Explanation: IST256 Lesson 03 Conditionals Zybook Ch3 P4E Ch3 Links Participation: https://poll.ist256.com Zoom Chat!!! Agenda Homework 02 Solution Non-Linear Code Execution Relational and Logical Operators Different types of non-linear execution. Run-Time error handling Connect Activity A Boolean value is a/an ______? A. True or False value B. Zero-based value C. Non-Negative value D. Alphanumeric value Vote Now: https://poll.ist256.com What is a Boolean Expression? A Boolean expression evaluates to a Boolean value of <font color='red'> True </font> or <font color='green'> False </font>. Boolean expressions ask questions. GPA >3.2 <span>&#8594;</span> Is GPA greater than 3.2? The result of which is <font color='red'> True </font> or <font color='green'> False </font> based on the evaluation of the expression: GPA = 4.0 <span>&#8594;</span> GPA > 3.2 <span>&#8594;</span> <font color='red'> True </font> GPA = 2.0 <span>&#8594;</span> GPA > 3.2 <span>&#8594;</span> <font color='green'> False </font> Program Flow Control with IF The IF statement is used to branch your code based on a Boolean expression. End of explanation x = 15 y = 20 z = 2 x > y z*x <= y y >= x-z z*10 == x Explanation: Python’s Relational Operators <table style="font-size:1.2em;"> <thead><tr> <th>Operator</th> <th>What it does</th> <th>Examples</th> </tr></thead> <tbody> <tr> <td><code> > </code></td> <td> Greater than </td> <td> 4>2 (True)</td> </tr> <tr> <td><code> < </code></td> <td> Less than </td> <td> 4<2 (False)</td> </tr> <tr> <td><code> == </code></td> <td> Equal To </td> <td> 4==2 (False)</td> </tr> <tr> <td><code> != </code></td> <td> Not Equal To </td> <td> 4!=2 (True)</td> </tr> <tr> <td><code> >= </code></td> <td> Greater Than or Equal To </td> <td> 4>=2 (True)</td> <tr> <td><code> <= </code></td> <td> Less Than or Equal To </td> <td> 4<=2 (True)</td> </tr> </tbody> </table> Expressions consisting of relational operators evaluate to a Boolean value Watch Me Code 1! ``` Do you need more milk? When the Fudge family has less than 1 gallon of milk, we need more! ``` Check Yourself: Relational Operators On Which line number is the Boolean expression True? End of explanation raining = False snowing = True age = 45 age < 18 and raining age >= 18 and not snowing not snowing or not raining age == 45 and not snowing Explanation: A. 4 B. 5 C. 6 D. 7 Vote Now: https://poll.ist256.com Python’s Logical Operators <table style="font-size:1.2em;"> <thead><tr> <th>Operator</th> <th>What it does</th> <th>Examples</th> </tr></thead> <tbody> <tr> <td><code> and </code></td> <td> True only when both are True </td> <td> 4>2 and 4<5 (True)</td> </tr> <tr> <td><code> or </code></td> <td> False only when both are False </td> <td> 4<2 or 4==4 (True)</td> </tr> <tr> <td><code> not </code></td> <td> Negation(Opposite) </td> <td> not 4==2 (True)</td> </tr> <tr> <td><code> in </code></td> <td> Set operator </td> <td> 4 in [2,4,7] (True)</td> </tr> </tbody> </table> Check Yourself: Logical Operators On Which line number is the Boolean expression True? End of explanation if boolean-expression1: statements-when-exp1-true elif boolean-expression2: statements-when-exp2-true elif boolean-expression3: statements-when-exp3-true else: statements-none-are-true Explanation: A. 4 B. 5 C. 6 D. 7 Vote Now: https://poll.ist256.com Multiple Decisions: IF ladder Use elif to make more than one decision in your if statement. Only one code block within the ladder is executed. 
End of explanation x = int(input("enter an integer")) # one single statement. only one block executes if x>10: print("A:bigger than 10") elif x>20: print("A:bigger than 20") # Independent if's, each True Boolean executes a block if x>10: print("B:bigger than 10") if x>20: print("B:bigger than 20") Explanation: elif versus a series of if statements End of explanation if x > 20: if y == 4: print("One") elif y > 4: print("Two") else: print("Three") else: print("Four") Explanation: Check Yourself: IF Statement Assuming values x = 77 and y = 2 what value is printed? End of explanation try: statements-which might-throw-an-error except errorType1: code-when-Type1-happens except errorType2: code-when-Type2-happens finally: code-happens-regardless Explanation: A. One B. Two C. Three D. Four Vote Now: https://poll.ist256.com End-To-End Example, Part 1: Tax Calculations! The country of “Fudgebonia” determines your tax rate from the number of dependents: 0 <span>&#8594;</span> 30% 1 <span>&#8594;</span> 25% 2 <span>&#8594;</span> 18% 3 or more <span>&#8594;</span> 10% Write a program to prompt for number of dependents (0-3) and annual income. It should then calculate your tax rate and tax bill. Format numbers properly! Handle Bad Input with Exceptions Exceptions represent a class of errors which occur at run-time. We’ve seen these before when run a program and it crashes due to bad input. And we get a TypeError or ValueError. Python provides a mechanism try .. except to catch these errors at run-time and prevent your program from crashing. Exceptions are <i>exceptional</i>. They should ONLY be used to handle unforeseen errors in program input. Try…Except…Finally The Try... Except statement is used to handle exceptions. Remember that exceptions catch run-time errors! End of explanation try: x = float(input("Enter a number: ")) if x > 0: y = "a" else: y = "b" except ValueError: y = "c" print(y) Explanation: Watch Me Code 2 The need for an exception handling: - Bad input - try except finally - Good practice of catching the specific error Check Yourself: Conditionals Try/Except What prints on line 9 when you input the value '-45s'? End of explanation
6,428
Given the following text description, write Python code to implement the functionality described below step by step Description: <center> <img src="http Step1: 2.1 Simple compartment models Example 1 Step2: 2.1.1.- Simple population models Adding competition for resources Step3: Logistic growth with harvesting Step4: Models with delay Example 2 Differential equations with delay are not currently supported by Scipy, fortunately there are the following options Step5: 2.2.- Interacting population models 2.2.1.- Predator-prey models Example 3 Step6: 2.2.2.- SIR models in epidemiology Step7: 2.2.3.- Endemic diseases Step8: 2.2 Subcompartment models Example 5
Python Code: import numpy as np import scipy.sparse.linalg as sp import sympy as sym from scipy.linalg import toeplitz import ipywidgets as widgets from ipywidgets import IntSlider import matplotlib.pyplot as plt %matplotlib inline from matplotlib import cm from matplotlib.ticker import LinearLocator, FormatStrFormatter plt.style.use('ggplot') import matplotlib as mpl mpl.rcParams['font.size'] = 14 mpl.rcParams['axes.labelsize'] = 20 mpl.rcParams['xtick.labelsize'] = 14 mpl.rcParams['ytick.labelsize'] = 14 sym.init_printing() from scipy.integrate import solve_ivp from ipywidgets import interact Explanation: <center> <img src="http://sct.inf.utfsm.cl/wp-content/uploads/2020/04/logo_di.png" style="width:60%"> <h1> INF-495 - Modelado Computacional Aplicado </h1> <h2> Prof. Claudio Torres, Ph.D. </h2> <h2> Version: 1.02</h2> </center> Textbook: Computational Mathematical Modeling, An Integrated Approach Across Scales by Daniela Calvetti and Erkki Somersalo. Chapter 2 End of explanation # Example 1, numerically. def f_example_1(t,y,c,k1,k2): X = y[0] Y = y[1] Xp = c(t)-k1*X Yp = k1*X-k2*Y return np.array([Xp,Yp]) # Initial condition y0 = np.array([500, 600]) a = 1200 t1 = 12*60 t2 = 19*60 tau = 30 c = lambda t: a*(np.exp(-((np.mod(t,24*60)-t1)**2)/(2*tau**2)) \ +np.exp(-((np.mod(t,24*60)-t2)**2)/(2*tau**2))) \ /(np.sqrt(2*np.pi*tau**2)) T_digest = 80 T_metabolic = 240 k1 = 1/T_digest k2 = 1/T_metabolic T = 2*24*60 # time where we want your solution t = np.linspace(0, T, 100) sol = solve_ivp(f_example_1, [0,T], y0, t_eval=t, args=(c,k1,k2)) plt.figure() plt.plot(t, sol.y[0], 'b', label='X(t): Energy in gastric channel') plt.plot(t, sol.y[1], 'r', label='Y(t): Energy in blood') plt.legend(loc='right', bbox_to_anchor=(2, 0.5)) plt.xlabel('t') plt.ylabel('kcal') plt.grid(True) plt.show() print(t1,t2) plt.figure() T = 5*24*60 t = np.linspace(0, T, 1000) plt.plot(t,c(t),'b') plt.grid(True) plt.plot() Explanation: 2.1 Simple compartment models Example 1: End of explanation # Example 1, numerically. 
def f_spm_logistic_growth(t,y,r,K): X = y Xp = r*(1-X/K)*X return Xp # Initial condition r = 1 K = 1000 T = 10 # time where we want your solution t = np.linspace(0, T, 100) plt.figure() for y0 in np.linspace(0,2000,20): sol = solve_ivp(f_spm_logistic_growth, [0,T], (y0,), t_eval=t, args=(r,K)) plt.plot(t, sol.y[0], 'b') plt.xlabel('t') plt.ylabel('population size') plt.grid(True) plt.show() Explanation: 2.1.1.- Simple population models Adding competition for resources: Logistic growth End of explanation def f_logistic_growth_harvesting(t,y,r,K,h): X = y Xp = r*(1-X/K)*X-h return Xp # Initial condition r = 1 K = 1000 h = 100 print(K*r/4,h) T = 8 Xplus = K/2*(1+np.sqrt(1-4*h/(K*r))) Xminus = K/2*(1-np.sqrt(1-4*h/(K*r))) print(Xplus,Xminus) # time where we want your solution t = np.linspace(0, T, 100) plt.figure() N=20 y0s = np.zeros(N+1) y0s[0] = Xminus-0.5 y0s[1:] = np.linspace(Xminus,1200,N) for y0 in y0s: sol = solve_ivp(f_logistic_growth_harvesting, [0,T], (y0,), t_eval=t, args=(r,K,h)) plt.plot(t, sol.y[0], 'b') plt.plot(t, sol.y[0]*0+Xplus, 'r') plt.plot(t, sol.y[0]*0+Xminus, 'r') plt.plot(t, sol.y[0]*0, 'r--') plt.xlabel('t') plt.ylabel('population size') plt.grid(True) plt.show() plt.figure() f = lambda X: -(r/K)*X**2+r*X-h Xs = np.linspace(0,Xplus+200,100) plt.plot(Xs, f(Xs), 'b') plt.plot(Xs, f(Xs)*0, 'r') plt.axvline(x=Xminus,color='k',label='$X_-$') plt.axvline(x=Xplus,color='g',label='$X_+$') plt.xlabel('X') plt.legend(loc='right', bbox_to_anchor=(1.3, 0.5)) plt.grid(True) plt.show() Explanation: Logistic growth with harvesting End of explanation # See: https://github.com/Zulko/ddeint from ddeint import ddeint r = 2 # Growth rate K = 100 # Carrying capacity tau = 1 # Lag of crowding X_hist = 50 # Constant history T = 30 # Time simulation def values_before_zero(t): return X_hist def f_delay1(Y, t): return r*(1-Y(t - tau)/K)*Y(t) t = np.linspace(0, T, 1000) sol = ddeint(f_delay1, values_before_zero, t) fig, ax = plt.subplots(1, figsize=(4, 4)) ax.plot(t, sol) plt.show() r = 1.4 # Growth rate K = 150 # Carrying capacity tau1 = 1 # Lag of birth tau2 = 5 # Lag of crowding X_hist = 100 # Constant history T = 300 # Time simulation beta = 0.5 # Rate of death def values_before_zero(t): return X_hist def f_delay2(Y, t): return r*Y(t-tau1)-beta*Y(t)-(r/K)*Y(t - tau2)*Y(t) t = np.linspace(0, T, 2000) sol = ddeint(f_delay2, values_before_zero, t) fig, ax = plt.subplots(1, figsize=(4, 4)) ax.plot(t, sol) plt.show() Explanation: Models with delay Example 2 Differential equations with delay are not currently supported by Scipy, fortunately there are the following options: For Python: pydelay, http://pydelay.sourceforge.net For R: "PBSddesolve: Solver for delay differential equations", https://github.com/pbs-software/pbs-ddesolve For Python using PBSddesolve: https://github.com/hensing/PyDDE (*) For Python: https://github.com/Zulko/ddeint End of explanation # Example 1, numerically. 
def f_example_3(t,y,eta,gamma,eps): X = y[0] Y = y[1] Xp = eta*X*(1-Y/(eta/gamma)) Yp = -eps*Y*(1-X/(eps/delta)) return np.array([Xp,Yp]) # Initial condition eta = 8-1/2 eps = 1/5 gamma = 1/100 delta = eps*gamma/(20*eta) X_eq = eps/delta Y_eq = eta/gamma y0 = np.array([0.9*X_eq, 1.2*Y_eq]) T = 15 # time where we want your solution t = np.linspace(0, T, 100) sol = solve_ivp(f_example_3,[0,T], y0, t_eval=t, args=(eta,gamma,eps)) X_output = sol.y[0] Y_output = sol.y[1] fig, ax1 = plt.subplots() color = 'tab:blue' ax1.set_xlabel('time [years]') ax1.set_ylabel('Prey population', color=color) ax1.plot(t, X_output, color=color) ax1.tick_params(axis='y', labelcolor=color) ax2 = ax1.twinx() # instantiate a second axes that shares the same x-axis color = 'tab:green' ax2.set_ylabel('Predator population', color=color) # we already handled the x-label with ax1 ax2.plot(t, Y_output, color=color) ax2.tick_params(axis='y', labelcolor=color) fig.tight_layout() # otherwise the right y-label is slightly clipped plt.show() plt.figure() plt.plot(t,X_output,'b') plt.plot(t,Y_output,'r') plt.grid(True) plt.show() plt.figure() plt.plot(X_output,Y_output,'k') plt.xlabel('prey') plt.ylabel('predator') plt.plot(X_eq,Y_eq,'b.',markersize=20) plt.plot(y0[0],y0[1],'r.',markersize=20) plt.text(X_eq,Y_eq,' equilibrium state') plt.text(y0[0],y0[1],' initial state') plt.show() fig = plt.figure() ax = fig.gca(projection='3d') ax.plot(X_output, Y_output, t,'b') ax.view_init(elev=40,azim=230) plt.xlabel('prey') plt.ylabel('predator') ax.set_zlabel('t') plt.show() Explanation: 2.2.- Interacting population models 2.2.1.- Predator-prey models Example 3 End of explanation # Example 1, numerically. def f_SIR(t,y,alpha,beta): S = y[0] I = y[1] R = y[2] Sp = -alpha*I*S Ip = alpha*I*S-beta*I Rp = beta*I return np.array([Sp,Ip,Rp]) # Initial condition N = 1e6 alpha = 1e-6 beta = 1/3 S0 = 9e5 I0 = 1e5 y0 = np.array([S0, I0, N-S0-I0]) T = 20 # time where we want your solution t = np.linspace(0, T, 100) sol = solve_ivp(f_SIR, [0,T], y0, t_eval=t, args=(alpha,beta)) plt.figure() plt.plot(t, sol.y[0], 'b', label='S(t)') plt.plot(t, sol.y[1], 'r', label='I(t)') plt.plot(t, sol.y[2], 'g', label='R(t)') plt.legend(loc='right', bbox_to_anchor=(1.3, 0.5)) plt.xlabel('t') plt.ylabel('number of individuals') plt.grid(True) plt.show() Explanation: 2.2.2.- SIR models in epidemiology End of explanation # Example 1, numerically. 
def f_SIR_endemic(t,y,alpha,beta,sigma,N): S = y[0] I = y[1] R = y[2] Sp = -alpha*I*S+sigma*N-sigma*S Ip = alpha*I*S-beta*I-sigma*I Rp = beta*I-sigma*R return np.array([Sp,Ip,Rp]) # Initial condition N = 1e6 alpha = 1e-6 beta = 1/3 sigma = 1/50 S0 = 9e5 I0 = 1e5 y0 = np.array([S0, I0, N-S0-I0]) T = 150 # time where we want your solution t = np.linspace(0, T, 100) sol = solve_ivp(f_SIR_endemic, [0,T], y0, t_eval=t, args=(alpha,beta,sigma,N)) plt.figure() plt.plot(t, sol.y[0], 'b', label='S(t)') plt.plot(t, sol.y[1], 'r', label='I(t)') plt.plot(t, sol.y[2], 'g', label='R(t)') plt.legend(loc='right', bbox_to_anchor=(2, 0.5)) plt.xlabel('t') plt.ylabel('number of individuals') plt.grid(True) plt.show() Explanation: 2.2.3.- Endemic diseases End of explanation # Volumnes Vol_ch = 50./1000 Vol_t = 0.3/1000 # Initial conditions in chamber Lac_ch_0 = 0.3 Glc_ch_0 = 1.6 O2_ch_0 = 0.6 CO2_ch_0 = 0.1 x0 = np.array([Lac_ch_0,Glc_ch_0,O2_ch_0,CO2_ch_0]) # Initial conditions in tissue Lac_t_0 = 0.8 Glc_t_0 = 0.5 O2_t_0 = 0.14 CO2_t_0 = 0.2 ATP_t_0 = 6.2 ADP_t_0 = 0.8 y0 = np.array([Lac_t_0,Glc_t_0,O2_t_0,CO2_t_0,ATP_t_0,ADP_t_0]) # Maximum rates T1 = 5e-5 T2 = 3e-5 V1 = 1e-4 V2 = 1e-4 # Affinities M1 = 0.5 M2 = 0.5 K1 = 0.5 K2 = 0.22 mu1 = 0.1 mu2 = 0.1 #Fick's constants lambda3 = 3e-3 lambda4 = 8e-3 # Constants defining the activity t0 = 3. # min tau = 10./60 # min # Activity levels a0 = 1e-5 delta_a = 1e-2 def f_example_5(t,z): x = z[:4] y = z[4:] Phi = np.zeros(4) Phi[0] = T1*((x[0]/(x[0]+M1))-(y[0]/(y[0]+M1))) Phi[1] = T2*((x[1]/(x[1]+M1))-(y[1]/(y[1]+M1))) Phi[2] = lambda3*(x[2]-y[2]) Phi[3] = lambda4*(x[3]-y[3]) p = y[5]/y[4] Psi1 = V1*(p/(p+mu1))*(y[1]/(y[1]+K1)) rho1 = np.array([2.,-1,0,0,2,-2])*Psi1 Psi2 = V2*(p/(p+mu2))*(y[1]*y[2]/(y[1]*y[2]+K2**2)) rho2 = np.array([0.,-1,-6,6,32,-32])*Psi2 a = a0+delta_a*(np.heaviside(t-(t0), 1)-np.heaviside(t-(t0+tau), 1)) rho3 = np.array([0.,0,0,0,-1,1])*a dxdt = -Phi/Vol_ch dydt = np.zeros(6) dydt[:4] = Phi dydt += rho1+rho2+rho3 dydt /=Vol_t return np.concatenate((dxdt,dydt), axis=0) # Initial condition z0 = np.concatenate((x0,y0), axis=0) T = 10 # time where we want your solution t = np.linspace(0, T, 100) sol = solve_ivp(f_example_5, [0,T], z0, t_eval=t) plt.figure(figsize=(10,10)) plt.subplot(221) plt.plot(t, sol.y[4], 'b-', label='Lac_t(t)') plt.plot(t, sol.y[5], 'b--', label='Glc_t(t)') plt.legend(loc='best') plt.axvline(x=t0,color='k',alpha=0.5) plt.axvline(x=t0+tau,color='k',alpha=0.5) plt.xlabel('t [min]') plt.ylabel('concentration [mmol/l]') plt.ylim([0.2,1]) plt.grid(True) plt.subplot(222) plt.plot(t, sol.y[6], 'b-', label='O2_t(t)') plt.plot(t, sol.y[7], 'b--', label='CO2_t(t)') plt.legend(loc='best') plt.axvline(x=t0,color='k',alpha=0.5) plt.axvline(x=t0+tau,color='k',alpha=0.5) plt.xlabel('t [min]') #plt.ylabel('concentration [mmol/l]') plt.ylim([0,0.8]) plt.grid(True) plt.subplot(223) plt.plot(t, sol.y[8], 'b-', label='ATP_t(t)') plt.plot(t, sol.y[9], 'b--', label='ADP_t(t)') plt.legend(loc='best') plt.axvline(x=t0,color='k',alpha=0.5) plt.axvline(x=t0+tau,color='k',alpha=0.5) plt.xlabel('t [min]') plt.ylabel('concentration [mmol/l]') plt.ylim([0,8]) plt.grid(True) plt.subplot(224) plt.plot(t, sol.y[2], 'b-', label='O2_Ch(t)') plt.legend(loc='best') plt.axvline(x=t0,color='k',alpha=0.5) plt.axvline(x=t0+tau,color='k',alpha=0.5) plt.xlabel('t [min]') #plt.ylabel('concentration [mmol/l]') plt.ylim([0.58,0.61]) plt.grid(True) plt.tight_layout() plt.show() Explanation: 2.2 Subcompartment models Example 5 End of explanation
6,429
Given the following text description, write Python code to implement the functionality described below step by step Description: Accessing Databases via Web APIs In this lesson we'll learn what an API (Application Programming Interface) is, how it's normally used, and how we can collect data from it. We'll then look at how Python can help us quickly gather data from APIs, parse the data, and write to a CSV. There are four sections Step1: All of these are standard Python libraries, so no matter your distribution, these should be installed. 1. Constructing an API GET request We're going to use the New York Times API. You'll need to first sign up for an API key. We know that every call to any API will require us to provide Step2: Notice we assign each variable as a string. While the requests library will convert integers, it's better to be consistent and use strings for all parameters of a GET request. We choose JSON as the response format, as it is easy to parse quickly with Python, though XML is often an viable frequently offered alternative. JSON stands for "Javascript object notation." It has a very similar structure to a python dictionary -- both are built on key/value pairs. You often want to send some sort of data in the URL’s query string. This data tells the API what information you want. In our case, we're going to look for articles about Duke Ellington. Requests allows you to provide these arguments as a dictionary, using the params keyword argument. In addition to the search term q, we have to put in the api-key term. We know these key names from the NYT API documentation. Step3: Now we're ready to make the request. We use the .get method from the requests library to make an HTTP GET Request. Step4: Now, we have a response object called response. We can get all the information we need from this object. For instance, we can see that the URL has been correctly encoded by printing the URL. Click on the link to see what happens. Step5: Click on that link to see it returns! Notice that all Python is doing here for us is helping us construct a complicated URL built with &amp; and = signs. You just noticed we could just as well copy and paste this URL to a browser and then save the response, but Python's requests library is much easier and scalable when making multiple queries in succession. Challenge 1 Step6: Challenge 2 Step7: 2. Parsing the JSON response We can read the content of the server’s response using .text Step8: What you see here is JSON text, encoded as unicode text. As mentioned, JSON is bascially a Python dictionary, and we can convert this string text to a Python dictionary by using the loads to load from a string. Step9: That looks intimidating! But it's really just a big dictionary. The most time-consuming part of using APIs is traversing the various key-value trees to see where the information you want resides. Let's see what keys we got in there. Step10: Looks like there were 93 hits total for our query. Let's take a look Step11: It starts with a square bracket, so it looks like a list, and from a glance it looks like the list of articles we're interested in. Step12: Let's just save this list to a new variable. Often when using web APIs, you'll spend the majority of your time restructuring the response data to how you want it. Step13: Wow! That's a lot of information about just one article! But wait... Step14: 3. Looping through result pages We're making progress, but we only have 10 items. The original response said we had 93 hits! 
Which means we have to make 93 / 10, rounded up to 10 requests, to get them all. Sounds like a job for a loop! Step15: 4. Exporting to CSV Great, now we have all the articles. Let's just take out some bits of information and write to a CSV. Step16: We can write our sifted information to a CSV now
Python Code: import requests # to make the GET request import json # to parse the JSON response to a Python dictionary import time # to pause after each API call import csv # to write our data to a CSV import pandas # to see our CSV Explanation: Accessing Databases via Web APIs In this lesson we'll learn what an API (Application Programming Interface) is, how it's normally used, and how we can collect data from it. We'll then look at how Python can help us quickly gather data from APIs, parse the data, and write to a CSV. There are four sections: Constructing an API GET request Parsing the JSON response Looping through result pages Exporting to CSV First we'll import the required Python libraries End of explanation # set API key var key = "" # set base url var base_url = "http://api.nytimes.com/svc/search/v2/articlesearch" # set response format var response_format = ".json" Explanation: All of these are standard Python libraries, so no matter your distribution, these should be installed. 1. Constructing an API GET request We're going to use the New York Times API. You'll need to first sign up for an API key. We know that every call to any API will require us to provide: a base URL for the API, (usually) some authorization code or key, and a format for the response. Let's write this information to some variables: End of explanation # set search parameters search_params = {"q": "Duke Ellington", "api-key": key} Explanation: Notice we assign each variable as a string. While the requests library will convert integers, it's better to be consistent and use strings for all parameters of a GET request. We choose JSON as the response format, as it is easy to parse quickly with Python, though XML is often an viable frequently offered alternative. JSON stands for "Javascript object notation." It has a very similar structure to a python dictionary -- both are built on key/value pairs. You often want to send some sort of data in the URL’s query string. This data tells the API what information you want. In our case, we're going to look for articles about Duke Ellington. Requests allows you to provide these arguments as a dictionary, using the params keyword argument. In addition to the search term q, we have to put in the api-key term. We know these key names from the NYT API documentation. End of explanation # make request response = requests.get(base_url + response_format, params=search_params) Explanation: Now we're ready to make the request. We use the .get method from the requests library to make an HTTP GET Request. End of explanation print(response.url) Explanation: Now, we have a response object called response. We can get all the information we need from this object. For instance, we can see that the URL has been correctly encoded by printing the URL. Click on the link to see what happens. End of explanation # set date parameters here search_params = {"q": "Duke Ellington", "api-key": key, "begin_date": "20150101", # date must be in YYYYMMDD format "end_date": "20151231"} # uncomment to test r = requests.get(base_url + response_format, params=search_params) print(r.url) Explanation: Click on that link to see it returns! Notice that all Python is doing here for us is helping us construct a complicated URL built with &amp; and = signs. You just noticed we could just as well copy and paste this URL to a browser and then save the response, but Python's requests library is much easier and scalable when making multiple queries in succession. 
Challenge 1: Adding a date range What if we only want to search within a particular date range? The NYT Article API allows us to specify start and end dates. Alter the search_params code above so that the request only searches for articles in the year 2015. You're going to need to look at the documentation to see how to do this. End of explanation # set page parameters here search_params["page"] = 0 # uncomment to test r = requests.get(base_url + response_format, params=search_params) print(r.url) Explanation: Challenge 2: Specifying a results page The above will return the first 10 results. To get the next ten, you need to add a "page" parameter. Change the search parameters above to get the second 10 results. End of explanation # inspect the content of the response, parsing the result as text response_text = r.text print(response_text[:1000]) Explanation: 2. Parsing the JSON response We can read the content of the server’s response using .text End of explanation # convert JSON response to a dictionary data = json.loads(response_text) print(data) Explanation: What you see here is JSON text, encoded as unicode text. As mentioned, JSON is bascially a Python dictionary, and we can convert this string text to a Python dictionary by using the loads to load from a string. End of explanation print(data.keys()) # this is boring print(data['status']) # so is this print(data['copyright']) # this is what we want! print(data['response']) print(data['response'].keys()) print(data['response']['meta'].keys()) print(data['response']['meta']['hits']) Explanation: That looks intimidating! But it's really just a big dictionary. The most time-consuming part of using APIs is traversing the various key-value trees to see where the information you want resides. Let's see what keys we got in there. End of explanation print(data['response']['docs']) Explanation: Looks like there were 93 hits total for our query. Let's take a look: End of explanation print(type(data['response']['docs'])) Explanation: It starts with a square bracket, so it looks like a list, and from a glance it looks like the list of articles we're interested in. End of explanation docs = data['response']['docs'] print(docs[0]) Explanation: Let's just save this list to a new variable. Often when using web APIs, you'll spend the majority of your time restructuring the response data to how you want it. End of explanation print(len(docs)) Explanation: Wow! That's a lot of information about just one article! But wait... End of explanation # get number of hits total (in any page we request) hits = data['response']['meta']['hits'] print("number of hits: ", str(hits)) # get number of pages pages = hits // 10 + 1 # make an empty list where we'll hold all of our docs for every page all_docs = [] # now we're ready to loop through the pages for i in range(pages): print("collecting page", str(i)) # set the page parameter search_params['page'] = i # make request r = requests.get(base_url + response_format, params=search_params) # get text and convert to a dictionary data = json.loads(r.text) # get just the docs docs = data['response']['docs'] # add those docs to the big list all_docs = all_docs + docs # IMPORTANT pause between calls time.sleep(5) print(len(all_docs)) Explanation: 3. Looping through result pages We're making progress, but we only have 10 items. The original response said we had 93 hits! Which means we have to make 93 /10, or 10 requests to get them all. Sounds like a job for a loop! 
End of explanation final_docs = [] for d in all_docs: # create empty dict for each doc to collect info targeted_info = {} targeted_info['id'] = d['_id'] targeted_info['headline'] = d['headline']['main'] targeted_info['date'] = d['pub_date'][0:10] # cutting time of day. targeted_info['word_count'] = d['word_count'] targeted_info['keywords'] = [keyword['value'] for keyword in d['keywords']] try: # some docs don't have this info targeted_info['lead_paragraph'] = d['lead_paragraph'] except: pass # append final doc info to list final_docs.append(targeted_info) Explanation: 4. Exporting to CSV Great, now we have all the articles. Let's just take out some bits of information and write to a CSV. End of explanation header = final_docs[1].keys() with open('all-docs.csv', 'w') as output_file: dict_writer = csv.DictWriter(output_file, header) dict_writer.writeheader() dict_writer.writerows(final_docs) pandas.read_csv('all-docs.csv') Explanation: We can write our sifted information to a CSV now: End of explanation
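The paging loop in this record pauses a fixed five seconds and assumes every request succeeds; if the API rate-limits or returns an error page, json.loads will choke on the body. One way to harden it is a small retry wrapper like the sketch below. The helper name fetch_page and the retry and back-off numbers are illustrative choices, not part of the NYT API or of the original lesson.

import time
import requests

def fetch_page(url, params, max_retries=3, pause=5):
    # retry the GET a few times, waiting a little longer after each failure
    for attempt in range(1, max_retries + 1):
        resp = requests.get(url, params=params)
        if resp.status_code == 200:
            return resp.json()        # the same dict json.loads(resp.text) builds
        time.sleep(pause * attempt)
    resp.raise_for_status()           # surface the last HTTP error instead of bad JSON

Inside the loop it would replace the requests.get / json.loads pair, e.g. data = fetch_page(base_url + response_format, search_params).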
6,430
Given the following text description, write Python code to implement the functionality described below step by step Description: timeseries comparison We can compare pairs of time series (e.g. rates) as long as the sampling times for the two series are (roughly) the same. Below we make two almost identical series A and B, and look at their correlation. Even though we corrupt B with some noise the correlation is still pretty high. Step1: The more we corrupt B with noise, the lower the correlation Step2: we can also show that, by simply delaying the sine wave by an increasing amount, the correlation moves from 1 to -1, and back again Step3: distribution comparison If we're interested in comparing how a variable is distributed in two different streams, we can compare their histograms using a number known as the "Kullback Leibler divergence", normally referred to as D_KL. Let's make a histogram P and compare it to another histogram Q. Step4: Note that the Kullback Leibler Divergence IS NOT A DISTANCE! It's not symmetric, amongst other things. We can use the symmetrised DKL, which is just the sum. This still isn't a distance, but it's symmetric. Step5: if we make a distribution that is markedly different, we should see a higher divergence
Python Code: time = np.arange(0,20,0.1) A = [np.sin(x)**2 for x in time] sns.tsplot(A,interpolate=False) B = [np.sin(x)**2 + np.random.exponential(0.2) for x in time] sns.tsplot(B,interpolate=False) phi, p = scipy.stats.pearsonr(A,B) print phi # correlation is between -1 and 1. -1 means that one series goes up when the other goes down. Explanation: timeseries comparison We can compare pairs of time series (e.g. rates) as long as the sampling times for the two series are (roughly) the same. Below we make two almost identical series A and B, and look at their correlation. Even though we corrupt B with some noise the correlation is still pretty high. End of explanation genB = lambda r: [np.sin(x)**2 + np.random.exponential(r) for x in time] corruption = np.linspace(0.1, 3, 100) d = pd.DataFrame({"scale": corruption, "correlation": [scipy.stats.pearsonr(A,genB(r))[0] for r in corruption]}) sns.regplot("scale","correlation",d,fit_reg=False) Explanation: The more we corrupt B with noise, the lower the correlation: End of explanation genC = lambda phi: [np.sin(x + phi)**2 for x in np.arange(0,20,0.1)] #data = pd.DataFrame({"0-phase":genC(0), "pi-phase":genC(np.pi), "time":time}) #data = pd.melt(data,id_vars=["time"]) #sns.tsplot(time="time",data=data, value="value", condition="variable", interpolate=False,err_style=None) plt.figure() plt.plot(genC(0)) plt.plot(genC(np.pi/4)) plt.plot(genC(np.pi/2)) plt.show() phase = np.linspace(0,np.pi,100) d = pd.DataFrame({"phase": phase, "correlation": [scipy.stats.pearsonr(A,genC(phi))[0] for phi in phase]}) sns.regplot("phase","correlation",d,fit_reg=False) Explanation: we can also show that, by simply delaying the sine wave by an increasing amount, the correlation moves from 1 to -1, and back again End of explanation k = 10 categories = range(k) # we can pull a random categorical distribution from the Dirichlet, which is a distribution over vectors that sum to 1. # This is super cool but don't worry too much about it! P = np.random.dirichlet([1 for i in categories]) sns.barplot(x="categories", y="probability", data=pd.DataFrame({"probability":P, "categories":categories})) Q = np.random.dirichlet([1 for i in categories]) sns.barplot(x="categories", y="probability", data=pd.DataFrame({"probability":Q, "categories":categories})) DKL = lambda p,q : sum(p[i] * np.log(p[i]/q[i]) for i in range(len(p))) Explanation: distribution comparison If we're interested in comparing how a variable is distributed in two different streams, we can compare their histograms using a number known as the "Kullback Leibler divergence", normally referred to as D_KL. Let's make a histogram P and compare it to another histogram Q. End of explanation print DKL(P,Q), DKL(Q,P), DKL(P,Q)+DKL(Q,P) print scipy.stats.entropy(P,Q) ## note that scipy.stats.entropy will also give you the KL divergence! Explanation: Note that the Kullback Leibler Divergence IS NOT A DISTANCE! It's not symmetric, amongst other things. We can use the symmetrised DKL, which is just the sum. This still isn't a distance, but it's symmetric. End of explanation R = np.random.dirichlet([np.exp(i+1) for i in categories]) sns.barplot(x="categories", y="probability", data=pd.DataFrame({"probability":R, "categories":categories})) DKL(P,R)+DKL(R,P), DKL(R,Q)+DKL(Q,R) Explanation: if we make a distribution that is markedly different, we should see a higher divergence End of explanation
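Because the symmetrised sum DKL(P,Q) + DKL(Q,P) used above is still not a metric, it is worth knowing that SciPy ships a closely related quantity that is one: the Jensen-Shannon distance. The snippet below is a supplementary sketch rather than part of the notebook; it draws two Dirichlet histograms the same way and prints both numbers (scipy.stats.entropy(p, q) is exactly DKL(p||q)).

import numpy as np
from scipy.stats import entropy
from scipy.spatial.distance import jensenshannon

rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(10))
Q = rng.dirichlet(np.ones(10))

sym_kl = entropy(P, Q) + entropy(Q, P)   # the symmetrised KL from the cell above
js_dist = jensenshannon(P, Q)            # square root of the JS divergence, a true metric
print("symmetrised KL:", sym_kl)
print("Jensen-Shannon distance:", js_dist)

Unlike DKL, the Jensen-Shannon distance is symmetric, bounded, and stays finite when a bin is empty in only one of the two histograms.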
6,431
Given the following text description, write Python code to implement the functionality described below step by step Description: Batch processing Decide on a size limit for aiff files (882kb/10s file) Generate that many files Process those files and append results to DataFrame Remove those files Step1: Loading
Python Code: import hurry.filesize hurry.filesize.size(903168, system=hurry.filesize.alternative) hurry.filesize.size(882102, system=hurry.filesize.si) hurry.filesize.size(882000, system=hurry.filesize.si) hurry.filesize.size(1073741824, system=hurry.filesize.alternative) hurry.filesize.size(1073741824, system=hurry.filesize.si) 1073741824 / 882102 Explanation: Batch processing Decide on a size limit for aiff files (882kb/10s file) Generate that much files Process those files and append results to DataFrame Remove those files End of explanation dir_list = os.listdir(path=this_dir) if "df.p" in dir_list: _pickle_path = os.path.join(this_dir, "df.p") _old_df = pd.read_pickle(_pickle_path) _old_df["hash"] _old_df.dtypes pmtx = gk.generator.gendy1.gen_params(rows=20) df = gk.generator.gendy1.format_params(pmtx) df.sort_values(["hash"]) for i, row in df.iterrows(): session = nonrealtimetools.Session() builder = gk.generator.gendy1.make_builder(row) out = gk.generator.gendy1.build_out(builder) synthdef = builder.build() with session.at(0): synth_a = session.add_synth(duration=10, synthdef=synthdef) gk.util.render_session(session, this_dir, row["hash"]) dt = datetime.now().strftime("%Y_%m_%d") #_%H-%M-%S") identifier = '{0}-len{1}'.format(dt, str(df.shape[0])) df.to_pickle("{0}/df-{1}.p".format(this_dir, dt)) df.to_pickle("{0}/df.p".format(this_dir, dt)) Explanation: Loading End of explanation
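The hurry.filesize calls in this record only pretty-print byte counts; the budgeting itself is the final division. A plain-Python restatement of that arithmetic is sketched below, taking the 882,102-byte figure for a 10-second render from the sizes above and assuming a 1 GiB working budget as the batch limit.

BYTES_PER_FILE = 882_102      # one 10 s aiff render, per the sizes shown above
BUDGET_BYTES = 1024**3        # assumed 1 GiB working budget per batch

batch_size = BUDGET_BYTES // BYTES_PER_FILE
minutes_of_audio = batch_size * 10 / 60
print(f"{batch_size} files per batch (~{minutes_of_audio:.0f} minutes of audio)")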
6,432
Given the following text description, write Python code to implement the functionality described below step by step Description: The project I am angling towards deals with my website's traffic and taxonomy data. I will be trying to build models that can accurately predict which tags are best for specific channels of traffic, and will also be investigating the longitudinal nature of an article's lifecycle (days to 90% of traffic is my current threshold for an article being done, but I will refine that with some descriptive statistics). I'll also want to investigate what kind of content performs best in each month. Step1: First, import the table of tag-article mappings from our SQL db Step2: We only care about the articles for this analysis. Place entries are outside scope. Step3: Extract the tag name from the tag's URL Step4: Create a tag_url column that just has the tag's name Step5: Get dummies for each tag Step6: Join the dummies back to the main dataframe Step7: De-dupe articles but maintain the tagging data using groupby and sum Step8: Using a csv generated by a script I wrote that queries Google Analytics for pageviews per article from publish date to n-days post-publication, import pageview data and join it to the tag/article DataFrame Step9: Set the pageviews index to the url column to make joining easy Step10: Reset index Step11: Articles published more recently have, on average, received much more traffic than older articles (this reflects growth and heavier distribution of the newer content). The drop in the mean as we move into 2016 is an artifact of the article's lifecycle not being complete. Article lifecycle will be explored below. Step12: Let's import the time-series I created with a python script that asks GA for the daily time-series of Pageviewsof each article from publication date forward two years. Step13: It was easier to collect the data from GA by looping over the columns in my original dataframe, but having each row be an article record is easier to work with now, so we transpose. Step14: Let's determine how many days post-publication it takes for an article to collect 90% of total pageviews. Step15: Now let's look at the number of articles per tag (we will later join the two DataFrames above into one) Step16: Now I'm going to try this with the time series 30days pvs Step17: Let's check average performance when just looking at Simplereach Tag data Step18: Let's run some regression analysis on our tag_analysis DataFrame Step19: Now let's try it with KNN Step20: Let's check the roc_auc scores for both the knn and logistic regression models. Step21: Looks like they give similar scores, but the scores are easily manipulated by changing the threshold for the number of articles per tag and by changing the threshold for "success" (currently set at > 10,000 Pageviews). Step22: Now let's try RandomForest Step23: Let's try the Logistic Model but with more tags
Python Code: import pandas as pd import numpy as np import matplotlib.pyplot as plt %matplotlib inline Explanation: The project I am angling towards deals with my website's traffic and taxonomy data. I will be trying to build models that can accurately predict which tags are best for specific channels of traffic, and will also be investigating the longitudinal nature of an article's lifecycle (days to 90% of traffic is my current threshold for an article being done, but I will refine that with some descriptive statistics). I'll also want to investigate what kind of content performs best in each month. End of explanation df = pd.read_csv('atlas-taggings.csv') df.head(10) articles = df[df.tagged_type == 'Article'] Explanation: First, import the table of tag-article mappings from our SQL db End of explanation articles.head() Explanation: We only care about the articles for this analysis. Place entries are outside scope. End of explanation def get_tag(x): return x.split('/')[2] #changing this function to get_tag_name() in module. Explanation: Extract the tag name from the tag's URL End of explanation articles.tag_url = articles.tag_url.apply(get_tag) articles.head() Explanation: Create a tag_url column that just has the tag's name End of explanation test = pd.get_dummies(articles.tag_url) test.head() Explanation: Get dummies for each tag End of explanation articles = articles.join(test) articles.drop(['tag_id','tag_url','tagged_type','tagged_id'],axis=1,inplace=True) articles.head() Explanation: Join the dummies back to the main dataframe End of explanation unique_articles = articles.groupby('tagged_url').sum() #made into func unique_articles = unique_articles.reset_index() unique_articles = unique_articles.set_index('tagged_url') Explanation: De-dupe articles but maintain the tagging data using groupby and sum End of explanation #now we need the pageviews and have to map the URLs to Page Titles pageviews = pd.read_csv('output_articles_performance.csv',header=None,names=['url','published','pageviews']) pageviews.head() #In the future I should import the module and run it here instead of grabbing. 
pageviews.url = ['www.atlasobscura.com/articles/' + x for x in pageviews.url] pageviews.head() pageviews.describe() Explanation: Using a csv generated by a script I wrote that queries Google Analytics for pageviews per article from publish date to n-days post-publication, import pageview data and join it to the tag/article DataFrame End of explanation pageviews.set_index('url',inplace=True) article_set = unique_articles.join(pageviews) article_set.head() Explanation: Set the pageviews index to the url column to make joining easy End of explanation article_set.reset_index() article_set['upper_quartile'] = [1 if x > 10000 else 0 for x in article_set.pageviews] article_set.pageviews.plot(kind='hist', bins=100,title='Page View Distribution, All Content') article_set['published'] = pd.to_datetime(article_set['published']) article_set article_set['year'] = pd.DatetimeIndex(article_set['published']).year ax = article_set.boxplot(column='pageviews',by='year',figsize=(6,6),showfliers=False) ax.set(title='PV distribution by year',ylabel='pageviews') Explanation: Reset index End of explanation yearly = article_set.set_index('published').resample('M').mean().plot(y='pageviews') yearly.set(title='Total Pageviews By Month of Article Publication') Explanation: Articles published more recently have, on average, received much more traffic than older articles (this reflects growth and heavier distribution of the newer content). The drop in the mean as we move into 2016 is an artifact of the article's lifecycle not being complete. Article lifecycle will be explored below. End of explanation time_series = pd.read_csv('time-series.csv') type(time_series) time_series = time_series.drop('Unnamed: 0',axis=1) Explanation: Let's import the time-series I created with a python script that asks GA for the daily time-series of Pageviewsof each article from publication date forward two years. End of explanation time_series = time_series.T time_series.columns time_series['total'] = time_series.sum(axis=1) time_series.head() Explanation: It was easier to collect the data from GA by looping over the columns in my original dataframe, but having each row be an article record is easier to work with now, so we transpose. End of explanation time_series['days_to_90p']= [(time_series.iloc[x].expanding().sum() > time_series.iloc[x].total*.90).argmax() \ for x in range(len(time_series))] time_series.reset_index(inplace=True) time_series.head(1) time_series['index'] = ['www.atlasobscura.com/articles/' + x for x in time_series['index']] time_series.set_index('index',inplace=True) time_series = time_series.join(pageviews.published) time_series.head(5) time_series['published'] = pd.to_datetime(time_series.published) time_series['year_pub'] = pd.DatetimeIndex(time_series['published']).year time_series.boxplot(column='days_to_90p',by='year_pub') time_series.year_pub.value_counts(dropna=False) time_series[['days_to_90p','total','year_pub']].corr() #I DON'T KNOW WHY THIS WON'T WORK time_series['30-day-PVs'] = [time_series.fillna(value=0).iloc[x,0:31].sum() for x in range(len(time_series))] time_series['7-day-PVs'] = [time_series.fillna(value=0).iloc[x,0:8].sum() for x in range(len(time_series))] Explanation: Let's determine how many days post-publication it takes for an article to collect 90% of total pageviews. 
End of explanation total_tagged= pd.DataFrame(data=article_set.sum(),columns = ['num_tagged']) total_tagged.sort_values('num_tagged',ascending=False,inplace=True) total_tagged.drop('pageviews',axis=0,inplace=True) total_tagged[total_tagged.num_tagged >= 10].count() total_tagged[total_tagged.num_tagged <=5].index #tag_analysis = article_set.drop(total_tagged[total_tagged.num_tagged < 5].index,axis=1) #I'm resetting tag_analysis to contain all tags so I can manipulate later whenever I want. It makes it more clear. tag_analysis = article_set print tag_analysis.shape tag_analysis.head() tag_analysis.tail() tag_analysis.to_csv('tag_analysis_ready.csv') total_tagged.head(30) print total_tagged.shape from sklearn.preprocessing import PolynomialFeatures poly = PolynomialFeatures(interaction_only=True) poly_df = pd.DataFrame(poly.fit_transform(tag_analysis.fillna(0).drop(['published','pageviews','upper_quartile','year'],axis=1))) poly.n_output_features_ total_tagged.ix['extra-mile'] regular_features = ['places-you-can-no-longer-go','100-wonders','extra-mile','video-wonders','news','features','columns', 'found','animals','fleeting-wonders','visual','other-capitals-of-the-world','video','art','list','objects-of-intrigue', 'maps','morbid-monday','female-explorers','naturecultures'] total_tagged[total_tagged.num_tagged >10].shape interactions = pd.DataFrame() for item in regular_features: for column in tag_analysis.drop(['published','pageviews','upper_quartile','year'],axis=1).drop( total_tagged[total_tagged.num_tagged < 10].index,axis=1).columns: interactions[(item + '_' + column)] = tag_analysis[item] + tag_analysis[column] #Just sum the row and column and then turn any 2s into 1s and 1s into zeros. def correct_values(x): if x == 2.0: return 1 elif x == 1.0: return 0 else: return 0 for item in interactions.columns: interactions[item] = interactions[item].apply(correct_values) interactions.head(2) tagged_total = pd.DataFrame(data =interactions.sum(),columns=['num_tagged']) tagged_total = tagged_total.sort_values('num_tagged',ascending=False) identity_tags = tagged_total[0:26].index interactions = interactions.drop(identity_tags,axis=1) tagged_total = pd.DataFrame(data =interactions.sum(),columns=['num_tagged']) tagged_total = tagged_total.sort_values('num_tagged',ascending=False) tagged_total.head(10) #DO I WANT TO DROP THE EMPTY COLUMNS? #for item in interactions.columns: # if interactions[item].sum == 0: # interactions = interactions.drop(item,axis=1) interactions.head(10) interactions = interactions.join(pageviews) #drop empty cols def drop_zero_cols(df): for item in df.columns: if df[item].sum() == 0: df = df.drop(item,axis=1) else: continue return df interactions = drop_zero_cols(interactions.fillna(0).drop(['published','pageviews'],axis=1)) interactions = interactions.join(pageviews) interactions.head(1) interaction_totals = pd.DataFrame(interactions.sum().sort_values(ascending=False),columns=['num_tagged']) interaction_totals[interaction_totals.num_tagged < 4].shape interactions_analysis = interactions.drop(interaction_totals[interaction_totals.num_tagged < 4].index,axis=1) interactions_analysis.head() #Check whether number of Aggregated stories published per day has an impact on average/total Day 0 - 1 traffic. 
from sklearn import linear_model from sklearn import metrics from sklearn import cross_validation interactions_analysis['upper_quartile'] = [1 if x > 10000 else 0 for x in interactions.pageviews] interactions_analysis['twenty_thousand'] = [1 if x > 20000 else 0 for x in interactions.pageviews] y = interactions_analysis.upper_quartile X = interactions_analysis.drop(['pageviews','published','upper_quartile','twenty_thousand'],axis=1) kf = cross_validation.KFold(len(interactions_analysis),n_folds=5) scores = [] for train_index, test_index in kf: lr = linear_model.LogisticRegression().fit(X.iloc[train_index],y.iloc[train_index]) scores.append(lr.score(X.iloc[test_index],y.iloc[test_index])) print "average accuracy for LogisticRegression is", np.mean(scores) print "average of the set is: ", np.mean(y) interactions_lr_scores = lr.predict_proba(X)[:,1] print metrics.roc_auc_score(y,interactions_lr_scores) interactions_probabilities = pd.DataFrame(zip(X.columns,interactions_lr_scores),columns=['tags','probabilities']) interactions_probabilities.sort_values('probabilities',ascending=False) interaction_totals.head(2) def split_tag(x): return x.split('_')[1] interactions_probabilities = interactions_probabilities.reset_index() interactions_probabilities['subtag'] = interactions_probabilities.tags.apply(split_tag) interactions_probabilities = interactions_probabilities.sort_values(['tags','probabilities'],ascending=[1, 0]) interactions_probabilities = interactions_probabilities.set_index('tags').join(interaction_totals) interactions_probabilities interactions_probabilities['pageviews'] = [sum(interactions['pageviews'][interactions[item]==1]) for item in interactions_probabilities.tags] interactions_probabilities['mean-PVs'] = interactions_probabilities['pageviews'] // interactions_probabilities['num_tagged'] regular_features interactions_probabilities[interactions_probabilities.tags.str.contains('features')==True].sort_values('mean-PVs', ascending = False) interactions_probabilities.sort_values('probabilities',ascending = False) np.mean(interactions.pageviews) #I took the dashes out. 
Have to add back for this fix_regular_features = [x.replace(' ','-') for x in regular_features] fig,axes=plt.subplots(figsize=(10,10)) for item, name in enumerate(fix_regular_features): interactions.plot(x=interactions['pageviews'][interactions.columns.str.contains(name)==True],kind='box',ax=item) plt.show() #doublecheck my work on pageviews vs num-published pub_volume = tag_analysis[['published','pageviews']] pub_volume['num_pubbed'] = 1 pub_volume['published'] = pd.to_datetime(pub_volume.published) pub_volume = pub_volume.set_index('published') pub_volume.head(10) pub_volume = pub_volume.resample('M').sum().dropna() pub_volume['year'] = pub_volume.index.year pub_volume[pub_volume.index.year >=2015].corr() pub_volume[pub_volume.index.year >=2015].plot(kind='scatter',x='num_pubbed',y='pageviews') import seaborn as sns ax = sns.regplot(x='num_pubbed',y='pageviews',data=pub_volume) Explanation: Now let's look at the number of articles per tag (we will later join the two DataFrames above into one) End of explanation #doublecheck my work on pageviews vs num-published pub_volume = time_series[['published','7-day-PVs']] pub_volume['num_pubbed'] = 1 pub_volume['published'] = pd.to_datetime(pub_volume.published) pub_volume = pub_volume.set_index('published') pub_volume.head(10) num_holder = pub_volume.resample('D').sum().dropna().drop('7-day-PVs',axis=1) pub_volume = pub_volume.resample('D').sum().dropna().drop('num_pubbed',axis=1) pub_volume = pub_volume.join(num_holder) pub_volume['year'] = pub_volume.index.year pub_volume[pub_volume.index.year >=2015].corr() pub_volume[pub_volume.index >='2016-01-01'].plot(kind='scatter',x='num_pubbed',y='7-day-PVs',title='7-Day PVs') import seaborn as sns ax = sns.regplot(x='num_pubbed',y='7-day-PVs',data=pub_volume) Explanation: Now I'm going to try this with the time series 30days pvs End of explanation simplereach = pd.read_csv('simplereach-tags.csv') simplereach.head(1) simplereach = simplereach.set_index('Tag') total_tagged2 = total_tagged total_tagged2.head(4) total_tagged2.index = [x.replace('-',' ') for x in total_tagged.index] simplereach = simplereach.join(total_tagged2) simplereach['mean-PVs'] = simplereach['Page Views'] // simplereach['num_tagged'] simplereach['mean-shares'] = simplereach['Facebook Shares'] // simplereach['num_tagged'] simplereach = simplereach[['mean-PVs','mean-shares','num_tagged']] simplereach[simplereach['num_tagged'] > 5].sort_values('mean-PVs',ascending=False) #regular_features = [x.replace('-',' ') for x in regular_features] simplereach.ix[regular_features].sort_values('mean-PVs',ascending=False) Explanation: Let's check average performance when just looking at Simplereach Tag data End of explanation from sklearn import linear_model from sklearn import metrics tag_analysis.fillna(value=0,inplace=True) y = tag_analysis.upper_quartile X = tag_analysis.drop(['pageviews','published','upper_quartile'],axis=1) from sklearn import cross_validation kf = cross_validation.KFold(len(tag_analysis),n_folds=5) scores = [] for train_index, test_index in kf: lr = linear_model.LogisticRegression().fit(X.iloc[train_index],y.iloc[train_index]) scores.append(lr.score(X.iloc[test_index],y.iloc[test_index])) print "average accuracy for LogisticRegression is", np.mean(scores) print "average of the set is: ", np.mean(y) lr_scores = lr.predict_proba(X)[:,1] print metrics.roc_auc_score(y,lr_scores) print metrics.roc_auc_score(y,lr_scores) lr_scores coefficients = pd.DataFrame(zip(X.columns,lr.coef_[0]),columns=['tags','coefficients']) probabilities = 
pd.DataFrame(zip(X.columns,lr_scores),columns=['tags','probabilities']) probabilities.sort_values('probabilities',ascending=False) coefficients.sort_values('coefficients',ascending=False) tag_analysis[tag_analysis['100-wonders'] ==1].describe() tag_analysis.head() Explanation: Let's run some regression analysis on our tag_analysis DataFrame End of explanation from sklearn.grid_search import GridSearchCV from sklearn.neighbors import KNeighborsClassifier params = {'n_neighbors': [x for x in range(2,200,1)], 'weights': ['distance','uniform']} gs = GridSearchCV(estimator=KNeighborsClassifier(),param_grid=params,n_jobs=8,cv=10) gs.fit(X,y) print gs.best_params_ print gs.best_score_ print type(gs.best_estimator_) knn = gs.best_estimator_.fit(X,y) knn_scores = knn.predict_proba(X)[:,1] print np.mean(knn_scores) print np.mean(lr_scores) knn_probabilities = pd.DataFrame(zip(X.columns,knn_scores),columns=['tags','probabilities']) knn_probabilities.sort_values('probabilities',ascending=False) Explanation: Now let's try it with KNN End of explanation print 'knn', metrics.roc_auc_score(y,knn_scores) print 'lr', metrics.roc_auc_score(y,lr_scores) Explanation: Let's check the roc_auc scores for both the knn and logistic regression models. End of explanation probabilities = probabilities.set_index('tags') probabilities = probabilities.join(total_tagged) probabilities.to_csv('tag-probabilities-logisticregression.csv') Explanation: Looks like they give similar scores, but the scores are easily manipulated by changing the threshold for the number of articles per tag and by changing the threshold for "success" (currently set at > 10,000 Pageviews). End of explanation from sklearn.ensemble import RandomForestClassifier params = {'max_depth': np.arange(20,100,2), 'min_samples_leaf': np.arange(90,200,2), 'n_estimators': 20} gs1 = GridSearchCV(RandomForestClassifier(),param_grid=params, cv=10, scoring='roc_auc',n_jobs=8,verbose=1) gs1.fit(X,y) print gs1.best_params_ print gs1.best_score_ rf = RandomForestClassifier(gs1.best_estimator_) rf.fit(X,y) probs = rf.predict_proba(X)[:,1] print rf.score(X,y) print metrics.roc_auc_score(y,probs) probs = pd.DataFrame(zip(X.columns,probs),columns=['tags','probabilities']) probs.sort_values('probabilities',ascending=False) Explanation: Now let's try RandomForest End of explanation tag_analysis2 = article_set.drop(total_tagged[total_tagged.num_tagged < 15].index,axis=1) tag_analysis2['ten_thousand'] = [1 if x > 10000 else 0 for x in tag_analysis2.pageviews] tag_analysis2.fillna(value=0,inplace=True) y2 = tag_analysis2.ten_thousand X2 = tag_analysis2.drop(['pageviews','upper_quartile','ten_thousand'],axis=1) kf2 = cross_validation.KFold(len(tag_analysis2),n_folds=5) scores2 = [] for train_index, test_index in kf2: lr2 = linear_model.LogisticRegression().fit(X2.iloc[train_index],y2.iloc[train_index]) scores2.append(lr2.score(X2.iloc[test_index],y2.iloc[test_index])) print "average accuracy for LogisticRegression is", np.mean(scores2) print "average of the set is: ", np.mean(y2) print tag_analysis2.shape print y2.shape print X2.shape lr_scores2 = lr2.predict_proba(X2)[:,1] lr2_probs = pd.DataFrame(zip(X2.columns,lr_scores2),columns=['tags','probabilities']) lr2_probs.sort_values('probabilities',ascending=False) metrics.roc_auc_score(y2,lr2.predict_proba(X2)[:,1]) lr2_probs = lr2_probs.set_index('tags') lr2_probs = lr2_probs.join(total_tagged) plt.figure(figsize=(10,10)) plt.scatter(lr2_probs.num_tagged,lr2_probs.probabilities) plt.show() lr2_probs = 
lr2_probs.sort_values('probabilities',ascending=False) lr2_probs = lr2_probs.reset_index() lr2_probs.to_csv('min15tags_min10000pvs.csv') lr2_probs.shape lr2_probs Explanation: Let's try the Logistic Model but with more tags End of explanation
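One caveat about the model comparison in this record: the ROC AUC numbers are computed on the same rows each classifier was fit on, which flatters KNN and the random forest in particular. A sketch of an out-of-fold version is below. It assumes X and y are the tag-dummy matrix and the 10,000-pageview label built above, and it uses the newer sklearn.model_selection API rather than the notebook's sklearn.cross_validation import.

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

# each article's probability comes from a fold that never saw that article
oof_probs = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                              cv=5, method='predict_proba')[:, 1]
print("out-of-fold ROC AUC:", roc_auc_score(y, oof_probs))

Swapping the estimator gives the same out-of-fold score for the KNN and random forest models, which makes the comparison between the three apples-to-apples.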
6,433
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Atmos MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties --&gt; Overview 2. Key Properties --&gt; Resolution 3. Key Properties --&gt; Timestepping 4. Key Properties --&gt; Orography 5. Grid --&gt; Discretisation 6. Grid --&gt; Discretisation --&gt; Horizontal 7. Grid --&gt; Discretisation --&gt; Vertical 8. Dynamical Core 9. Dynamical Core --&gt; Top Boundary 10. Dynamical Core --&gt; Lateral Boundary 11. Dynamical Core --&gt; Diffusion Horizontal 12. Dynamical Core --&gt; Advection Tracers 13. Dynamical Core --&gt; Advection Momentum 14. Radiation 15. Radiation --&gt; Shortwave Radiation 16. Radiation --&gt; Shortwave GHG 17. Radiation --&gt; Shortwave Cloud Ice 18. Radiation --&gt; Shortwave Cloud Liquid 19. Radiation --&gt; Shortwave Cloud Inhomogeneity 20. Radiation --&gt; Shortwave Aerosols 21. Radiation --&gt; Shortwave Gases 22. Radiation --&gt; Longwave Radiation 23. Radiation --&gt; Longwave GHG 24. Radiation --&gt; Longwave Cloud Ice 25. Radiation --&gt; Longwave Cloud Liquid 26. Radiation --&gt; Longwave Cloud Inhomogeneity 27. Radiation --&gt; Longwave Aerosols 28. Radiation --&gt; Longwave Gases 29. Turbulence Convection 30. Turbulence Convection --&gt; Boundary Layer Turbulence 31. Turbulence Convection --&gt; Deep Convection 32. Turbulence Convection --&gt; Shallow Convection 33. Microphysics Precipitation 34. Microphysics Precipitation --&gt; Large Scale Precipitation 35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics 36. Cloud Scheme 37. Cloud Scheme --&gt; Optical Cloud Properties 38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution 39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution 40. Observation Simulation 41. Observation Simulation --&gt; Isscp Attributes 42. Observation Simulation --&gt; Cosp Attributes 43. Observation Simulation --&gt; Radar Inputs 44. Observation Simulation --&gt; Lidar Inputs 45. Gravity Waves 46. Gravity Waves --&gt; Orographic Gravity Waves 47. Gravity Waves --&gt; Non Orographic Gravity Waves 48. Solar 49. Solar --&gt; Solar Pathways 50. Solar --&gt; Solar Constant 51. Solar --&gt; Orbital Parameters 52. Solar --&gt; Insolation Ozone 53. Volcanos 54. Volcanos --&gt; Volcanoes Treatment 1. Key Properties --&gt; Overview Top level key properties 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Model Family Is Required Step7: 1.4. Basic Approximations Is Required Step8: 2. Key Properties --&gt; Resolution Characteristics of the model resolution 2.1. Horizontal Resolution Name Is Required Step9: 2.2. Canonical Horizontal Resolution Is Required Step10: 2.3. Range Horizontal Resolution Is Required Step11: 2.4. Number Of Vertical Levels Is Required Step12: 2.5. High Top Is Required Step13: 3. Key Properties --&gt; Timestepping Characteristics of the atmosphere model time stepping 3.1. Timestep Dynamics Is Required Step14: 3.2. Timestep Shortwave Radiative Transfer Is Required Step15: 3.3. Timestep Longwave Radiative Transfer Is Required Step16: 4. Key Properties --&gt; Orography Characteristics of the model orography 4.1. Type Is Required Step17: 4.2. Changes Is Required Step18: 5. Grid --&gt; Discretisation Atmosphere grid discretisation 5.1. 
Overview Is Required Step19: 6. Grid --&gt; Discretisation --&gt; Horizontal Atmosphere discretisation in the horizontal 6.1. Scheme Type Is Required Step20: 6.2. Scheme Method Is Required Step21: 6.3. Scheme Order Is Required Step22: 6.4. Horizontal Pole Is Required Step23: 6.5. Grid Type Is Required Step24: 7. Grid --&gt; Discretisation --&gt; Vertical Atmosphere discretisation in the vertical 7.1. Coordinate Type Is Required Step25: 8. Dynamical Core Characteristics of the dynamical core 8.1. Overview Is Required Step26: 8.2. Name Is Required Step27: 8.3. Timestepping Type Is Required Step28: 8.4. Prognostic Variables Is Required Step29: 9. Dynamical Core --&gt; Top Boundary Type of boundary layer at the top of the model 9.1. Top Boundary Condition Is Required Step30: 9.2. Top Heat Is Required Step31: 9.3. Top Wind Is Required Step32: 10. Dynamical Core --&gt; Lateral Boundary Type of lateral boundary condition (if the model is a regional model) 10.1. Condition Is Required Step33: 11. Dynamical Core --&gt; Diffusion Horizontal Horizontal diffusion scheme 11.1. Scheme Name Is Required Step34: 11.2. Scheme Method Is Required Step35: 12. Dynamical Core --&gt; Advection Tracers Tracer advection scheme 12.1. Scheme Name Is Required Step36: 12.2. Scheme Characteristics Is Required Step37: 12.3. Conserved Quantities Is Required Step38: 12.4. Conservation Method Is Required Step39: 13. Dynamical Core --&gt; Advection Momentum Momentum advection scheme 13.1. Scheme Name Is Required Step40: 13.2. Scheme Characteristics Is Required Step41: 13.3. Scheme Staggering Type Is Required Step42: 13.4. Conserved Quantities Is Required Step43: 13.5. Conservation Method Is Required Step44: 14. Radiation Characteristics of the atmosphere radiation process 14.1. Aerosols Is Required Step45: 15. Radiation --&gt; Shortwave Radiation Properties of the shortwave radiation scheme 15.1. Overview Is Required Step46: 15.2. Name Is Required Step47: 15.3. Spectral Integration Is Required Step48: 15.4. Transport Calculation Is Required Step49: 15.5. Spectral Intervals Is Required Step50: 16. Radiation --&gt; Shortwave GHG Representation of greenhouse gases in the shortwave radiation scheme 16.1. Greenhouse Gas Complexity Is Required Step51: 16.2. ODS Is Required Step52: 16.3. Other Flourinated Gases Is Required Step53: 17. Radiation --&gt; Shortwave Cloud Ice Shortwave radiative properties of ice crystals in clouds 17.1. General Interactions Is Required Step54: 17.2. Physical Representation Is Required Step55: 17.3. Optical Methods Is Required Step56: 18. Radiation --&gt; Shortwave Cloud Liquid Shortwave radiative properties of liquid droplets in clouds 18.1. General Interactions Is Required Step57: 18.2. Physical Representation Is Required Step58: 18.3. Optical Methods Is Required Step59: 19. Radiation --&gt; Shortwave Cloud Inhomogeneity Cloud inhomogeneity in the shortwave radiation scheme 19.1. Cloud Inhomogeneity Is Required Step60: 20. Radiation --&gt; Shortwave Aerosols Shortwave radiative properties of aerosols 20.1. General Interactions Is Required Step61: 20.2. Physical Representation Is Required Step62: 20.3. Optical Methods Is Required Step63: 21. Radiation --&gt; Shortwave Gases Shortwave radiative properties of gases 21.1. General Interactions Is Required Step64: 22. Radiation --&gt; Longwave Radiation Properties of the longwave radiation scheme 22.1. Overview Is Required Step65: 22.2. Name Is Required Step66: 22.3. Spectral Integration Is Required Step67: 22.4. 
Transport Calculation Is Required Step68: 22.5. Spectral Intervals Is Required Step69: 23. Radiation --&gt; Longwave GHG Representation of greenhouse gases in the longwave radiation scheme 23.1. Greenhouse Gas Complexity Is Required Step70: 23.2. ODS Is Required Step71: 23.3. Other Flourinated Gases Is Required Step72: 24. Radiation --&gt; Longwave Cloud Ice Longwave radiative properties of ice crystals in clouds 24.1. General Interactions Is Required Step73: 24.2. Physical Reprenstation Is Required Step74: 24.3. Optical Methods Is Required Step75: 25. Radiation --&gt; Longwave Cloud Liquid Longwave radiative properties of liquid droplets in clouds 25.1. General Interactions Is Required Step76: 25.2. Physical Representation Is Required Step77: 25.3. Optical Methods Is Required Step78: 26. Radiation --&gt; Longwave Cloud Inhomogeneity Cloud inhomogeneity in the longwave radiation scheme 26.1. Cloud Inhomogeneity Is Required Step79: 27. Radiation --&gt; Longwave Aerosols Longwave radiative properties of aerosols 27.1. General Interactions Is Required Step80: 27.2. Physical Representation Is Required Step81: 27.3. Optical Methods Is Required Step82: 28. Radiation --&gt; Longwave Gases Longwave radiative properties of gases 28.1. General Interactions Is Required Step83: 29. Turbulence Convection Atmosphere Convective Turbulence and Clouds 29.1. Overview Is Required Step84: 30. Turbulence Convection --&gt; Boundary Layer Turbulence Properties of the boundary layer turbulence scheme 30.1. Scheme Name Is Required Step85: 30.2. Scheme Type Is Required Step86: 30.3. Closure Order Is Required Step87: 30.4. Counter Gradient Is Required Step88: 31. Turbulence Convection --&gt; Deep Convection Properties of the deep convection scheme 31.1. Scheme Name Is Required Step89: 31.2. Scheme Type Is Required Step90: 31.3. Scheme Method Is Required Step91: 31.4. Processes Is Required Step92: 31.5. Microphysics Is Required Step93: 32. Turbulence Convection --&gt; Shallow Convection Properties of the shallow convection scheme 32.1. Scheme Name Is Required Step94: 32.2. Scheme Type Is Required Step95: 32.3. Scheme Method Is Required Step96: 32.4. Processes Is Required Step97: 32.5. Microphysics Is Required Step98: 33. Microphysics Precipitation Large Scale Cloud Microphysics and Precipitation 33.1. Overview Is Required Step99: 34. Microphysics Precipitation --&gt; Large Scale Precipitation Properties of the large scale precipitation scheme 34.1. Scheme Name Is Required Step100: 34.2. Hydrometeors Is Required Step101: 35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics Properties of the large scale cloud microphysics scheme 35.1. Scheme Name Is Required Step102: 35.2. Processes Is Required Step103: 36. Cloud Scheme Characteristics of the cloud scheme 36.1. Overview Is Required Step104: 36.2. Name Is Required Step105: 36.3. Atmos Coupling Is Required Step106: 36.4. Uses Separate Treatment Is Required Step107: 36.5. Processes Is Required Step108: 36.6. Prognostic Scheme Is Required Step109: 36.7. Diagnostic Scheme Is Required Step110: 36.8. Prognostic Variables Is Required Step111: 37. Cloud Scheme --&gt; Optical Cloud Properties Optical cloud properties 37.1. Cloud Overlap Method Is Required Step112: 37.2. Cloud Inhomogeneity Is Required Step113: 38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution Sub-grid scale water distribution 38.1. Type Is Required Step114: 38.2. Function Name Is Required Step115: 38.3. Function Order Is Required Step116: 38.4. Convection Coupling Is Required Step117: 39. 
Cloud Scheme --&gt; Sub Grid Scale Ice Distribution Sub-grid scale ice distribution 39.1. Type Is Required Step118: 39.2. Function Name Is Required Step119: 39.3. Function Order Is Required Step120: 39.4. Convection Coupling Is Required Step121: 40. Observation Simulation Characteristics of observation simulation 40.1. Overview Is Required Step122: 41. Observation Simulation --&gt; Isscp Attributes ISSCP Characteristics 41.1. Top Height Estimation Method Is Required Step123: 41.2. Top Height Direction Is Required Step124: 42. Observation Simulation --&gt; Cosp Attributes CFMIP Observational Simulator Package attributes 42.1. Run Configuration Is Required Step125: 42.2. Number Of Grid Points Is Required Step126: 42.3. Number Of Sub Columns Is Required Step127: 42.4. Number Of Levels Is Required Step128: 43. Observation Simulation --&gt; Radar Inputs Characteristics of the cloud radar simulator 43.1. Frequency Is Required Step129: 43.2. Type Is Required Step130: 43.3. Gas Absorption Is Required Step131: 43.4. Effective Radius Is Required Step132: 44. Observation Simulation --&gt; Lidar Inputs Characteristics of the cloud lidar simulator 44.1. Ice Types Is Required Step133: 44.2. Overlap Is Required Step134: 45. Gravity Waves Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources. 45.1. Overview Is Required Step135: 45.2. Sponge Layer Is Required Step136: 45.3. Background Is Required Step137: 45.4. Subgrid Scale Orography Is Required Step138: 46. Gravity Waves --&gt; Orographic Gravity Waves Gravity waves generated due to the presence of orography 46.1. Name Is Required Step139: 46.2. Source Mechanisms Is Required Step140: 46.3. Calculation Method Is Required Step141: 46.4. Propagation Scheme Is Required Step142: 46.5. Dissipation Scheme Is Required Step143: 47. Gravity Waves --&gt; Non Orographic Gravity Waves Gravity waves generated by non-orographic processes. 47.1. Name Is Required Step144: 47.2. Source Mechanisms Is Required Step145: 47.3. Calculation Method Is Required Step146: 47.4. Propagation Scheme Is Required Step147: 47.5. Dissipation Scheme Is Required Step148: 48. Solar Top of atmosphere solar insolation characteristics 48.1. Overview Is Required Step149: 49. Solar --&gt; Solar Pathways Pathways for solar forcing of the atmosphere 49.1. Pathways Is Required Step150: 50. Solar --&gt; Solar Constant Solar constant and top of atmosphere insolation characteristics 50.1. Type Is Required Step151: 50.2. Fixed Value Is Required Step152: 50.3. Transient Characteristics Is Required Step153: 51. Solar --&gt; Orbital Parameters Orbital parameters and top of atmosphere insolation characteristics 51.1. Type Is Required Step154: 51.2. Fixed Reference Date Is Required Step155: 51.3. Transient Method Is Required Step156: 51.4. Computation Method Is Required Step157: 52. Solar --&gt; Insolation Ozone Impact of solar insolation on stratospheric ozone 52.1. Solar Ozone Impact Is Required Step158: 53. Volcanos Characteristics of the implementation of volcanoes 53.1. Overview Is Required Step159: 54. Volcanos --&gt; Volcanoes Treatment Treatment of volcanoes in the atmosphere 54.1. Volcanoes Implementation Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'mri', 'sandbox-2', 'atmos') Explanation: ES-DOC CMIP6 Model Properties - Atmos MIP Era: CMIP6 Institute: MRI Source ID: SANDBOX-2 Topic: Atmos Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos. Properties: 156 (127 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:19 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties --&gt; Overview 2. Key Properties --&gt; Resolution 3. Key Properties --&gt; Timestepping 4. Key Properties --&gt; Orography 5. Grid --&gt; Discretisation 6. Grid --&gt; Discretisation --&gt; Horizontal 7. Grid --&gt; Discretisation --&gt; Vertical 8. Dynamical Core 9. Dynamical Core --&gt; Top Boundary 10. Dynamical Core --&gt; Lateral Boundary 11. Dynamical Core --&gt; Diffusion Horizontal 12. Dynamical Core --&gt; Advection Tracers 13. Dynamical Core --&gt; Advection Momentum 14. Radiation 15. Radiation --&gt; Shortwave Radiation 16. Radiation --&gt; Shortwave GHG 17. Radiation --&gt; Shortwave Cloud Ice 18. Radiation --&gt; Shortwave Cloud Liquid 19. Radiation --&gt; Shortwave Cloud Inhomogeneity 20. Radiation --&gt; Shortwave Aerosols 21. Radiation --&gt; Shortwave Gases 22. Radiation --&gt; Longwave Radiation 23. Radiation --&gt; Longwave GHG 24. Radiation --&gt; Longwave Cloud Ice 25. Radiation --&gt; Longwave Cloud Liquid 26. Radiation --&gt; Longwave Cloud Inhomogeneity 27. Radiation --&gt; Longwave Aerosols 28. Radiation --&gt; Longwave Gases 29. Turbulence Convection 30. Turbulence Convection --&gt; Boundary Layer Turbulence 31. Turbulence Convection --&gt; Deep Convection 32. Turbulence Convection --&gt; Shallow Convection 33. Microphysics Precipitation 34. Microphysics Precipitation --&gt; Large Scale Precipitation 35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics 36. Cloud Scheme 37. Cloud Scheme --&gt; Optical Cloud Properties 38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution 39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution 40. Observation Simulation 41. Observation Simulation --&gt; Isscp Attributes 42. Observation Simulation --&gt; Cosp Attributes 43. Observation Simulation --&gt; Radar Inputs 44. Observation Simulation --&gt; Lidar Inputs 45. Gravity Waves 46. Gravity Waves --&gt; Orographic Gravity Waves 47. Gravity Waves --&gt; Non Orographic Gravity Waves 48. Solar 49. Solar --&gt; Solar Pathways 50. Solar --&gt; Solar Constant 51. Solar --&gt; Orbital Parameters 52. Solar --&gt; Insolation Ozone 53. Volcanos 54. 
Volcanos --&gt; Volcanoes Treatment 1. Key Properties --&gt; Overview Top level key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_family') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "AGCM" # "ARCM" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.3. Model Family Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of atmospheric model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "primitive equations" # "non-hydrostatic" # "anelastic" # "Boussinesq" # "hydrostatic" # "quasi-hydrostatic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.4. Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Basic approximations made in the atmosphere. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Resolution Characteristics of the model resolution 2.1. Horizontal Resolution Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.2. Canonical Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.3. Range Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 2.4. Number Of Vertical Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of vertical levels resolved on the computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 2.5. High Top Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Timestepping Characteristics of the atmosphere model time stepping 3.1. Timestep Dynamics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestep for the dynamics, e.g. 30 min. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.2. Timestep Shortwave Radiative Transfer Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for the shortwave radiative transfer, e.g. 1.5 hours. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.3. Timestep Longwave Radiative Transfer Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for the longwave radiative transfer, e.g. 3 hours. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.orography.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "present day" # "modified" # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Orography Characteristics of the model orography 4.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of the orography. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.orography.changes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "related to ice sheets" # "related to tectonics" # "modified mean" # "modified variance if taken into account in model (cf gravity waves)" # TODO - please enter value(s) Explanation: 4.2. Changes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N If the orography type is modified describe the time adaptation changes. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Grid --&gt; Discretisation Atmosphere grid discretisation 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of grid discretisation in the atmosphere End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "spectral" # "fixed grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 6. 
Grid --&gt; Discretisation --&gt; Horizontal Atmosphere discretisation in the horizontal 6.1. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "finite elements" # "finite volumes" # "finite difference" # "centered finite difference" # TODO - please enter value(s) Explanation: 6.2. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "second" # "third" # "fourth" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 6.3. Scheme Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation function order End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "filter" # "pole rotation" # "artificial island" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 6.4. Horizontal Pole Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal discretisation pole singularity treatment End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Gaussian" # "Latitude-Longitude" # "Cubed-Sphere" # "Icosahedral" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 6.5. Grid Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal grid type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "isobaric" # "sigma" # "hybrid sigma-pressure" # "hybrid pressure" # "vertically lagrangian" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 7. Grid --&gt; Discretisation --&gt; Vertical Atmosphere discretisation in the vertical 7.1. Coordinate Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Type of vertical coordinate system End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Dynamical Core Characteristics of the dynamical core 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of atmosphere dynamical core End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the dynamical core of the model. End of explanation # PROPERTY ID - DO NOT EDIT ! 
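# --- Hedged illustration only (hypothetical value; remove before publishing) ---
# timestepping_type, defined in this cell (described in 8.3 below), is a single-valued ENUM:
# exactly one of the listed Valid Choices strings is passed, copied verbatim, e.g.:
# DOC.set_value("semi-implicit")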
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Adams-Bashforth" # "explicit" # "implicit" # "semi-implicit" # "leap frog" # "multi-step" # "Runge Kutta fifth order" # "Runge Kutta second order" # "Runge Kutta third order" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 8.3. Timestepping Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestepping framework type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "surface pressure" # "wind components" # "divergence/curl" # "temperature" # "potential temperature" # "total water" # "water vapour" # "water liquid" # "water ice" # "total water moments" # "clouds" # "radiation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 8.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of the model prognostic variables End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "sponge layer" # "radiation boundary condition" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 9. Dynamical Core --&gt; Top Boundary Type of boundary layer at the top of the model 9.1. Top Boundary Condition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary condition End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.2. Top Heat Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary heat treatment End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.3. Top Wind Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary wind treatment End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "sponge layer" # "radiation boundary condition" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10. Dynamical Core --&gt; Lateral Boundary Type of lateral boundary condition (if the model is a regional model) 10.1. Condition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Type of lateral boundary condition End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11. Dynamical Core --&gt; Diffusion Horizontal Horizontal diffusion scheme 11.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal diffusion scheme name End of explanation # PROPERTY ID - DO NOT EDIT ! 
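# --- Hedged note (assumed convention; please check against current ES-DOC guidance) ---
# Where an ENUM such as scheme_method in this cell (11.2 below) lists "Other: [Please specify]",
# none of the named choices may apply; the usual pattern is assumed to be selecting "Other"
# with a short free-text description appended, e.g. (hypothetical):
# DOC.set_value("Other: fourth-order hyperdiffusion")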
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "iterated Laplacian" # "bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.2. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal diffusion scheme method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Heun" # "Roe and VanLeer" # "Roe and Superbee" # "Prather" # "UTOPIA" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12. Dynamical Core --&gt; Advection Tracers Tracer advection scheme 12.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Tracer advection scheme name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Eulerian" # "modified Euler" # "Lagrangian" # "semi-Lagrangian" # "cubic semi-Lagrangian" # "quintic semi-Lagrangian" # "mass-conserving" # "finite volume" # "flux-corrected" # "linear" # "quadratic" # "quartic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12.2. Scheme Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Tracer advection scheme characteristics End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "dry mass" # "tracer mass" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12.3. Conserved Quantities Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Tracer advection scheme conserved quantities End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "conservation fixer" # "Priestley algorithm" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12.4. Conservation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracer advection scheme conservation method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "VanLeer" # "Janjic" # "SUPG (Streamline Upwind Petrov-Galerkin)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13. Dynamical Core --&gt; Advection Momentum Momentum advection scheme 13.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Momentum advection schemes name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "2nd order" # "4th order" # "cell-centred" # "staggered grid" # "semi-staggered grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.2. 
Scheme Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Momentum advection scheme characteristics End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Arakawa B-grid" # "Arakawa C-grid" # "Arakawa D-grid" # "Arakawa E-grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.3. Scheme Staggering Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Momentum advection scheme staggering type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Angular momentum" # "Horizontal momentum" # "Enstrophy" # "Mass" # "Total energy" # "Vorticity" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.4. Conserved Quantities Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Momentum advection scheme conserved quantities End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "conservation fixer" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.5. Conservation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Momentum advection scheme conservation method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.aerosols') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "sulphate" # "nitrate" # "sea salt" # "dust" # "ice" # "organic" # "BC (black carbon / soot)" # "SOA (secondary organic aerosols)" # "POM (particulate organic matter)" # "polar stratospheric ice" # "NAT (nitric acid trihydrate)" # "NAD (nitric acid dihydrate)" # "STS (supercooled ternary solution aerosol particle)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14. Radiation Characteristics of the atmosphere radiation process 14.1. Aerosols Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Aerosols whose radiative effect is taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15. Radiation --&gt; Shortwave Radiation Properties of the shortwave radiation scheme 15.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of shortwave radiation in the atmosphere End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! 
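# --- Hedged illustration only (hypothetical value) ---
# For an optional STRING such as the shortwave radiation scheme name above (15.2),
# the commonly used scheme acronym is simply passed as a string, e.g.:
# DOC.set_value("RRTMG-SW")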
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "wide-band model" # "correlated-k" # "exponential sum fitting" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.3. Spectral Integration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Shortwave radiation scheme spectral integration End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "two-stream" # "layer interaction" # "bulk" # "adaptive" # "multi-stream" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.4. Transport Calculation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Shortwave radiation transport calculation methods End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 15.5. Spectral Intervals Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Shortwave radiation scheme number of spectral intervals End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CO2" # "CH4" # "N2O" # "CFC-11 eq" # "CFC-12 eq" # "HFC-134a eq" # "Explicit ODSs" # "Explicit other fluorinated gases" # "O3" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16. Radiation --&gt; Shortwave GHG Representation of greenhouse gases in the shortwave radiation scheme 16.1. Greenhouse Gas Complexity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CFC-12" # "CFC-11" # "CFC-113" # "CFC-114" # "CFC-115" # "HCFC-22" # "HCFC-141b" # "HCFC-142b" # "Halon-1211" # "Halon-1301" # "Halon-2402" # "methyl chloroform" # "carbon tetrachloride" # "methyl chloride" # "methylene chloride" # "chloroform" # "methyl bromide" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16.2. ODS Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HFC-134a" # "HFC-23" # "HFC-32" # "HFC-125" # "HFC-143a" # "HFC-152a" # "HFC-227ea" # "HFC-236fa" # "HFC-245fa" # "HFC-365mfc" # "HFC-43-10mee" # "CF4" # "C2F6" # "C3F8" # "C4F10" # "C5F12" # "C6F14" # "C7F16" # "C8F18" # "c-C4F8" # "NF3" # "SF6" # "SO2F2" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16.3. 
Other Flourinated Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Other flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17. Radiation --&gt; Shortwave Cloud Ice Shortwave radiative properties of ice crystals in clouds 17.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with cloud ice crystals End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bi-modal size distribution" # "ensemble of ice crystals" # "mean projected area" # "ice water path" # "crystal asymmetry" # "crystal aspect ratio" # "effective crystal radius" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud ice crystals in the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud ice crystals in the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18. Radiation --&gt; Shortwave Cloud Liquid Shortwave radiative properties of liquid droplets in clouds 18.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with cloud liquid droplets End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud droplet number concentration" # "effective cloud droplet radii" # "droplet size distribution" # "liquid water path" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud liquid droplets in the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! 
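# --- Hedged illustration (assumed handling of multi-valued properties) ---
# optical_methods, defined in this cell (18.3 below), has Cardinality 1.N, so more than one
# of the listed Valid Choices may apply; it is assumed here that one DOC.set_value call is
# made per selected choice, e.g. (hypothetical selection):
# DOC.set_value("Mie theory")
# DOC.set_value("geometric optics")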
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "geometric optics" # "Mie theory" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Monte Carlo Independent Column Approximation" # "Triplecloud" # "analytic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 19. Radiation --&gt; Shortwave Cloud Inhomogeneity Cloud inhomogeneity in the shortwave radiation scheme 19.1. Cloud Inhomogeneity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method for taking into account horizontal cloud inhomogeneity End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 20. Radiation --&gt; Shortwave Aerosols Shortwave radiative properties of aerosols 20.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with aerosols End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "number concentration" # "effective radii" # "size distribution" # "asymmetry" # "aspect ratio" # "mixing state" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 20.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of aerosols in the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 20.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to aerosols in the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 21. Radiation --&gt; Shortwave Gases Shortwave radiative properties of gases 21.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with gases End of explanation # PROPERTY ID - DO NOT EDIT ! 
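# --- Hedged illustration (placeholder text only) ---
# Overview properties such as 22.1 (this cell, described below) are free-text STRINGs;
# a short prose description of the scheme is expected, e.g. (placeholder):
# DOC.set_value("Broadband longwave scheme; see the model description paper for details.")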
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22. Radiation --&gt; Longwave Radiation Properties of the longwave radiation scheme 22.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of longwave radiation in the atmosphere End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the longwave radiation scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "wide-band model" # "correlated-k" # "exponential sum fitting" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22.3. Spectral Integration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Longwave radiation scheme spectral integration End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "two-stream" # "layer interaction" # "bulk" # "adaptive" # "multi-stream" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22.4. Transport Calculation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Longwave radiation transport calculation methods End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 22.5. Spectral Intervals Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Longwave radiation scheme number of spectral intervals End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CO2" # "CH4" # "N2O" # "CFC-11 eq" # "CFC-12 eq" # "HFC-134a eq" # "Explicit ODSs" # "Explicit other fluorinated gases" # "O3" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23. Radiation --&gt; Longwave GHG Representation of greenhouse gases in the longwave radiation scheme 23.1. Greenhouse Gas Complexity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CFC-12" # "CFC-11" # "CFC-113" # "CFC-114" # "CFC-115" # "HCFC-22" # "HCFC-141b" # "HCFC-142b" # "Halon-1211" # "Halon-1301" # "Halon-2402" # "methyl chloroform" # "carbon tetrachloride" # "methyl chloride" # "methylene chloride" # "chloroform" # "methyl bromide" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23.2. 
ODS Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HFC-134a" # "HFC-23" # "HFC-32" # "HFC-125" # "HFC-143a" # "HFC-152a" # "HFC-227ea" # "HFC-236fa" # "HFC-245fa" # "HFC-365mfc" # "HFC-43-10mee" # "CF4" # "C2F6" # "C3F8" # "C4F10" # "C5F12" # "C6F14" # "C7F16" # "C8F18" # "c-C4F8" # "NF3" # "SF6" # "SO2F2" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23.3. Other Flourinated Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Other flourinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 24. Radiation --&gt; Longwave Cloud Ice Longwave radiative properties of ice crystals in clouds 24.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with cloud ice crystals End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bi-modal size distribution" # "ensemble of ice crystals" # "mean projected area" # "ice water path" # "crystal asymmetry" # "crystal aspect ratio" # "effective crystal radius" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 24.2. Physical Reprenstation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud ice crystals in the longwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 24.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud ice crystals in the longwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25. Radiation --&gt; Longwave Cloud Liquid Longwave radiative properties of liquid droplets in clouds 25.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with cloud liquid droplets End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud droplet number concentration" # "effective cloud droplet radii" # "droplet size distribution" # "liquid water path" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud liquid droplets in the longwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "geometric optics" # "Mie theory" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud liquid droplets in the longwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Monte Carlo Independent Column Approximation" # "Triplecloud" # "analytic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 26. Radiation --&gt; Longwave Cloud Inhomogeneity Cloud inhomogeneity in the longwave radiation scheme 26.1. Cloud Inhomogeneity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method for taking into account horizontal cloud inhomogeneity End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 27. Radiation --&gt; Longwave Aerosols Longwave radiative properties of aerosols 27.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with aerosols End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "number concentration" # "effective radii" # "size distribution" # "asymmetry" # "aspect ratio" # "mixing state" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 27.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of aerosols in the longwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 27.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to aerosols in the longwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 28. Radiation --&gt; Longwave Gases Longwave radiative properties of gases 28.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with gases End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 29. Turbulence Convection Atmosphere Convective Turbulence and Clouds 29.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of atmosphere convection and turbulence End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Mellor-Yamada" # "Holtslag-Boville" # "EDMF" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30. Turbulence Convection --&gt; Boundary Layer Turbulence Properties of the boundary layer turbulence scheme 30.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Boundary layer turbulence scheme name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "TKE prognostic" # "TKE diagnostic" # "TKE coupled with water" # "vertical profile of Kz" # "non-local diffusion" # "Monin-Obukhov similarity" # "Coastal Buddy Scheme" # "Coupled with convection" # "Coupled with gravity waves" # "Depth capped at cloud base" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30.2. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Boundary layer turbulence scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.3. Closure Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Boundary layer turbulence scheme closure order End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 30.4. Counter Gradient Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Uses boundary layer turbulence scheme counter gradient End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 31. Turbulence Convection --&gt; Deep Convection Properties of the deep convection scheme 31.1. 
Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Deep convection scheme name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mass-flux" # "adjustment" # "plume ensemble" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31.2. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Deep convection scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CAPE" # "bulk" # "ensemble" # "CAPE/WFN based" # "TKE/CIN based" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31.3. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Deep convection scheme method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "vertical momentum transport" # "convective momentum transport" # "entrainment" # "detrainment" # "penetrative convection" # "updrafts" # "downdrafts" # "radiative effect of anvils" # "re-evaporation of convective precipitation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31.4. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical processes taken into account in the parameterisation of deep convection End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "tuning parameter based" # "single moment" # "two moment" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31.5. Microphysics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 32. Turbulence Convection --&gt; Shallow Convection Properties of the shallow convection scheme 32.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Shallow convection scheme name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mass-flux" # "cumulus-capped boundary layer" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 32.2. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N shallow convection scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! 
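# --- Hedged note on optional vs required properties ---
# Properties marked "Is Required: FALSE" with Cardinality 0.1 or 0.N (e.g. the shallow
# convection scheme_name, 32.1 above, or the deep convection microphysics, 31.5 above)
# may be left as TODO if they do not apply; required (1.1 / 1.N) properties such as
# scheme_method in this cell (32.3 below) must be filled in, e.g. (hypothetical):
# DOC.set_value("same as deep (unified)")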
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "same as deep (unified)" # "included in boundary layer turbulence" # "separate diagnosis" # TODO - please enter value(s) Explanation: 32.3. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 shallow convection scheme method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "convective momentum transport" # "entrainment" # "detrainment" # "penetrative convection" # "re-evaporation of convective precipitation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 32.4. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical processes taken into account in the parameterisation of shallow convection End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "tuning parameter based" # "single moment" # "two moment" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 32.5. Microphysics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Microphysics scheme for shallow convection End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 33. Microphysics Precipitation Large Scale Cloud Microphysics and Precipitation 33.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of large scale cloud microphysics and precipitation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 34. Microphysics Precipitation --&gt; Large Scale Precipitation Properties of the large scale precipitation scheme 34.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name of the large scale precipitation parameterisation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "liquid rain" # "snow" # "hail" # "graupel" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 34.2. Hydrometeors Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Precipitating hydrometeors taken into account in the large scale precipitation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics Properties of the large scale cloud microphysics scheme 35.1. 
Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name of the microphysics parameterisation scheme used for large scale clouds. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mixed phase" # "cloud droplets" # "cloud ice" # "ice nucleation" # "water vapour deposition" # "effect of raindrops" # "effect of snow" # "effect of graupel" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 35.2. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Large scale cloud microphysics processes End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 36. Cloud Scheme Characteristics of the cloud scheme 36.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of the atmosphere cloud scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 36.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the cloud scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "atmosphere_radiation" # "atmosphere_microphysics_precipitation" # "atmosphere_turbulence_convection" # "atmosphere_gravity_waves" # "atmosphere_solar" # "atmosphere_volcano" # "atmosphere_cloud_simulator" # TODO - please enter value(s) Explanation: 36.3. Atmos Coupling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Atmosphere components that are linked to the cloud scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 36.4. Uses Separate Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "entrainment" # "detrainment" # "bulk cloud" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 36.5. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Processes included in the cloud scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 36.6. Prognostic Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the cloud scheme a prognostic scheme? End of explanation # PROPERTY ID - DO NOT EDIT ! 
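# --- Hedged illustration (hypothetical values) ---
# prognostic_scheme (36.6, above) and diagnostic_scheme (36.7, this cell) are two separate
# required BOOLEANs; for a purely prognostic cloud scheme one might enter True for 36.6
# and, in this cell:
# DOC.set_value(False)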
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 36.7. Diagnostic Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the cloud scheme a diagnostic scheme? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud amount" # "liquid" # "ice" # "rain" # "snow" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 36.8. Prognostic Variables Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List the prognostic variables used by the cloud scheme, if applicable. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "random" # "maximum" # "maximum-random" # "exponential" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 37. Cloud Scheme --&gt; Optical Cloud Properties Optical cloud properties 37.1. Cloud Overlap Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Method for taking into account overlapping of cloud layers End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.2. Cloud Inhomogeneity Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Method for taking into account cloud inhomogeneity End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # TODO - please enter value(s) Explanation: 38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution Sub-grid scale water distribution 38.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 38.2. Function Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution function name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 38.3. Function Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution function type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "coupled with deep" # "coupled with shallow" # "not coupled with convection" # TODO - please enter value(s) Explanation: 38.4. 
Convection Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Sub-grid scale water distribution coupling with convection End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # TODO - please enter value(s) Explanation: 39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution Sub-grid scale ice distribution 39.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 39.2. Function Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution function name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 39.3. Function Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution function type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "coupled with deep" # "coupled with shallow" # "not coupled with convection" # TODO - please enter value(s) Explanation: 39.4. Convection Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Sub-grid scale ice distribution coupling with convection End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 40. Observation Simulation Characteristics of observation simulation 40.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of observation simulator characteristics End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "no adjustment" # "IR brightness" # "visible optical depth" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 41. Observation Simulation --&gt; Isscp Attributes ISSCP Characteristics 41.1. Top Height Estimation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Cloud simulator ISSCP top height estimation method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "lowest altitude level" # "highest altitude level" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 41.2. 
Top Height Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator ISSCP top height direction End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Inline" # "Offline" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 42. Observation Simulation --&gt; Cosp Attributes CFMIP Observational Simulator Package attributes 42.1. Run Configuration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP run configuration End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 42.2. Number Of Grid Points Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP number of grid points End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 42.3. Number Of Sub Columns Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP number of sub-columns used to simulate sub-grid variability End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 42.4. Number Of Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP number of levels End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 43. Observation Simulation --&gt; Radar Inputs Characteristics of the cloud radar simulator 43.1. Frequency Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar frequency (Hz) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "surface" # "space borne" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 43.2. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 43.3. Gas Absorption Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar uses gas absorption End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 43.4. 
Effective Radius Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar uses effective radius End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "ice spheres" # "ice non-spherical" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 44. Observation Simulation --&gt; Lidar Inputs Characteristics of the cloud lidar simulator 44.1. Ice Types Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator lidar ice type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "max" # "random" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 44.2. Overlap Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Cloud simulator lidar overlap End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 45. Gravity Waves Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources. 45.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of gravity wave parameterisation in the atmosphere End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Rayleigh friction" # "Diffusive sponge layer" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 45.2. Sponge Layer Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sponge layer in the upper levels in order to avoid gravity wave reflection at the top. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "continuous spectrum" # "discrete spectrum" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 45.3. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background wave distribution End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "effect on drag" # "effect on lifting" # "enhanced topography" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 45.4. Subgrid Scale Orography Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Subgrid scale orography effects taken into account. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 46. Gravity Waves --&gt; Orographic Gravity Waves Gravity waves generated due to the presence of orography 46.1. 
Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the orographic gravity wave scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "linear mountain waves" # "hydraulic jump" # "envelope orography" # "low level flow blocking" # "statistical sub-grid scale variance" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 46.2. Source Mechanisms Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Orographic gravity wave source mechanisms End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "non-linear calculation" # "more than two cardinal directions" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 46.3. Calculation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Orographic gravity wave calculation method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "linear theory" # "non-linear theory" # "includes boundary layer ducting" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 46.4. Propagation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Orographic gravity wave propogation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "total wave" # "single wave" # "spectral" # "linear" # "wave saturation vs Richardson number" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 46.5. Dissipation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Orographic gravity wave dissipation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 47. Gravity Waves --&gt; Non Orographic Gravity Waves Gravity waves generated by non-orographic processes. 47.1. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the non-orographic gravity wave scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "convection" # "precipitation" # "background spectrum" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 47.2. Source Mechanisms Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Non-orographic gravity wave source mechanisms End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "spatially dependent" # "temporally dependent" # TODO - please enter value(s) Explanation: 47.3. Calculation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Non-orographic gravity wave calculation method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "linear theory" # "non-linear theory" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 47.4. Propagation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Non-orographic gravity wave propogation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "total wave" # "single wave" # "spectral" # "linear" # "wave saturation vs Richardson number" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 47.5. Dissipation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Non-orographic gravity wave dissipation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 48. Solar Top of atmosphere solar insolation characteristics 48.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of solar insolation of the atmosphere End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "SW radiation" # "precipitating energetic particles" # "cosmic rays" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 49. Solar --&gt; Solar Pathways Pathways for solar forcing of the atmosphere 49.1. Pathways Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Pathways for the solar forcing of the atmosphere model domain End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "transient" # TODO - please enter value(s) Explanation: 50. Solar --&gt; Solar Constant Solar constant and top of atmosphere insolation characteristics 50.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of the solar constant. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 50.2. Fixed Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If the solar constant is fixed, enter the value of the solar constant (W m-2). End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 50.3. Transient Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 solar constant transient characteristics (W m-2) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "transient" # TODO - please enter value(s) Explanation: 51. Solar --&gt; Orbital Parameters Orbital parameters and top of atmosphere insolation characteristics 51.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of orbital parameters End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 51.2. Fixed Reference Date Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Reference date for fixed orbital parameters (yyyy) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 51.3. Transient Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of transient orbital parameters End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Berger 1978" # "Laskar 2004" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 51.4. Computation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method used for computing orbital parameters. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 52. Solar --&gt; Insolation Ozone Impact of solar insolation on stratospheric ozone 52.1. Solar Ozone Impact Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does top of atmosphere insolation impact on stratospheric ozone? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.volcanos.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 53. Volcanos Characteristics of the implementation of volcanoes 53.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of the implementation of volcanic effects in the atmosphere End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "high frequency solar constant anomaly" # "stratospheric aerosols optical thickness" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 54. Volcanos --&gt; Volcanoes Treatment Treatment of volcanoes in the atmosphere 54.1. 
Volcanoes Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How volcanic effects are modeled in the atmosphere. End of explanation
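The cells in this row all repeat the same DOC.set_id / DOC.set_value pattern against the ES-DOC helper object, which only exists inside the generated CMIP6 notebooks. As a minimal, self-contained sketch of that calling convention (the MockDoc class, its choices table, and the ENUM validation are assumptions made for illustration; the real pyesdoc helper may behave differently), one of the TODO cells above could be exercised like this, using one of the valid choices listed in the volcanoes cell:

class MockDoc:
    # Stand-in for the ES-DOC notebook helper: remembers the current property id
    # and stores values, rejecting ENUM values that are not in the listed choices.
    def __init__(self):
        self.current_id = None
        self.values = {}
        self.choices = {
            "cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation": [
                "high frequency solar constant anomaly",
                "stratospheric aerosols optical thickness",
                "Other: [Please specify]",
            ],
        }

    def set_id(self, property_id):
        self.current_id = property_id

    def set_value(self, value):
        allowed = self.choices.get(self.current_id)
        if allowed is not None and value not in allowed:
            raise ValueError(f"{value!r} is not a valid choice for {self.current_id}")
        self.values[self.current_id] = value

DOC = MockDoc()
DOC.set_id("cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation")
DOC.set_value("stratospheric aerosols optical thickness")  # one of the listed choices
print(DOC.values)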
6,434
Given the following text description, write Python code to implement the functionality described below step by step Description: Intro This is the first workshop for Informática 3, covering Python basics, plotting, and fitting. Use this notebook to solve the workshop and, please, use as many cells as you need, but use the cells marked as the first cell for each exercise, in order, so that it is easier to grade. Step1: Plots Produce the plots from the attached PDF graficos.pdf; the plots must include all the details (colors, grids, and multi-plot arrangement). Step2: Fits Linear fits Using the linregress function, fit the following data, produce a plot with the original data and the linear regression, and make the label of the fit be the equation of the line you found, including the value of the correlation coefficient r. Step3: Linearized fits Using linearization and the linregress function, fit the following data; remember to recover the original factors and data once the linear fit is done, and make the plot side by side with the linear fit and the fit of the original data. Step4: Polynomial fits Using the polyfit function, fit the following polynomials, produce a plot with the original data and the polynomial fit, and write out the resulting polynomial for each case. Step5: Arbitrary fits Using the curve_fit function, fit the following functions; remember to include a plot of the original data as well as the fit obtained, and include in the plots a reference to the objective function used as well as the optimal parameters obtained from the fit.
Python Code: # Ejecute esta celda para importar las librerías y funciones necesarias %matplotlib notebook from IPython.display import set_matplotlib_formats set_matplotlib_formats('png', 'pdf') import numpy as np import matplotlib.pyplot as plt from numpy import polyfit, polyval from scipy.stats import linregress from scipy.optimize import curve_fit Explanation: Intro Este es el primer taller de informática 3, que cubre conceptos básicos de Python, gráficos y ajustes, use este notebook para resolver el taller y, por favor use tantas celdas como necesite pero use las celdas marcadas como primera celda para cada ejercicio, en orden, para que sea más fácil calificar. End of explanation # Empiece a construir los gráficos en esta celda, use una celda por gráfico Explanation: Gráficos Realice los gráficos del pdf adjunto graficos.pdf, los gráficos deben tener todos los detalles (colores, grillas, y ordenamiento multigráfico). End of explanation x = np.linspace(0, 10) y = 3.0 * x - 5 y += np.random.normal(0, 0.1, size=y.shape) # Empiece a construir el gráfico y el ajuste en esta celda x = np.linspace(0, 10) y = 3.0 * x - 5 y += np.random.normal(0, 1.0, size=y.shape) # Empiece a construir el gráfico y el ajuste en esta celda x = np.linspace(0, 10) y = 3.0 * x - 5 y += np.random.normal(0, 10.0, size=y.shape) # Empiece a construir el gráfico y el ajuste en esta celda Explanation: Ajustes Ajustes lineales Utilizando la función linregress, ajuste los siguíentes datos, produzca un gráfico con los datos originales y la regresión lineal, haga que el label del ajuste sea la ecuación de la recta que encontró incluyendo el valor del coeficiente de correlación r. End of explanation x = np.linspace(0, 10) y = 3.0 * x ** 2 y += np.random.normal(0, 0.1, size=y.shape) # Empiece a construir el gráfico y el ajuste en esta celda x = np.linspace(0, 10) y = 3.0 * np.sqrt(x) y += y * np.random.normal(0, 0.05, size=y.shape) # Empiece a construir el gráfico y el ajuste en esta celda x = np.linspace(0, 10) y = 3.0 * x ** 0.75 y += np.random.normal(0, 0.1, size=y.shape) # Empiece a construir el gráfico y el ajuste en esta celda x = np.linspace(0, 10) y = np.sqrt(5 * x) y += y * np.random.normal(0, 0.05, size=y.shape) # Empiece a construir el gráfico y el ajuste en esta celda x = np.linspace(0.1, 10) y = np.log(5 * x) y += y * np.random.normal(0, 0.05, size=y.shape) # Empiece a construir el gráfico y el ajuste en esta celda Explanation: Ajustes linealizados Usando linealización y la función linregress ajuste los siguientes datos, recuerde recuperar los factores y datos originales al terminar el ajuste lineal, realize el gráfico lado a lado con el ajuste lineal y el ajuste de los datos originales. 
End of explanation x = np.linspace(-10, 10) y = polyval(np.random.normal(0, 3, size=2), x) y += np.random.normal(0, 0.5, size=y.shape) # Empiece a construir el gráfico y el ajuste en esta celda x = np.linspace(-10, 10) y = polyval(np.random.normal(0, 3, size=3), x) y += np.random.normal(0, 0.5, size=y.shape) # Empiece a construir el gráfico y el ajuste en esta celda x = np.linspace(-50, 50, 1000) y = polyval(np.random.normal(0, 10, size=4) / np.exp(np.linspace(1, 10, 4))[::-1], x) y += np.random.normal(0, 0.5, size=y.shape) # Empiece a construir el gráfico y el ajuste en esta celda x = np.linspace(-50, 50, 1000) y = polyval(np.random.normal(0, 10, size=5) / np.exp(np.linspace(1, 10, 5))[::-1], x) y += np.random.normal(0, 0.5, size=y.shape) # Empiece a construir el gráfico y el ajuste en esta celda Explanation: Ajustes polinomiales Usando la función polyfit ajuste los siguientes polinomios, produzca un gráfico con los datos originales y el ajuste del polinomio, escriba el polinomio resultante para cada caso. End of explanation x = np.linspace(-50, 50, 1000) y = np.exp(-(x - 10) ** 2 / 3) y += np.random.normal(0, 0.01, size=y.shape) # Empiece a construir el gráfico y el ajuste en esta celda x = np.linspace(-50, 50, 1000) y = np.exp(-(x - 10) ** 2 / 3) + np.exp(-(x + 20) ** 2 / 3) y += np.random.normal(0, 0.01, size=y.shape) # Empiece a construir el gráfico y el ajuste en esta celda x = np.linspace(-50, 50, 1000) y = 4 * np.exp(-(x - 10) ** 2 / 3) + 0.5 * np.exp(-(x + 20) ** 2 / 3) y += np.random.normal(0, 0.01, size=y.shape) # Empiece a construir el gráfico y el ajuste en esta celda x = np.linspace(-50, 50, 1000) y = np.exp(-(x - 10) ** 2 / 3) + np.exp(-(x + 20) ** 2 / 3) + 0.1 * np.sqrt(np.abs(x)) y += np.random.normal(0, 0.01, size=y.shape) # Empiece a construir el gráfico y el ajuste en esta celda x = np.linspace(-10, 10, 1000) y = 100 * np.exp(-(x) ** 2 / 3) + x ** 2 y += np.random.normal(0, 0.01, size=y.shape) # Empiece a construir el gráfico y el ajuste en esta celda Explanation: Ajustes arbitrarios Usando la función curve_fit ajuste las siguientes funciones, recuerde incluir un gráfico de los datos originales, así como el ajuste obtenido, e incluya en los gráficos una referencia la la función objetivo usada así como los parámetros optimos obtenidos con el ajuste. End of explanation
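The data-generation cells above leave the fits themselves as placeholders. As one hedged sketch of the linearized-fit exercise (the choice of the y = 3 * x ** 0.75 case, the log-log transform, and the English plot labels are assumptions made for illustration, not part of the assignment text), the idea of fitting in log space and then recovering the original parameters could look like this:

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import linregress

# Same data generation as in the assignment cell for the power-law case.
x = np.linspace(0, 10)
y = 3.0 * x ** 0.75
y += np.random.normal(0, 0.1, size=y.shape)

# Linearize: log(y) = log(a) + b * log(x); drop x = 0 where the log is undefined.
mask = (x > 0) & (y > 0)
logx, logy = np.log(x[mask]), np.log(y[mask])
fit = linregress(logx, logy)
a, b = np.exp(fit.intercept), fit.slope  # recover the original parameters

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(logx, logy, ".", label="linearized data")
ax1.plot(logx, fit.intercept + fit.slope * logx, label=f"log-space fit, r={fit.rvalue:.3f}")
ax1.set(xlabel="log(x)", ylabel="log(y)")
ax1.legend()
ax2.plot(x, y, ".", label="original data")
ax2.plot(x[mask], a * x[mask] ** b, label=f"y = {a:.2f} x^{b:.2f}")
ax2.set(xlabel="x", ylabel="y")
ax2.legend()
plt.show()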
6,435
Given the following text description, write Python code to implement the functionality described below step by step Description: DAC-ADC Pmod Examples using Matplotlib and Widget Contents Pmod DAC-ADC Feedback Tracking the IO Error Error plot with Matplotlib Widget controlled plot Pmod DAC-ADC Feedback This example shows how to use the PmodDA4 DAC and the PmodAD2 ADC on the board, using the board's two Pmod interfaces. The notebook then compares the DAC output to the ADC input and tracks the errors. The errors are plotted using Matplotlib and an XKCD version of the plot is produced (for fun). Finally a slider widget is introduced to control the number of samples displayed in the error plot. Note Step1: 2. Program the ZYNQ PL Step2: 3. Instantiate the Pmod peripherals as Python objects Step3: 4. Write to DAC, read from ADC, print result Step4: Contents Tracking the IO Error Report DAC-ADC Pmod Loopback Measurement Error. Step5: Contents Error plot with Matplotlib This example shows plots in notebook (rather than in separate window). Step6: Contents Widget controlled plot In this example, we extend the IO plot with a slider widget to control the number of samples appearing in the output plot. We use the ipwidgets library and the simple interact() method to launch a slider bar. The interact function (ipywidgets.interact) automatically creates user interface (UI) controls for exploring code and data interactively. It is the easiest way to get started using IPython’s widgets. For more details see Using ipwidgets interact()
Python Code: from pynq.overlays.base import BaseOverlay from pynq.lib import Pmod_ADC, Pmod_DAC Explanation: DAC-ADC Pmod Examples using Matplotlib and Widget Contents Pmod DAC-ADC Feedback Tracking the IO Error Error plot with Matplotlib Widget controlled plot Pmod DAC-ADC Feedback This example shows how to use the PmodDA4 DAC and the PmodAD2 ADC on the board, using the board's two Pmod interfaces. The notebook then compares the DAC output to the ADC input and tracks the errors. The errors are plotted using Matplotlib and an XKCD version of the plot is produced (for fun). Finally a slider widget is introduced to control the number of samples displayed in the error plot. Note: The output of the DAC (pin A) must be connected with a wire to the input of the ADC (V1 input). 1. Import hardware libraries and classes End of explanation ol = BaseOverlay("base.bit") Explanation: 2. Program the ZYNQ PL End of explanation dac = Pmod_DAC(ol.PMODB) adc = Pmod_ADC(ol.PMODA) Explanation: 3. Instantiate the Pmod peripherals as Python objects End of explanation dac.write(0.35) sample = adc.read() print(sample) Explanation: 4. Write to DAC, read from ADC, print result End of explanation from math import ceil from time import sleep import numpy as np import matplotlib.pyplot as plt from pynq.lib import Pmod_ADC, Pmod_DAC from pynq.overlays.base import BaseOverlay ol = BaseOverlay("base.bit") dac = Pmod_DAC(ol.PMODB) adc = Pmod_ADC(ol.PMODA) delay = 0.0 values = np.linspace(0, 2, 20) samples = [] for value in values: dac.write(value) sleep(delay) sample = adc.read() samples.append(sample[0]) print('Value written: {:4.2f}\tSample read: {:4.2f}\tError: {:+4.4f}'. format(value, sample[0], sample[0]-value)) Explanation: Contents Tracking the IO Error Report DAC-ADC Pmod Loopback Measurement Error. End of explanation %matplotlib inline X = np.arange(len(values)) plt.bar(X + 0.0, values, facecolor='blue', edgecolor='white', width=0.5, label="Written_to_DAC") plt.bar(X + 0.25, samples, facecolor='red', edgecolor='white', width=0.5, label="Read_from_ADC") plt.title('DAC-ADC Linearity') plt.xlabel('Sample_number') plt.ylabel('Volts') plt.legend(loc='upper left', frameon=False) plt.show() Explanation: Contents Error plot with Matplotlib This example shows plots in notebook (rather than in separate window). End of explanation from math import ceil from time import sleep import numpy as np import matplotlib.pyplot as plt %matplotlib inline from ipywidgets import interact import ipywidgets as widgets ol = BaseOverlay("base.bit") dac = Pmod_DAC(ol.PMODB) adc = Pmod_ADC(ol.PMODA) def capture_samples(nmbr_of_samples): delay = 0.0 values = np.linspace(0, 2, nmbr_of_samples) samples = [] for value in values: dac.write(value) sleep(delay) sample = adc.read() samples.append(sample[0]) X = np.arange(nmbr_of_samples) plt.bar(X + 0.0, values[:nmbr_of_samples+1], facecolor='blue', edgecolor='white', width=0.5, label="Written_to_DAC") plt.bar(X + 0.25, samples[:nmbr_of_samples+1], facecolor='red', edgecolor='white', width=0.5, label="Read_from_ADC") plt.title('DAC-ADC Linearity') plt.xlabel('Sample_number') plt.ylabel('Volts') plt.legend(loc='upper left', frameon=False) plt.show() _ = interact(capture_samples, nmbr_of_samples=widgets.IntSlider( min=5, max=30, step=5, value=10, continuous_update=False)) Explanation: Contents Widget controlled plot In this example, we extend the IO plot with a slider widget to control the number of samples appearing in the output plot. 
We use the ipwidgets library and the simple interact() method to launch a slider bar. The interact function (ipywidgets.interact) automatically creates user interface (UI) controls for exploring code and data interactively. It is the easiest way to get started using IPython’s widgets. For more details see Using ipwidgets interact() End of explanation
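Since the interact() cell above needs the PYNQ board and the two Pmods to run, a hardware-free sketch of just the widget pattern can be useful for trying the slider anywhere; the fake_capture_samples function and its simulated ADC noise are assumptions standing in for the real capture_samples:

import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import interact, IntSlider

def fake_capture_samples(nmbr_of_samples):
    # Simulated version of capture_samples: no DAC/ADC, just values plus small noise.
    values = np.linspace(0, 2, nmbr_of_samples)
    samples = values + np.random.normal(0, 0.02, nmbr_of_samples)
    X = np.arange(nmbr_of_samples)
    plt.bar(X + 0.0, values, width=0.4, label="written")
    plt.bar(X + 0.4, samples, width=0.4, label="read back")
    plt.xlabel("Sample_number")
    plt.ylabel("Volts")
    plt.legend()
    plt.show()

interact(fake_capture_samples,
         nmbr_of_samples=IntSlider(min=5, max=30, step=5, value=10,
                                   continuous_update=False))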
6,436
Given the following text description, write Python code to implement the functionality described below step by step Description: <a href="https Step1: Data Step2: Model 1 Step3: Posterior predictive check Step4: Model 2 (departmental-specific offset) Step6: Poisson regression We now show we can emulate binomial regresison using 2 poisson regressions, following sec 11.3.3 of rethinking. We use a simplified model that just predicts outcomes, and has no features (just an offset term). Step7: Beta-binomial regression Sec 12.1.1 of rethinking. Code from snippet 12.2 of Du Phan's site Step8: Mixed effects model with joint prior This code is from https Step9: PGMs Step10: Causal inference with the latent DAG This is based on sec 6.3 (collider bias) of the Rethinking book. Code is from Du Phan, code snippet 6.25. We change the names to match our current example Step11: Logistic regression version We modify the scenario to match the UC Berkeley admissions scenario (with binary data) in sec 11.1.4. Step12: Counterfactual plot Similar to p140
Python Code: !pip install -q numpyro@git+https://github.com/pyro-ppl/numpyro !pip install -q arviz import arviz as az az.__version__ !pip install causalgraphicalmodels #!pip install -U daft import numpy as np np.set_printoptions(precision=3) import matplotlib.pyplot as plt import math import os import warnings import pandas as pd import jax print("jax version {}".format(jax.__version__)) print("jax backend {}".format(jax.lib.xla_bridge.get_backend().platform)) import jax.numpy as jnp from jax import random, vmap from jax.scipy.special import expit rng_key = random.PRNGKey(0) rng_key, rng_key_ = random.split(rng_key) import numpyro import numpyro.distributions as dist from numpyro.distributions import constraints from numpyro.distributions.transforms import AffineTransform from numpyro.diagnostics import hpdi, print_summary from numpyro.infer import Predictive from numpyro.infer import MCMC, NUTS from numpyro.infer import SVI, Trace_ELBO, init_to_value from numpyro.infer.autoguide import AutoLaplaceApproximation import numpyro.optim as optim import daft from causalgraphicalmodels import CausalGraphicalModel from sklearn.preprocessing import StandardScaler n = jax.local_device_count() print(n) Explanation: <a href="https://colab.research.google.com/github/always-newbie161/pyprobml/blob/issue_hermes78/notebooks/logreg_ucb_admissions_numpyro.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Binomial logistic regression for UCB admissions We illustrate binary logistic regression on 2 discrete inputs using the example in sec 11.1.4 of Statistical Rethinking ed 2. The numpyro code is from Du Phan's site End of explanation url = "https://raw.githubusercontent.com/fehiepsi/rethinking-numpyro/master/data/UCBadmit.csv" UCBadmit = pd.read_csv(url, sep=";") d = UCBadmit display(d) print(d.to_latex(index=False)) dat_list = dict( admit=d.admit.values, applications=d.applications.values, gid=(d["applicant.gender"] != "male").astype(int).values, ) dat_list["dept_id"] = jnp.repeat(jnp.arange(6), 2) print(dat_list) # extract number of applicaitons for dept 2 (C) d.applications[dat_list["dept_id"].copy() == 2] d.applications[dat_list["dept_id"].copy() == 2].sum() # application rate per department pg = jnp.stack( list( map( lambda k: jnp.divide( d.applications[dat_list["dept_id"].copy() == k].values, d.applications[dat_list["dept_id"].copy() == k].sum(), ), range(6), ) ), axis=0, ).T pg = pd.DataFrame(pg, index=["male", "female"], columns=d.dept.unique()) display(pg.round(2)) print(pg.to_latex()) # admisions rate per department pg = jnp.stack( list( map( lambda k: jnp.divide( d.admit[dat_list["dept_id"].copy() == k].values, d.applications[dat_list["dept_id"].copy() == k].values, ), range(6), ) ), axis=0, ).T pg = pd.DataFrame(pg, index=["male", "female"], columns=d.dept.unique()) display(pg.round(2)) print(pg.to_latex()) Explanation: Data End of explanation dat_list = dict( admit=d.admit.values, applications=d.applications.values, gid=(d["applicant.gender"] != "male").astype(int).values, ) def model(gid, applications, admit=None): a = numpyro.sample("a", dist.Normal(0, 1.5).expand([2])) logit_p = a[gid] numpyro.sample("admit", dist.Binomial(applications, logits=logit_p), obs=admit) m11_7 = MCMC(NUTS(model), num_warmup=500, num_samples=500, num_chains=4) m11_7.run(random.PRNGKey(0), **dat_list) m11_7.print_summary(0.89) post = m11_7.get_samples() diff_a = post["a"][:, 0] - post["a"][:, 1] diff_p = expit(post["a"][:, 0]) - expit(post["a"][:, 1]) 
print_summary({"diff_a": diff_a, "diff_p": diff_p}, 0.89, False) Explanation: Model 1 End of explanation def ppc(mcmc_run, model_args): post = mcmc_run.get_samples() pred = Predictive(mcmc_run.sampler.model, post)(random.PRNGKey(2), **model_args) admit_pred = pred["admit"] admit_rate = admit_pred / d.applications.values plt.errorbar( range(1, 13), jnp.mean(admit_rate, 0), jnp.std(admit_rate, 0) / 2, fmt="o", c="k", mfc="none", ms=7, elinewidth=1, ) plt.plot(range(1, 13), jnp.percentile(admit_rate, 5.5, 0), "k+") plt.plot(range(1, 13), jnp.percentile(admit_rate, 94.5, 0), "k+") # draw lines connecting points from same dept for i in range(1, 7): x = 1 + 2 * (i - 1) # 1,3,5,7,9,11 y1 = d.admit.iloc[x - 1] / d.applications.iloc[x - 1] # male y2 = d.admit.iloc[x] / d.applications.iloc[x] # female plt.plot((x, x + 1), (y1, y2), "bo-") plt.annotate(d.dept.iloc[x], (x + 0.5, (y1 + y2) / 2 + 0.05), ha="center", color="royalblue") plt.gca().set(ylim=(0, 1), xticks=range(1, 13), ylabel="admit", xlabel="case") ppc(m11_7, {"gid": dat_list["gid"], "applications": dat_list["applications"]}) plt.savefig("admissions_ppc.pdf", dpi=300) plt.show() Explanation: Posterior predictive check End of explanation dat_list["dept_id"] = jnp.repeat(jnp.arange(6), 2) def model(gid, dept_id, applications, admit=None): a = numpyro.sample("a", dist.Normal(0, 1.5).expand([2])) delta = numpyro.sample("delta", dist.Normal(0, 1.5).expand([6])) logit_p = a[gid] + delta[dept_id] numpyro.sample("admit", dist.Binomial(applications, logits=logit_p), obs=admit) m11_8 = MCMC(NUTS(model), num_warmup=2000, num_samples=2000, num_chains=4) m11_8.run(random.PRNGKey(0), **dat_list) m11_8.print_summary(0.89) post = m11_8.get_samples() diff_a = post["a"][:, 0] - post["a"][:, 1] diff_p = expit(post["a"][:, 0]) - expit(post["a"][:, 1]) print_summary({"diff_a": diff_a, "diff_p": diff_p}, 0.89, False) data_dict = {"gid": dat_list["gid"], "dept_id": dat_list["dept_id"], "applications": dat_list["applications"]} ppc(m11_8, data_dict) # ppc(m11_8, dat_list) # must exclude 'admit' for predictive distribution plt.savefig("admissions_ppc_per_dept.pdf", dpi=300) plt.show() Explanation: Model 2 (departmental-specific offset) End of explanation # binomial model of overall admission probability def model(applications, admit): a = numpyro.sample("a", dist.Normal(0, 1.5)) logit_p = a numpyro.sample("admit", dist.Binomial(applications, logits=logit_p), obs=admit) m_binom = AutoLaplaceApproximation(model) svi = SVI( model, m_binom, optim.Adam(1), Trace_ELBO(), applications=d.applications.values, admit=d.admit.values, ) p_binom, losses = svi.run(random.PRNGKey(0), 1000) m_binom = MCMC(NUTS(model), num_warmup=500, num_samples=500, num_chains=4) m_binom.run(random.PRNGKey(0), d.applications.values, d.admit.values) m_binom.print_summary(0.95) logit = jnp.mean(m_binom.get_samples()["a"]) print(expit(logit)) def model(rej, admit): a1, a2 = numpyro.sample("a", dist.Normal(0, 1.5).expand([2])) lambda1 = jnp.exp(a1) lambda2 = jnp.exp(a2) numpyro.sample("rej", dist.Poisson(lambda2), obs=rej) numpyro.sample("admit", dist.Poisson(lambda1), obs=admit) m_pois = MCMC(NUTS(model), num_warmup=1000, num_samples=1000, num_chains=3) m_pois.run(random.PRNGKey(0), d.reject.values, d.admit.values) m_pois.print_summary(0.95) params = jnp.mean(m_pois.get_samples()["a"], 0) a1 = params[0] a2 = params[1] lam1 = jnp.exp(a1) lam2 = jnp.exp(a2) print([lam1, lam2]) print(lam1 / (lam1 + lam2)) Explanation: Poisson regression We now show we can emulate binomial regresison using 2 poisson 
regressions, following sec 11.3.3 of rethinking. We use a simplified model that just predicts outcomes, and has no features (just an offset term). End of explanation d = UCBadmit d["gid"] = (d["applicant.gender"] != "male").astype(int) dat = dict(A=d.admit.values, N=d.applications.values, gid=d.gid.values) def model(gid, N, A=None): a = numpyro.sample("a", dist.Normal(0, 1.5).expand([2])) phi = numpyro.sample("phi", dist.Exponential(1)) theta = numpyro.deterministic("theta", phi + 2) # shape pbar = expit(a[gid]) # mean numpyro.sample("A", dist.BetaBinomial(pbar * theta, (1 - pbar) * theta, N), obs=A) m12_1 = MCMC(NUTS(model), num_warmup=500, num_samples=500, num_chains=4) m12_1.run(random.PRNGKey(0), **dat) post = m12_1.get_samples() post["theta"] = Predictive(m12_1.sampler.model, post)(random.PRNGKey(1), **dat)["theta"] post["da"] = post["a"][:, 0] - post["a"][:, 1] print_summary(post, 0.89, False) post gid = 1 # draw posterior mean beta distribution x = jnp.linspace(0, 1, 101) pbar = jnp.mean(expit(post["a"][:, gid])) theta = jnp.mean(post["theta"]) plt.plot(x, jnp.exp(dist.Beta(pbar * theta, (1 - pbar) * theta).log_prob(x))) plt.gca().set(ylabel="Density", xlabel="probability admit", ylim=(0, 3)) # draw 50 beta distributions sampled from posterior for i in range(50): p = expit(post["a"][i, gid]) theta = post["theta"][i] plt.plot(x, jnp.exp(dist.Beta(p * theta, (1 - p) * theta).log_prob(x)), "k", alpha=0.2) plt.title("distribution of female admission rates") plt.savefig("admissions_betabinom_female_rate.pdf") plt.show() fig, ax = plt.subplots() labels = ["male", "female"] colors = ["b", "r"] for gid in [0, 1]: # draw posterior mean beta distribution x = jnp.linspace(0, 1, 101) pbar = jnp.mean(expit(post["a"][:, gid])) theta = jnp.mean(post["theta"]) y = jnp.exp(dist.Beta(pbar * theta, (1 - pbar) * theta).log_prob(x)) ax.plot(x, y, label=labels[gid], color=colors[gid]) ax.set_ylabel("Density") ax.set_xlabel("probability admit") ax.set_ylim(0, 3) # draw some beta distributions sampled from posterior for i in range(10): p = expit(post["a"][i, gid]) theta = post["theta"][i] y = jnp.exp(dist.Beta(p * theta, (1 - p) * theta).log_prob(x)) plt.plot(x, y, colors[gid], alpha=0.2) plt.title("distribution of admission rates") plt.legend() plt.savefig("admissions_betabinom_rates.pdf") plt.show() post = m12_1.get_samples() admit_pred = Predictive(m12_1.sampler.model, post)(random.PRNGKey(1), gid=dat["gid"], N=dat["N"])["A"] admit_rate = admit_pred / dat["N"] plt.scatter(range(1, 13), dat["A"] / dat["N"]) plt.errorbar( range(1, 13), jnp.mean(admit_rate, 0), jnp.std(admit_rate, 0) / 2, fmt="o", c="k", mfc="none", ms=7, elinewidth=1, ) plt.plot(range(1, 13), jnp.percentile(admit_rate, 5.5, 0), "k+") plt.plot(range(1, 13), jnp.percentile(admit_rate, 94.5, 0), "k+") plt.savefig("admissions_betabinom_post_pred.pdf") plt.show() Explanation: Beta-binomial regression Sec 12.1.1 of rethinking. 
Code from snippet 12.2 of Du Phan's site End of explanation from numpyro.examples.datasets import UCBADMIT, load_dataset def glmm(dept, male, applications, admit=None): v_mu = numpyro.sample("v_mu", dist.Normal(0, jnp.array([4.0, 1.0]))) sigma = numpyro.sample("sigma", dist.HalfNormal(jnp.ones(2))) L_Rho = numpyro.sample("L_Rho", dist.LKJCholesky(2, concentration=2)) scale_tril = sigma[..., jnp.newaxis] * L_Rho # non-centered parameterization num_dept = len(np.unique(dept)) z = numpyro.sample("z", dist.Normal(jnp.zeros((num_dept, 2)), 1)) v = jnp.dot(scale_tril, z.T).T logits = v_mu[0] + v[dept, 0] + (v_mu[1] + v[dept, 1]) * male if admit is None: # we use a Delta site to record probs for predictive distribution probs = expit(logits) numpyro.sample("probs", dist.Delta(probs), obs=probs) numpyro.sample("admit", dist.Binomial(applications, logits=logits), obs=admit) def run_inference(dept, male, applications, admit, rng_key): kernel = NUTS(glmm) mcmc = MCMC(kernel, num_warmup=500, num_samples=1000, num_chains=1) mcmc.run(rng_key, dept, male, applications, admit) return mcmc.get_samples() def print_results(header, preds, dept, male, probs): columns = ["Dept", "Male", "ActualProb", "Pred(p25)", "Pred(p50)", "Pred(p75)"] header_format = "{:>10} {:>10} {:>10} {:>10} {:>10} {:>10}" row_format = "{:>10.0f} {:>10.0f} {:>10.2f} {:>10.2f} {:>10.2f} {:>10.2f}" quantiles = jnp.quantile(preds, jnp.array([0.25, 0.5, 0.75]), axis=0) print("\n", header, "\n") print(header_format.format(*columns)) for i in range(len(dept)): print(row_format.format(dept[i], male[i], probs[i], *quantiles[:, i]), "\n") _, fetch_train = load_dataset(UCBADMIT, split="train", shuffle=False) dept, male, applications, admit = fetch_train() rng_key, rng_key_predict = random.split(random.PRNGKey(1)) zs = run_inference(dept, male, applications, admit, rng_key) pred_probs = Predictive(glmm, zs)(rng_key_predict, dept, male, applications)["probs"] header = "=" * 30 + "glmm - TRAIN" + "=" * 30 print_results(header, pred_probs, dept, male, admit / applications) # make plots fig, ax = plt.subplots(figsize=(8, 6), constrained_layout=True) ax.plot(range(1, 13), admit / applications, "o", ms=7, label="actual rate") ax.errorbar( range(1, 13), jnp.mean(pred_probs, 0), jnp.std(pred_probs, 0), fmt="o", c="k", mfc="none", ms=7, elinewidth=1, label=r"mean $\pm$ std", ) ax.plot(range(1, 13), jnp.percentile(pred_probs, 5, 0), "k+") ax.plot(range(1, 13), jnp.percentile(pred_probs, 95, 0), "k+") ax.set( xlabel="cases", ylabel="admit rate", title="Posterior Predictive Check with 90% CI", ) ax.legend() plt.savefig("ucbadmit_plot.pdf") Explanation: Mixed effects model with joint prior This code is from https://numpyro.readthedocs.io/en/latest/examples/ucbadmit.html. 
End of explanation # p344 dag = CausalGraphicalModel(nodes=["G", "D", "A"], edges=[("G", "D"), ("G", "A"), ("D", "A")]) out = dag.draw() display(out) out.render(filename="admissions_dag", format="pdf") # p345 dag = CausalGraphicalModel(nodes=["G", "D", "A"], edges=[("G", "D"), ("G", "A"), ("D", "A")], latent_edges=[("D", "A")]) out = dag.draw() display(out) out.render(filename="admissions_dag_hidden", format="pdf") Explanation: PGMs End of explanation N = 200 # number of samples b_GP = 1 # direct effect of G on P b_GC = 0 # direct effect of G on C b_PC = 1 # direct effect of P on C b_U = 2 # direct effect of U on P and C with numpyro.handlers.seed(rng_seed=1): U = 2 * numpyro.sample("U", dist.Bernoulli(0.5).expand([N])) - 1 G = numpyro.sample("G", dist.Normal().expand([N])) P = numpyro.sample("P", dist.Normal(b_GP * G + b_U * U)) C = numpyro.sample("C", dist.Normal(b_PC * P + b_GC * G + b_U * U)) df_gauss = pd.DataFrame({"C": C, "P": P, "G": G, "U": U}) def model_linreg(P, G, C): a = numpyro.sample("a", dist.Normal(0, 1)) b_PC = numpyro.sample("b_PC", dist.Normal(0, 1)) b_GC = numpyro.sample("b_GC", dist.Normal(0, 1)) sigma = numpyro.sample("sigma", dist.Exponential(1)) mu = a + b_PC * P + b_GC * G numpyro.sample("C", dist.Normal(mu, sigma), obs=C) data_gauss = {"P": df_gauss.P.values, "G": df_gauss.G.values, "C": df_gauss.C.values} m6_11 = AutoLaplaceApproximation(model_linreg) svi = SVI(model_linreg, m6_11, optim.Adam(0.3), Trace_ELBO(), **data_gauss) p6_11, losses = svi.run(random.PRNGKey(0), 1000) post = m6_11.sample_posterior(random.PRNGKey(1), p6_11, (1000,)) print_summary(post, 0.89, False) mcmc_run = MCMC(NUTS(model_linreg), num_warmup=200, num_samples=200, num_chains=4) mcmc_run.run(random.PRNGKey(0), **data) mcmc_run.print_summary(0.89) def model_linreg_hidden(P, G, U, C): a = numpyro.sample("a", dist.Normal(0, 1)) b_PC = numpyro.sample("b_PC", dist.Normal(0, 1)) b_GC = numpyro.sample("b_GC", dist.Normal(0, 1)) b_U = numpyro.sample("U", dist.Normal(0, 1)) sigma = numpyro.sample("sigma", dist.Exponential(1)) mu = a + b_PC * P + b_GC * G + b_U * U numpyro.sample("C", dist.Normal(mu, sigma), obs=C) m6_12 = AutoLaplaceApproximation(model_linreg_hidden) svi = SVI( model_linreg_hidden, m6_12, optim.Adam(1), Trace_ELBO(), P=d.P.values, G=d.G.values, U=d.U.values, C=d.C.values, ) p6_12, losses = svi.run(random.PRNGKey(0), 1000) post = m6_12.sample_posterior(random.PRNGKey(1), p6_12, (1000,)) print_summary(post, 0.89, False) Explanation: Causal inference with the latent DAG This is based on sec 6.3 (collider bias) of the Rethinking book. Code is from Du Phan, code snippet 6.25. We change the names to match our current example: P (parents) -> D (department), C (child) -> A (admit). 
Linear regression version End of explanation N = 200 # number of samples b_GP = 1 # direct effect of G on P b_GC = 0 # direct effect of G on C b_PC = 1 # direct effect of P on C b_U = 2 # direct effect of U on P and C with numpyro.handlers.seed(rng_seed=1): # U = 2 * numpyro.sample("U", dist.Bernoulli(0.5).expand([N])) - 1 U = numpyro.sample("U", dist.Normal().expand([N])) # G = numpyro.sample("G", dist.Normal().expand([N])) G = numpyro.sample("G", dist.Bernoulli(0.5).expand([N])) P = numpyro.sample("P", dist.Normal(b_GP * G + b_U * U)) # C = numpyro.sample("C", dist.Normal(b_PC * P + b_GC * G + b_U * U)) logits = b_PC * P + b_GC * G + b_U * U probs = expit(logits) C = numpyro.sample("C", dist.BernoulliProbs(probs)) df_binary = pd.DataFrame({"C": C, "G": G, "P": P, "U": U, "probs": probs}) display(df_binary.head(10)) def model_causal(C=None, G=None, P=None, U=None): U = numpyro.sample("U", dist.Normal(), obs=U) G = numpyro.sample("G", dist.Bernoulli(0.5), obs=G) P = numpyro.sample("P", dist.Normal(b_GP * G + b_U * U), obs=P) logits = b_PC * P + b_GC * G + b_U * U probs = expit(logits) C = numpyro.sample("C", dist.BernoulliProbs(probs), obs=C) return np.array([C, G, P, U]) def make_samples(C=None, G=None, P=None, U=None, nsamples=200): data_list = [] with numpyro.handlers.seed(rng_seed=0): for i in range(nsamples): out = model_causal(C, G, P, U) data_list.append(out) df = pd.DataFrame.from_records(data_list, columns=["C", "G", "P", "U"]) return df df_binary = make_samples() display(df_binary.head()) Cbar = df_binary["C"].values.mean() Gbar = df_binary["G"].values.mean() Pbar = df_binary["P"].values.mean() Ubar = df_binary["U"].values.mean() print([Cbar, Gbar, Pbar, Ubar]) print(b_GP * Gbar + b_U * Ubar) # expected Pbar N = len(df0) prob_admitted0 = np.sum(df0.C.values) / N prob_admitted1 = np.sum(df1.C.values) / N print([prob_admitted0, prob_admitted1]) def model_logreg(C=None, G=None, P=None): a = numpyro.sample("a", dist.Normal(0, 1)) b_PC = numpyro.sample("b_PC", dist.Normal(0, 0.1)) b_GC = numpyro.sample("b_GC", dist.Normal(0, 0.1)) logits = a + b_PC * P + b_GC * G numpyro.sample("C", dist.Bernoulli(logits=logits), obs=C) data_binary = {"P": df_binary.P.values, "G": df_binary.G.values, "C": df_binary.C.values} warmup = 1000 samples = 500 mcmc_run = MCMC(NUTS(model_logreg), num_warmup=warmup, num_samples=samples, num_chains=4) mcmc_run.run(random.PRNGKey(0), **data) mcmc_run.print_summary(0.89) Explanation: Logistic regression version We modify the scenario to match the UC Berkeley admissions scenario (with binary data) in sec 11.1.4. End of explanation # p(C | do(G), do(P)) Pfixed = 0 df0 = make_samples(G=0, P=Pfixed, nsamples=200) display(df0.head()) Cbar0 = df0["C"].values.mean() df1 = make_samples(G=1, P=Pfixed, nsamples=200) display(df1.head()) Cbar1 = df1["C"].values.mean() print([Cbar0, Cbar1]) sim_dat = dict(G=jnp.array([0, 1]), P=jnp.array(Pfixed)) post = mcmc_run.get_samples() pred = Predictive(model_logreg, post)(random.PRNGKey(22), **sim_dat) print(pred["C"].shape) print(np.mean(pred["C"], axis=0)) a_est = post["a"].mean() b_PC_est = post["b_PC"].mean() b_GC_est = post["b_GC"].mean() P = Pfixed G = np.array([0, 1]) logits = a_est + b_PC_est * P + b_GC_est * G np.set_printoptions(formatter={"float": lambda x: "{0:0.3f}".format(x)}) print(expit(logits)) pred Explanation: Counterfactual plot Similar to p140 End of explanation
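As a quick, model-free sanity check on the same admissions data used throughout this row, the aggregate, per-gender, and per-department admission rates can be computed directly with pandas; this is only a bookkeeping sketch (no numpyro involved) and reuses the CSV URL given above:

import pandas as pd

url = "https://raw.githubusercontent.com/fehiepsi/rethinking-numpyro/master/data/UCBadmit.csv"
d = pd.read_csv(url, sep=";")

# Overall admission rate, pooling every department and both genders.
overall = d.admit.sum() / d.applications.sum()

# Admission rate by gender (pooled over departments).
g = d.groupby("applicant.gender")[["admit", "applications"]].sum()
by_gender = g.admit / g.applications

# Admission rate by department and gender.
gd = d.groupby(["dept", "applicant.gender"])[["admit", "applications"]].sum()
by_dept_gender = (gd.admit / gd.applications).unstack()

print(f"overall admission rate: {overall:.3f}")
print(by_gender.round(3))
print(by_dept_gender.round(2))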
6,437
Given the following text description, write Python code to implement the functionality described below step by step Description: Introduction This IPython notebook illustrates how to perform blocking using Overlap blocker. First, we need to import the py_entitymatching package and other libraries as follows Step1: Then, read the (sample) input tables for blocking purposes. Step2: Ways To Do Overlap Blocking There are three different ways to do overlap blocking Step3: For the given two tables, we will assume that two persons with no sufficient overlap between their addresses do not refer to the same real world person. So, we apply overlap blocking on address. Specifically, we tokenize the address by word and include the tuple pairs if the addresses have at least 3 overlapping tokens. That is, we block all the tuple pairs that do not share at least 3 tokens in address. Step4: In the above, we used a word-level tokenizer. Overlap blocker also supports a q-gram based tokenizer and it can be used as follows Step5: Updating Stopwords Commands in the Overlap Blocker remove some stop words by default. You can avoid this by setting the rem_stop_words parameter to False Step6: You can check which stop words are getting removed like this Step7: You can update this stop word list (with some domain specific stop words) and do the blocking. Step8: Handling Missing Values If the input tuples have missing values in the blocking attribute, then they are ignored by default. You can set allow_missing_values to be True to include all possible tuple pairs with missing values. Step9: Block a Candidate Set To Produce a Reduced Set of Tuple Pairs Step10: In the above, we see that the candidate set produced after blocking over the input tables includes tuple pairs that have at least three tokens in overlap. Adding to that, we will assume that two persons with no overlap of their names cannot refer to the same person. So, we block the candidate set of tuple pairs on name. That is, we block all the tuple pairs that have no overlap of tokens. Step11: In the above, we saw that word level tokenization was used to tokenize the names. You can also use q-gram tokenization like this Step12: Handling Missing Values As we saw with block_tables, you can include all the possible tuple pairs with missing values by using the allow_missing parameter when blocking the candidate set with the updated set of stop words. Step13: Block Two tuples To Check If a Tuple Pair Would Get Blocked We can apply overlap blocking to a tuple pair to check if it is going to get blocked. For example, we can check if the first tuple from A and B will get blocked if we block on address.
Python Code: # Import py_entitymatching package import py_entitymatching as em import os import pandas as pd Explanation: Introduction This IPython notebook illustrates how to perform blocking using Overlap blocker. First, we need to import py_entitymatching package and other libraries as follows: End of explanation # Get the datasets directory datasets_dir = em.get_install_path() + os.sep + 'datasets' # Get the paths of the input tables path_A = datasets_dir + os.sep + 'person_table_A.csv' path_B = datasets_dir + os.sep + 'person_table_B.csv' # Read the CSV files and set 'ID' as the key attribute A = em.read_csv_metadata(path_A, key='ID') B = em.read_csv_metadata(path_B, key='ID') A.head() Explanation: Then, read the (sample) input tables for blocking purposes. End of explanation # Instantiate overlap blocker object ob = em.OverlapBlocker() Explanation: Ways To Do Overlap Blocking There are three different ways to do overlap blocking: Block two tables to produce a candidate set of tuple pairs. Block a candidate set of tuple pairs to typically produce a reduced candidate set of tuple pairs. Block two tuples to check if a tuple pair would get blocked. Block Tables to Produce a Candidate Set of Tuple Pairs End of explanation # Specify the tokenization to be 'word' level and set overlap_size to be 3. C1 = ob.block_tables(A, B, 'address', 'address', word_level=True, overlap_size=3, l_output_attrs=['name', 'birth_year', 'address'], r_output_attrs=['name', 'birth_year', 'address'], show_progress=False) # Display first 5 tuple pairs in the candidate set. C1.head() Explanation: For the given two tables, we will assume that two persons with no sufficient overlap between their addresses do not refer to the same real world person. So, we apply overlap blocking on address. Specifically, we tokenize the address by word and include the tuple pairs if the addresses have at least 3 overlapping tokens. That is, we block all the tuple pairs that do not share at least 3 tokens in address. End of explanation # Set the word_level to be False and set the value of q (using q_val) C2 = ob.block_tables(A, B, 'address', 'address', word_level=False, q_val=3, overlap_size=3, l_output_attrs=['name', 'birth_year', 'address'], r_output_attrs=['name', 'birth_year', 'address'], show_progress=False) # Display first 5 tuple pairs C2.head() Explanation: In the above, we used word-level tokenizer. Overlap blocker also supports q-gram based tokenizer and it can be used as follows: End of explanation # Set the parameter to remove stop words to False C3 = ob.block_tables(A, B, 'address', 'address', word_level=True, overlap_size=3, rem_stop_words=False, l_output_attrs=['name', 'birth_year', 'address'], r_output_attrs=['name', 'birth_year', 'address'], show_progress=False) # Display first 5 tuple pairs C3.head() Explanation: Updating Stopwords Commands in the Overlap Blocker removes some stop words by default. 
You can avoid this by specifying rem_stop_words parameter to False End of explanation ob.stop_words Explanation: You can check what stop words are getting removed like this: End of explanation # Include Franciso as one of the stop words ob.stop_words.append('francisco') ob.stop_words # Set the word level tokenizer to be True C4 = ob.block_tables(A, B, 'address', 'address', word_level=True, overlap_size=3, l_output_attrs=['name', 'birth_year', 'address'], r_output_attrs=['name', 'birth_year', 'address'], show_progress=False) C4.head() Explanation: You can update this stop word list (with some domain specific stop words) and do the blocking. End of explanation # Introduce some missing value A1 = em.read_csv_metadata(path_A, key='ID') A1.loc[0, 'address'] = pd.np.NaN # Set the word level tokenizer to be True C5 = ob.block_tables(A1, B, 'address', 'address', word_level=True, overlap_size=3, allow_missing=True, l_output_attrs=['name', 'birth_year', 'address'], r_output_attrs=['name', 'birth_year', 'address'], show_progress=False) len(C5) C5 Explanation: Handling Missing Values If the input tuples have missing values in the blocking attribute, then they are ignored by default. You can set allow_missing_values to be True to include all possible tuple pairs with missing values. End of explanation #Instantiate the overlap blocker ob = em.OverlapBlocker() Explanation: Block a Candidata Set To Produce Reduced Set of Tuple Pairs End of explanation # Specify the tokenization to be 'word' level and set overlap_size to be 1. C6 = ob.block_candset(C1, 'name', 'name', word_level=True, overlap_size=1, show_progress=False) C6 Explanation: In the above, we see that the candidate set produced after blocking over input tables include tuple pairs that have at least three tokens in overlap. Adding to that, we will assume that two persons with no overlap of their names cannot refer to the same person. So, we block the candidate set of tuple pairs on name. That is, we block all the tuple pairs that have no overlap of tokens. End of explanation # Specify the tokenization to be 'word' level and set overlap_size to be 1. C7 = ob.block_candset(C1, 'name', 'name', word_level=False, q_val= 3, overlap_size=1, show_progress=False) C7.head() Explanation: In the above, we saw that word level tokenization was used to tokenize the names. You can also use q-gram tokenization like this: End of explanation # Introduce some missing values A1.loc[2, 'name'] = pd.np.NaN C8 = ob.block_candset(C5, 'name', 'name', word_level=True, overlap_size=1, allow_missing=True, show_progress=False) Explanation: Handling Missing Values As we saw with block_tables, you can include all the possible tuple pairs with the missing values using allow_missing parameter block the candidate set with the updated set of stop words. End of explanation # Display the first tuple from table A A.loc[[0]] # Display the first tuple from table B B.loc[[0]] # Instantiate Attr. Equivalence Blocker ob = em.OverlapBlocker() # Apply blocking to a tuple pair from the input tables on zipcode and get blocking status status = ob.block_tuples(A.loc[0], B.loc[0],'address', 'address', overlap_size=1) # Print the blocking status print(status) Explanation: Block Two tuples To Check If a Tuple Pair Would Get Blocked We can apply overlap blocking to a tuple pair to check if it is going to get blocked. For example, we can check if the first tuple from A and B will get blocked if we block on address. End of explanation
6,438
Given the following text description, write Python code to implement the functionality described below step by step Description: All the Linear Algebra You Need for AI The purpose of this notebook is to serve as an explanation of two crucial linear algebra operations used when coding neural networks Step1: PyTorch The fastai deep learning library uses PyTorch, a Python framework for dynamic neural networks with GPU acceleration, which was released by Facebook's AI team. PyTorch has two overlapping, yet distinct, purposes. As described in the PyTorch documentation Step2: Normalize Many machine learning algorithms behave better when the data is normalized, that is when the mean is 0 and the standard deviation is 1. We will subtract off the mean and standard deviation from our training set in order to normalize the data Step3: Note that for consistency (with the parameters we learn when training), we subtract the mean and standard deviation of our training set from our validation set. Step4: Look at the data In any sort of data science work, it's important to look at your data, to make sure you understand the format, how it's stored, what type of values it holds, etc. To make it easier to work with, let's reshape it into 2d images from the flattened 1d format. Helper methods Step5: Plots Step6: It's the digit 3! And that's stored in the y value Step7: We can look at part of an image Step8: The Most Important Machine Learning Concepts Functions, parameters, and training A function takes inputs and returns outputs. For instance, $f(x) = 3x + 5$ is an example of a function. If we input $2$, the output is $3\times 2 + 5 = 11$, or if we input $-1$, the output is $3\times -1 + 5 = 2$ Functions have parameters. The above function $f$ is $ax + b$, with parameters a and b set to $a=3$ and $b=5$. Machine learning is often about learning the best values for those parameters. For instance, suppose we have the data points on the chart below. What values should we choose for $a$ and $b$? <img src="images/sgd2.gif" alt="" style="width Step9: Neural networks We will use fastai's ImageClassifierData, which holds our training and validation sets and will provide batches of that data in a form ready for use by a PyTorch model. Step10: We will begin with the highest level abstraction Step11: Each input is a vector of size $28\times 28$ pixels and our output is of size $10$ (since there are 10 digits Step12: Fitting is the process by which the neural net learns the best parameters for the dataset. Step13: GPUs are great at handling lots of data at once (otherwise don't get performance benefit). We break the data up into batches, and that specifies how many samples from our dataset we want to send to the GPU at a time. The fastai library defaults to a batch size of 64. On each iteration of the training loop, the error on 1 batch of data will be calculated, and the optimizer will update the parameters based on that. An epoch is completed once each data sample has been used once in the training loop. Now that we have the parameters for our model, we can make predictions on our validation set. Step14: Let's see how some of our preditions look! Step15: These predictions are pretty good! Coding the Neural Net ourselves Recall that above we used PyTorch's Sequential to define a neural network with a linear layer, a non-linear layer (ReLU), and then another linear layer. Step16: It turns out that Linear is defined by a matrix multiplication and then an addition. Let's try defining this ourselves. 
This will allow us to see exactly where matrix multiplication is used (we will dive in to how matrix multiplication works in teh next section). Just as Numpy has np.matmul for matrix multiplication (in Python 3, this is equivalent to the @ operator), PyTorch has torch.matmul. PyTorch class has two things Step17: We create our neural net and the optimizer. (We will use the same loss and metrics from above). Step18: Now we can check our predictions Step19: what torch.matmul (matrix multiplication) is doing Now let's dig in to what we were doing with torch.matmul Step20: Broadcasting The term broadcasting describes how arrays with different shapes are treated during arithmetic operations. The term broadcasting was first used by Numpy, although is now used in other libraries such as Tensorflow and Matlab; the rules can vary by library. From the Numpy Documentation Step21: How are we able to do a > 0? 0 is being broadcast to have the same dimensions as a. Remember above when we normalized our dataset by subtracting the mean (a scalar) from the entire data set (a matrix) and dividing by the standard deviation (another scalar)? We were using broadcasting! Other examples of broadcasting with a scalar Step22: Broadcasting a vector to a matrix We can also broadcast a vector to a matrix Step23: Although numpy does this automatically, you can also use the broadcast_to method Step24: The numpy expand_dims method lets us convert the 1-dimensional array c into a 2-dimensional array (although one of those dimensions has value 1). Step25: Broadcasting Rules When operating on two arrays, Numpy/PyTorch compares their shapes element-wise. It starts with the trailing dimensions, and works its way forward. Two dimensions are compatible when they are equal, or one of them is 1 Arrays do not need to have the same number of dimensions. For example, if you have a $256 \times 256 \times 3$ array of RGB values, and you want to scale each color in the image by a different value, you can multiply the image by a one-dimensional array with 3 values. Lining up the sizes of the trailing axes of these arrays according to the broadcast rules, shows that they are compatible Step26: We get the same answer using torch.matmul Step27: The following is NOT matrix multiplication. What is it? Step28: From a machine learning perspective, matrix multiplication is a way of creating features by saying how much we want to weight each input column. Different features are different weighted averages of the input columns. The website matrixmultiplication.xyz provides a nice visualization of matrix multiplcation Draw a picture Step29: Homework Step30: Matrix-Matrix Products <img src="images/shop.png" alt="floating point" style="width
Python Code: %load_ext autoreload %autoreload 2 from fastai.imports import * from fastai.torch_imports import * from fastai.io import * Explanation: All the Linear Algebra You Need for AI The purpose of this notebook is to serve as an explanation of two crucial linear algebra operations used when coding neural networks: matrix multiplication and broadcasting. Introduction Matrix multiplication is a way of combining two matrices (involving multiplying and summing their entries in a particular way). Broadcasting refers to how libraries such as Numpy and PyTorch can perform operations on matrices/vectors with mismatched dimensions (in particular cases, with set rules). We will use broadcasting to show an alternative way of thinking about matrix multiplication from, different from the way it is standardly taught. In keeping with the fast.ai teaching philosophy of "the whole game", we will: first use a pre-defined class for our neural network then define the net ourselves to see where it uses matrix multiplication & broadcasting and finally dig into the details of how those operations work This is different from how most math courses are taught, where you have to learn all the individual elements before you can combine them (Harvard professor David Perkins call this elementitis), but it is similar to how topics like driving and baseball are taught. That is, you can start driving without knowing how an internal combustion engine works, and children begin playing baseball before they learn all the formal rules. <img src="images/demba_combustion_engine.png" alt="" style="width: 50%"/> <center> (source: Demba Ba and Arvind Nagaraj) </center> More linear algebra resources This notebook was originally created for a 40 minute talk I gave at the O'Reilly AI conference in San Francisco. If you want further resources for linear algebra, here are a few recommendations: 3Blue1Brown Essence of Linear Algebra videos about geometric intuition, which are gorgeous and great for visual learners Khan Academy Linear Algebra videos covering traditional linear algebra material Immersive linear algebra free online textbook with interactive graphics Chapter 2 of Ian Goodfellow's Deep Learning Book for a fairly academic take Computational Linear Algebra: a free, online fast.ai course, originally taught in the University of San Francisco's Masters in Analytics program. It includes a free online textbook and series of videos. This course is very different from standard linear algebra (which often focuses on how humans do matrix calculations), because it is about how to get computers to do matrix computations with speed and accuracy, and incorporates modern tools and algorithms. All the material is taught in Python and centered around solving practical problems such as removing the background from a surveillance video or implementing Google's PageRank search algorithm on Wikipedia pages. Our Tools We will be using the open source deep learning library, fastai, which provides high level abstractions and best practices on top of PyTorch. This is the highest level, simplest way to get started with deep learning. Please note that fastai requires Python 3 to function. It is currently in pre-alpha, so items may move around and more documentation will be added in the future. 
Imports End of explanation path = '../data/' import os os.makedirs(path, exist_ok=True) URL='http://deeplearning.net/data/mnist/' FILENAME='mnist.pkl.gz' def load_mnist(filename): return pickle.load(gzip.open(filename, 'rb'), encoding='latin-1') get_data(URL+FILENAME, path+FILENAME) ((x, y), (x_valid, y_valid), _) = load_mnist(path+FILENAME) Explanation: PyTorch The fastai deep learning library uses PyTorch, a Python framework for dynamic neural networks with GPU acceleration, which was released by Facebook's AI team. PyTorch has two overlapping, yet distinct, purposes. As described in the PyTorch documentation: <img src="images/what_is_pytorch.png" alt="pytorch" style="width: 80%"/> The neural network functionality of PyTorch is built on top of the Numpy-like functionality for fast matrix computations on a GPU. Although the neural network purpose receives way more attention, both are very useful. We'll implement a neural net from scratch today using PyTorch. Further learning: If you are curious to learn what dynamic neural networks are, you may want to watch this talk by Soumith Chintala, Facebook AI researcher and core PyTorch contributor. If you want to learn more PyTorch, you can try this introductory tutorial or this tutorial to learn by examples. About GPUs Graphical processing units (GPUs) allow for matrix computations to be done with much greater speed, as long as you have a library such as PyTorch that takes advantage of them. Advances in GPU technology in the last 10-20 years have been a key part of why neural networks are proving so much more powerful now than they did a few decades ago. You may own a computer that has a GPU which can be used. For the many people that either don't have a GPU (or have a GPU which can't be easily accessed by Python), there are a few differnt options: Don't use a GPU: For the sake of this tutorial, you don't have to use a GPU, although some computations will be slower. The only change needed to the code is to remove .cuda() wherever it appears. Use crestle, through your browser: Crestle is a service that gives you an already set up cloud service with all the popular scientific and deep learning frameworks already pre-installed and configured to run on a GPU in the cloud. It is easily accessed through your browser. New users get 10 hours and 1 GB of storage for free. After this, GPU usage is 34 cents per hour. I recommend this option to those who are new to AWS or new to using the console. Set up an AWS instance through your console: You can create an AWS instance with a GPU by following the steps in this fast.ai setup lesson.] AWS charges 90 cents per hour for this. Data About The Data Today we will be working with MNIST, a classic data set of hand-written digits. Solutions to this problem are used by banks to automatically recognize the amounts on checks, and by the postal service to automatically recognize zip codes on mail. <img src="images/mnist.png" alt="" style="width: 60%"/> A matrix can represent an image, by creating a grid where each entry corresponds to a different pixel. <img src="images/digit.gif" alt="digit" style="width: 55%"/> (Source: Adam Geitgey ) Download Let's download, unzip, and format the data. End of explanation mean = x.mean() std = x.std() x=(x-mean)/std x.mean(), x.std() Explanation: Normalize Many machine learning algorithms behave better when the data is normalized, that is when the mean is 0 and the standard deviation is 1. 
We will subtract off the mean and standard deviation from our training set in order to normalize the data: End of explanation x_valid = (x_valid-mean)/std x_valid.mean(), x_valid.std() Explanation: Note that for consistency (with the parameters we learn when training), we subtract the mean and standard deviation of our training set from our validation set. End of explanation %matplotlib inline import numpy as np import matplotlib.pyplot as plt def show(img, title=None): plt.imshow(img, interpolation='none', cmap="gray") if title is not None: plt.title(title) def plots(ims, figsize=(12,6), rows=2, titles=None): f = plt.figure(figsize=figsize) cols = len(ims)//rows for i in range(len(ims)): sp = f.add_subplot(rows, cols, i+1) sp.axis('Off') if titles is not None: sp.set_title(titles[i], fontsize=16) plt.imshow(ims[i], interpolation='none', cmap='gray') Explanation: Look at the data In any sort of data science work, it's important to look at your data, to make sure you understand the format, how it's stored, what type of values it holds, etc. To make it easier to work with, let's reshape it into 2d images from the flattened 1d format. Helper methods End of explanation x_valid.shape x_imgs = np.reshape(x_valid, (-1,28,28)); x_imgs.shape show(x_imgs[0], y_valid[0]) y_valid.shape Explanation: Plots End of explanation y_valid[0] Explanation: It's the digit 3! And that's stored in the y value: End of explanation x_imgs[0,10:15,10:15] show(x_imgs[0,10:15,10:15]) plots(x_imgs[:8], titles=y_valid[:8]) Explanation: We can look at part of an image: End of explanation from fastai.metrics import * from fastai.model import * from fastai.dataset import * from fastai.core import * import torch.nn as nn Explanation: The Most Important Machine Learning Concepts Functions, parameters, and training A function takes inputs and returns outputs. For instance, $f(x) = 3x + 5$ is an example of a function. If we input $2$, the output is $3\times 2 + 5 = 11$, or if we input $-1$, the output is $3\times -1 + 5 = 2$ Functions have parameters. The above function $f$ is $ax + b$, with parameters a and b set to $a=3$ and $b=5$. Machine learning is often about learning the best values for those parameters. For instance, suppose we have the data points on the chart below. What values should we choose for $a$ and $b$? <img src="images/sgd2.gif" alt="" style="width: 70%"/> In the above gif fast.ai Practical Deep Learning for Coders course, intro to SGD notebook), an algorithm called stochastic gradient descent is being used to learn the best parameters to fit the line to the data (note: in the gif, the algorithm is stopping before the absolute best parameters are found). This process is called training or fitting. Most datasets will not be well-represented by a line. We could use a more complicated function, such as $g(x) = ax^2 + bx + c + \sin d$. Now we have 4 parameters to learn: $a$, $b$, $c$, and $d$. This function is more flexible than $f(x) = ax + b$ and will be able to accurately model more datasets. Neural networks take this to an extreme, and are infinitely flexible. They often have thousands, or even hundreds of thousands of parameters. However the core idea is the same as above. The neural network is a function, and we will learn the best parameters for modeling our data. Training & Validation data sets Possibly the most important idea in machine learning is that of having separate training & validation data sets. As motivation, suppose you don't divide up your data, but instead use all of it. 
And suppose you have lots of parameters: This is called over-fitting. A validation set helps prevent this problem. <img src="images/overfitting2.png" alt="" style="width: 70%"/> <center> Underfitting and Overfitting </center> The error for the pictured data points is lowest for the model on the far right (the blue curve passes through the red points almost perfectly), yet it's not the best choice. Why is that? If you were to gather some new data points, they most likely would not be on that curve in the graph on the right, but would be closer to the curve in the middle graph. This illustrates how using all our data can lead to overfitting. Neural Net (with nn.torch) Imports End of explanation md = ImageClassifierData.from_arrays(path, (x,y), (x_valid, y_valid)) Explanation: Neural networks We will use fastai's ImageClassifierData, which holds our training and validation sets and will provide batches of that data in a form ready for use by a PyTorch model. End of explanation net = nn.Sequential( nn.Linear(28*28, 256), nn.ReLU(), nn.Linear(256, 10) ).cuda() Explanation: We will begin with the highest level abstraction: using a neural net defined by PyTorch's Sequential class. End of explanation loss=F.cross_entropy metrics=[accuracy] opt=optim.Adam(net.parameters()) Explanation: Each input is a vector of size $28\times 28$ pixels and our output is of size $10$ (since there are 10 digits: 0, 1, ..., 9). We use the output of the final layer to generate our predictions. Often for classification problems (like MNIST digit classification), the final layer has the same number of outputs as there are classes. In that case, this is 10: one for each digit from 0 to 9. These can be converted to comparative probabilities. For instance, it may be determined that a particular hand-written image is 80% likely to be a 4, 18% likely to be a 9, and 2% likely to be a 3. In our case, we are not interested in viewing the probabilites, and just want to see what the most likely guess is. Layers Sequential defines layers of our network, so let's talk about layers. Neural networks consist of linear layers alternating with non-linear layers. This creates functions which are incredibly flexible. Deeper layers are able to capture more complex patterns. Layer 1 of a convolutional neural network: <img src="images/zeiler1.png" alt="pytorch" style="width: 40%"/> <center> Matthew Zeiler and Rob Fergus </center> Layer 2: <img src="images/zeiler2.png" alt="pytorch" style="width: 90%"/> <center> Matthew Zeiler and Rob Fergus </center> Deeper layers can learn about more complicated shapes (although we are only using 2 layers in our network): <img src="images/zeiler4.png" alt="pytorch" style="width: 90%"/> <center> Matthew Zeiler and Rob Fergus </center> Training the network Next we will set a few inputs for our fit method: - Optimizer: algorithm for finding the minimum. typically these are variations on stochastic gradient descent, involve taking a step that appears to be the right direction based on the change in the function. - Loss: what function is the optimizer trying to minimize? We need to say how we're defining the error. - Metrics: other calculations you want printed out as you train End of explanation fit(net, md, epochs=1, crit=loss, opt=opt, metrics=metrics) Explanation: Fitting is the process by which the neural net learns the best parameters for the dataset. 
End of explanation preds = predict(net, md.val_dl) preds = preds.max(1)[1] Explanation: GPUs are great at handling lots of data at once (otherwise don't get performance benefit). We break the data up into batches, and that specifies how many samples from our dataset we want to send to the GPU at a time. The fastai library defaults to a batch size of 64. On each iteration of the training loop, the error on 1 batch of data will be calculated, and the optimizer will update the parameters based on that. An epoch is completed once each data sample has been used once in the training loop. Now that we have the parameters for our model, we can make predictions on our validation set. End of explanation plots(x_imgs[:8], titles=preds[:8]) Explanation: Let's see how some of our preditions look! End of explanation # Our code from above net = nn.Sequential( nn.Linear(28*28, 256), nn.ReLU(), nn.Linear(256, 10) ).cuda() Explanation: These predictions are pretty good! Coding the Neural Net ourselves Recall that above we used PyTorch's Sequential to define a neural network with a linear layer, a non-linear layer (ReLU), and then another linear layer. End of explanation def get_weights(*dims): return nn.Parameter(torch.randn(*dims)/dims[0]) class SimpleMnist(nn.Module): def __init__(self): super().__init__() self.l1_w = get_weights(28*28, 256) # Layer 1 weights self.l1_b = get_weights(256) # Layer 1 bias self.l2_w = get_weights(256, 10) # Layer 2 weights self.l2_b = get_weights(10) # Layer 2 bias def forward(self, x): x = x.view(x.size(0), -1) x = torch.matmul(x, self.l1_w) + self.l1_b # Linear Layer x = x * (x > 0).float() # Non-linear Layer x = torch.matmul(x, self.l2_w) + self.l2_b # Linear Layer return x Explanation: It turns out that Linear is defined by a matrix multiplication and then an addition. Let's try defining this ourselves. This will allow us to see exactly where matrix multiplication is used (we will dive in to how matrix multiplication works in teh next section). Just as Numpy has np.matmul for matrix multiplication (in Python 3, this is equivalent to the @ operator), PyTorch has torch.matmul. PyTorch class has two things: constructor (says parameters) and a forward method (how to calculate prediction using those parameters) The method forward describes how the neural net converts inputs to outputs. In PyTorch, the optimizer knows to try to optimize any attribute of type Parameter. End of explanation net2 = SimpleMnist().cuda() opt=optim.Adam(net2.parameters()) fit(net2, md, epochs=1, crit=loss, opt=opt, metrics=metrics) Explanation: We create our neural net and the optimizer. (We will use the same loss and metrics from above). End of explanation preds = predict(net2, md.val_dl).max(1)[1] plots(x_imgs[:8], titles=preds[:8]) Explanation: Now we can check our predictions: End of explanation a = np.array([10, 6, -4]) b = np.array([2, 8, 7]) a + b a < b Explanation: what torch.matmul (matrix multiplication) is doing Now let's dig in to what we were doing with torch.matmul: matrix multiplication. First, let's start with a simpler building block: broadcasting. Element-wise operations Broadcasting and element-wise operations are supported in the same way by both numpy and pytorch. Operators (+,-,*,/,>,<,==) are usually element-wise. Examples of element-wise operations: End of explanation a a > 0 Explanation: Broadcasting The term broadcasting describes how arrays with different shapes are treated during arithmetic operations. 
The term broadcasting was first used by Numpy, although is now used in other libraries such as Tensorflow and Matlab; the rules can vary by library. From the Numpy Documentation: The term broadcasting describes how numpy treats arrays with different shapes during arithmetic operations. Subject to certain constraints, the smaller array is “broadcast” across the larger array so that they have compatible shapes. Broadcasting provides a means of vectorizing array operations so that looping occurs in C instead of Python. It does this without making needless copies of data and usually leads to efficient algorithm implementations. In addition to the efficiency of broadcasting, it allows developers to write less code, which typically leads to fewer errors. This section was adapted from Chapter 4 of the fast.ai Computational Linear Algebra course. Broadcasting with a scalar End of explanation a + 1 m = np.array([[1, 2, 3], [4,5,6], [7,8,9]]); m m * 2 Explanation: How are we able to do a > 0? 0 is being broadcast to have the same dimensions as a. Remember above when we normalized our dataset by subtracting the mean (a scalar) from the entire data set (a matrix) and dividing by the standard deviation (another scalar)? We were using broadcasting! Other examples of broadcasting with a scalar: End of explanation c = np.array([10,20,30]); c m + c Explanation: Broadcasting a vector to a matrix We can also broadcast a vector to a matrix: End of explanation np.broadcast_to(c, (3,3)) c.shape Explanation: Although numpy does this automatically, you can also use the broadcast_to method: End of explanation np.expand_dims(c,0).shape m + np.expand_dims(c,0) np.expand_dims(c,1).shape m + np.expand_dims(c,1) np.broadcast_to(np.expand_dims(c,1), (3,3)) Explanation: The numpy expand_dims method lets us convert the 1-dimensional array c into a 2-dimensional array (although one of those dimensions has value 1). End of explanation m, c m @ c # np.matmul(m, c) Explanation: Broadcasting Rules When operating on two arrays, Numpy/PyTorch compares their shapes element-wise. It starts with the trailing dimensions, and works its way forward. Two dimensions are compatible when they are equal, or one of them is 1 Arrays do not need to have the same number of dimensions. For example, if you have a $256 \times 256 \times 3$ array of RGB values, and you want to scale each color in the image by a different value, you can multiply the image by a one-dimensional array with 3 values. Lining up the sizes of the trailing axes of these arrays according to the broadcast rules, shows that they are compatible: Image (3d array): 256 x 256 x 3 Scale (1d array): 3 Result (3d array): 256 x 256 x 3 The numpy documentation includes several examples of what dimensions can and can not be broadcast together. Matrix Multiplication We are going to use broadcasting to define matrix multiplication. Matrix-Vector Multiplication End of explanation torch.matmul(torch.from_numpy(m), torch.from_numpy(c)) Explanation: We get the same answer using torch.matmul: End of explanation m * c (m * c).sum(axis=1) c np.broadcast_to(c, (3,3)) Explanation: The following is NOT matrix multiplication. What is it? End of explanation n = np.array([[10,40],[20,0],[30,-5]]); n m @ n (m * n[:,0]).sum(axis=1) (m * n[:,1]).sum(axis=1) Explanation: From a machine learning perspective, matrix multiplication is a way of creating features by saying how much we want to weight each input column. Different features are different weighted averages of the input columns. 
The website matrixmultiplication.xyz provides a nice visualization of matrix multiplication Draw a picture End of explanation import numpy as np #Exercise: Use Numpy to compute the answer to the above Explanation: Homework: another use of broadcasting If you want to test your understanding of the above tutorial, I encourage you to work through it again, only this time use CIFAR 10, a dataset that consists of 32x32 color images in 10 different categories. Color images have an extra dimension, containing RGB values, compared to black & white images. <img src="images/cifar10.png" alt="" style="width: 70%"/> <center> (source: Cifar 10) </center> Fortunately, broadcasting will make it relatively easy to add this extra dimension (for color RGB), but you will have to make some changes to the code. Other applications of Matrix and Tensor Products Here are some other examples of where matrix multiplication arises. This material is taken from Chapter 1 of my Computational Linear Algebra course. Matrix-Vector Products: The matrix below gives the probabilities of moving from 1 health state to another in 1 year. If the current health states for a group are: - 85% asymptomatic - 10% symptomatic - 5% AIDS - 0% death what will be the % in each health state in 1 year? <img src="images/markov_health.jpg" alt="floating point" style="width: 80%"/>(Source: Concepts of Markov Chains) Answer End of explanation #Exercise: Use Numpy to compute the answer to the above Explanation: Matrix-Matrix Products <img src="images/shop.png" alt="floating point" style="width: 100%"/>(Source: Several Simple Real-world Applications of Linear Algebra Tools) Answer End of explanation
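Since the two exercises above are left open in the notebook, here is one hedged sketch of how they could be attacked with NumPy. The transition probabilities below are placeholders chosen only so that each row sums to one — the real numbers live in the figure referenced above — and the RGB scaling line is just a reminder of the CIFAR-10 broadcasting trick.

```python
import numpy as np

# --- Matrix-vector product: propagate the health-state fractions one year ---
# Placeholder transition matrix (rows = current state, columns = next state);
# substitute the probabilities from the figure above for a real answer.
T = np.array([[0.90, 0.07, 0.02, 0.01],   # asymptomatic
              [0.00, 0.93, 0.05, 0.02],   # symptomatic
              [0.00, 0.00, 0.85, 0.15],   # AIDS
              [0.00, 0.00, 0.00, 1.00]])  # death
x0 = np.array([0.85, 0.10, 0.05, 0.00])   # current fractions in each state

x1 = x0 @ T                               # state fractions after one year
print(x1, x1.sum())                       # still sums to 1, since each row of T sums to 1

# --- Broadcasting homework: scale each colour channel of an RGB image ---
img = np.random.rand(32, 32, 3)           # stand-in for a CIFAR-10 image
scaled = img * np.array([0.9, 1.0, 1.1])  # the trailing axis (3) broadcasts per channel
print(scaled.shape)
```

The matrix-matrix version (the shop example) works the same way: stack several demand vectors as the rows of a matrix and do a single matrix-matrix product instead of repeated matrix-vector products.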
6,439
Given the following text description, write Python code to implement the functionality described below step by step Description: <h1><div align='left'> Open Source can improve current scientific practice</div></h1> <h3>Ipython notebooks are a great tool to support this</h3> Sophie Balemans and Stijn Van Hoey EGU 2015 - PICO session on Open Source Computing in Hydrology The ideals of (hydrological) science Provide verifiable answers about water and solutions to water-related problems. The validation of these results by reproduction. An altruistic, collective enterprise for humanity's benefit F Perez The <span class="strike">ideals</span> reality of (hydrological) science <span class="strike">Provide verifiable answers about water and solutions to water-related problems.</span> The pursuit of highly cited papers for your CV. <span class="strike">The validation of our results by reproduction.</span> Validation by convincing journal referees who didn't see your code or data. <span class="strike">An altruistic, collective enterprise for humanity's benefit.</span> A deadly race to outrun your colleagues in front of the bear of funding. F Perez Free and Open Source Software (FOSS) in this context <span class="emph">Open, collaborative by definition.</span> Industrial competition can coexist... <span class="emph">Continuous</span>, public process. Distributed credit. Open peer review. <span class="emph">Reproducible by necessity.</span> <span class="emph">Public bug tracking.</span> The use of <span class="emph">licenses</span> is essential (CC, BSD, GPL,...) F Perez FOSS $\neq$ free work <center><img src="figs/workfree.png" width="70%"></center> All waiting for the developer... <center><img src="figs/crowd.jpg" width="70%"></center> <div class="center"><h2>...or all developers?</h2></div> Graveyard of good intentions <center><img src="figs/graveyard.jpg" width="70%"></center> Towards continuous and collaborative What do we need Step1: Measured vs modelled discharge Step2: Exploring the parameter space Simulating 20000 <span class="emph">Monte-Carlo</span> runs Sampling from <span class="emph">uniform distribution</span> <span class="emph">Parallel</span> calculation within IPython notebook Step3: Visualisation in 2D scatter plot Step4: What about the model? Parameter boundaries correct? Optimal paramersets change periodically... Correct model structure? NOT an optimization tool! The function Select parametersets based on
Python Code: # %load PDM_HPC.py pars =pd.read_csv('data/example2_PDM_parameters.txt',header=0, sep=',', index_col=0) measured = pd.read_csv('data/example_PDM_measured.txt', header=0, sep='\t', decimal='.', index_col=0) modelled = pd.read_csv('data/example2_PDM_outputs.txt',header=0, sep=',', index_col=0).T modeloutput1 = pd.DataFrame(modelled.iloc[:,0].values, index=measured.index) Explanation: <h1><div align='left'> Open Source can improve current scientific practice</div></h1> <h3>Ipython notebooks are a great tool to support this</h3> Sophie Balemans and Stijn Van Hoey EGU 2015 - PICO session on Open Source Computing in Hydrology The ideals of (hydrological) science Provide verifiable answers about water and solutions to water-related problems. The validation of these results by reproduction. An altruistic, collective enterprise for humanity's benefit F Perez The <span class="strike">ideals</span> reality of (hydrological) science <span class="strike">Provide verifiable answers about water and solutions to water-related problems.</span> The pursuit of highly cited papers for your CV. <span class="strike">The validation of our results by reproduction.</span> Validation by convincing journal referees who didn't see your code or data. <span class="strike">An altruistic, collective enterprise for humanity's benefit.</span> A deadly race to outrun your colleagues in front of the bear of funding. F Perez Free and Open Source Software (FOSS) in this context <span class="emph">Open, collaborative by definition.</span> Industrial competition can coexist... <span class="emph">Continuous</span>, public process. Distributed credit. Open peer review. <span class="emph">Reproducible by necessity.</span> <span class="emph">Public bug tracking.</span> The use of <span class="emph">licenses</span> is essential (CC, BSD, GPL,...) F Perez FOSS $\neq$ free work <center><img src="figs/workfree.png" width="70%"></center> All waiting for the developer... <center><img src="figs/crowd.jpg" width="70%"></center> <div class="center"><h2>...or all developers?</h2></div> Graveyard of good intentions <center><img src="figs/graveyard.jpg" width="70%"></center> Towards continuous and collaborative What do we need: <span class="emph">Training</span> of students, Phds,... <br>Creating a future generation of scientists with reproducibility as default<br> Provide version control, script-based development, database management... in the curricula <span class="emph">Continuous funding</span> of open source development<br>Payed to maintain and develop open source projects <span class="emph">Tools</span> that facilitate a reproducible workflow<br>knitr, Ipython Notebook, git, RunMyCode, VIStrails, Authorea,... <div class="center"><h2>Ipython Notebook</h2></div> Reproducible science Reproducibility <span class="emph">at publication time?</span> is TOO late! We need to embed the <span class="emph">entire lifecycle</span> of a scientific idea: exploratory stuff (collaborative) development production (simulations on HPC, data visualisation,...) publication (with <span class="warn">reproducible</span> results) teach about it Go back to 1. Ipython (Jupyter!) Notebook can support on the different levels Ipython (Jupyter!) Notebook... (this is a notebook!) <div class="center"><h3>Minimize effort between analysis and sharing</h3></div> <center><img src="figs/IPynbWorkflows.png" width="70%"></center> Interactive shell for data-analysis and exploration Interaction between languages (R, Julia,...) 
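The slides above mention turning the very same notebook into course notes, HTML pages and slide decks with nbconvert; a rough sketch of those conversions is below. The exact command name depends on the IPython/Jupyter version in use (older releases shipped `ipython nbconvert`, current ones use `jupyter nbconvert`), and the notebook filename here is hypothetical.

```python
# Run from a notebook cell (the leading ! sends the line to the shell)
!jupyter nbconvert --to html   egu2015_pico.ipynb   # static web page
!jupyter nbconvert --to slides egu2015_pico.ipynb   # reveal.js slide deck
!jupyter nbconvert --to latex  egu2015_pico.ipynb   # LaTeX source (a PDF via --to pdf)
```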
Parallel computing ipynb to latex, pdf, html, html,slides, publications, books,... Loading of images, websites, widgets,... ... Check it out on https://github.com/ipython/ipython/wiki/A-gallery-of-interesting-IPython-Notebooks Scripts, so it can be version controlled! <center><img src="figs/version-control.jpg" width="70%"></center> Recap 5. teach about it The same file can be used to do analysis, create course notes and retrieve slides using <span class="emph">nbconvert</span> Students can <span class="emph">interactively</span> work on their notebook Different useful <span class="emph">features</span>: eg. <div class="center"><h2>interactive widgets</h2></div> Conceptual rainfall-runoff model <br/> <center><img src="figs/PDM_adapted.png" width="80%"></center> End of explanation fig, ax = plt.subplots(figsize=(10, 6)) p1 = measured.plot(ax=ax, label='measured') p2 = modeloutput1.plot(ax=ax, label='modelled') t = ax.set_ylabel(r'Q m$^3$s$^{-1}$') plt.legend(['measured', 'modelled']) from scatter_hist2 import create_scatterhist, create_seasonOF names = pars.columns time=np.array(measured.index) modelled.index = time pars_name={} for i in range(0, names.size): pars_name[names[i]]=i Explanation: Measured vs modelled discharge: End of explanation objective_functions = create_seasonOF(modelled, measured) Explanation: Exploring the parameter space Simulating 20000 <span class="emph">Monte-Carlo</span> runs Sampling from <span class="emph">uniform distribution</span> <span class="emph">Parallel</span> calculation within IPython notebook End of explanation from scatter_hist_season import create_scatterhist scatter = create_scatterhist(pars, 2, 1, objective_functions, names, objective_function='SSE', threshold=0.4, season = 'Winter') scatter = create_scatterhist(pars, 2, 1, objective_functions, names, objective_function='SSE', threshold=0.4, season = 'Spring') Explanation: Visualisation in 2D scatter plot End of explanation #Loading interact functionality from IPython.html.widgets import interact, fixed Explanation: What about the model? Parameter boundaries correct? Optimal paramersets change periodically... Correct model structure? NOT an optimization tool! The function Select parametersets based on: Objective function (SSE, RMSE, RRMSE) Time period of interest (whole year or specific season) Relative threshold (scaled between 0 and 1) Visualisation of a 2D parameter response surface of selected parametersets together with histograms More interactive? command: interact(...) End of explanation
6,440
Given the following text description, write Python code to implement the functionality described below step by step Description: Outline Glossary 5. Imaging Previous Step1: Import section specific modules Step2: 5.1 Spatial Frequencies<a id='imaging Step3: For simplicity convert the RGB-color images to grayscale Step4: Start by displaying the images in the spatial domain, i.e. the domain we usually look at images in. We use the term spatial domain to describe signals which are ordered by a distance which is directly related to the physical distance between the two signals, e.g. for the duck image below the eye is close to the head and far away from the foot as it is in the real world. This may seem like such a simple concept and you may wonder why the point is even being made. But, once we step into the spatial frequency domain our intution will be lost and we need to use the spatial domain relationship to try to regain our footing. Step5: Figure Step6: Figure Step7: Figure Step8: Figure Step9: Figure Step10: Figure Step11: Figure Step12: Figure Step13: Figure Step14: Left Step15: Left Step16: Left Step17: Left Step18: Left Step19: Left Step20: Left Step22: Left
Python Code: import numpy as np import matplotlib.pyplot as plt %matplotlib inline from IPython.display import HTML HTML('../style/course.css') #apply general CSS Explanation: Outline Glossary 5. Imaging Previous: 5. Introduction Next: 5.2 Sampling and Point Spread Functions Import standard modules: End of explanation import matplotlib.image as mpimg from IPython.display import HTML HTML('../style/code_toggle.html') Explanation: Import section specific modules: End of explanation #soccer = mpimg.imread('figures/WLA_moma_Umberto_Boccioni_Dynamism_of_a_Soccer_Player_1913_512.png') cyclist = mpimg.imread('figures/Umberto_Boccioni_Dynamism_of_a_Cyclist_512.png') duck = mpimg.imread('figures/Anas_platyrhynchos_male_female_quadrat_512.png') Explanation: 5.1 Spatial Frequencies<a id='imaging:sec:spatial'></a> To first approximation, the observed sky and sampled visibilites have a Fourier pair relationship via van Cittert–Zernike theorem &#10142;, that is the Fourier Transform of the sampled visibilites is an image of the observed sky, and the Fourier Transform of the observed sky image is the sampled visibilities. Before we dive into specifics of radio interferometric imaging using aperture synthesis, we will cover the ideas which relate the spatial domain (i.e. an image) to the spatial frequency domain (i.e. the visibilities). To do this we will start with the Fourier transform which is the mathematics that relate the two domains. Using that framework we will transform simple signals between the two domains to understand how the a signal in one domain appears in the other. This is the groundwork we need to understand before we can start constructing sky images from the observed visibilities of an interferometric array. 5.1.1 The Fourier Transform in Two Dimensions<a id='imaging:sec:ft2d'></a> We will start with a bit of mathematics, but will soon move to some visual examples which are often more useful to get started to learn about Fourier transforms. As described in $\S$ 2.4 &#10142; the Fourier Transform of a continuous, 1-dimensional function $f(x)$ is \begin{equation} g(t) = \int_{-\infty}^{\infty} f(x) e^{-2\pi i xt} dx \end{equation} The discrete Fourier transform (DFT) $\S$ 2.8 &#10142; of a discrete, 1-dimensional function $f(x)$ is $$g(t) = \sum_{-\infty}^{\infty} f(x) e^{-2\pi i xt} dx$$ The notation $g \rightleftharpoons f$ denotes that the function $f$ and $g$ are Fourier pairs, that is if $g$ is the Fourier transform of $f$, then $f$ is the inverse Fourier transform of $g$. A 2-dimensional Fourier transform is simply the product of the Fourier Transform in each dimension (as each dimension is orthogonal). $$ I(l,m) = \int_{-\infty}^{\infty} V_u(u) e^{-2\pi iul} \,du \int_{-\infty}^{\infty} V_v(v) e^{-2\pi ivm} \, dv = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} V(u,v) e^{-2\pi i(ul+vm)} \,du\,dv $$ By Euler's forumla, $e^{ix} = \cos x + i \sin x$, this can be expanded to $$ I(l,m) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} V(u,v) (\cos(2\pi(ul+vm)) - i\sin(2\pi(ul+vm))) \,du\,dv $$ If $I(l,m)$ represents an image, i.e. a discrete 2-dimensional function, then we see that $I(l,m)$ can be represented as the Fourier Transform of the visibilities $V(u,v)$. By Euler's formula the Fourier Transform is the decomposition of the visibilties into sine and cosine functions. 
<a id="eq:dft2_IV"></a> $$ I(l,m) = \sum_{u=-\infty}^{\infty} \sum_{v=-\infty}^{\infty} V(u,v) (\cos(2\pi(ul+vm)) - i\sin(2\pi(ul+vm))) \,\Delta u\,\Delta v $$ As the image-visibility relation can be approximated as a Fourier pair $I \rightleftharpoons V$, the visibilities $V(u,v)$ can be described as the sine/cosine decomposition of the image $I(l,m)$ <a id="eq:dft2_VI"></a> $$ V(u,v) = \sum_{l=-1}^{1} \sum_{m=-1}^{1} I(l,m) (\cos(2\pi(ul+vm)) + i\sin(2\pi(ul+vm))) \,\Delta l\,\Delta m $$ The approximation of the relation between the image and visibilities is fundamental to using interferometry for aperture synthesis. There are many ways to interpret this relationship. A useful idea is that the sky is made up of mostly 'point source' objects which one might call stars or galaxies or blobs. Whatever these are, they are 'unresolved' sources or delta functions. Equation above &#10549; states that a delta function in an image is a complex sinusoidal wave in the visibility domain and vica-versa. This means that one does not need to fully sample the visibility space to detect a point source in the image domain. Put in other words, this is an issue a signal being sparse in one domain (image) and dense in another domain (visibility). It is to our advantage to make a measurement in a domain where the signal is spread through-out the space and then transform the observed signal into a more preferable (but sparser) domain. To understand this concept which is the core of interferometric imaging lets look at a few examples which will hopefully start to create some intuition about the image-visibility Fourier relationship. 5.1.2 The Fourier Transform of an Image<a id='imaging:sec:ftImage'></a> Let us start by looking at the effect applying a Fourier Transform to an image. Note, through out this section we will be using the discrete fast Fourier Transform &#10142;. The transformation of irregularly sampled visibilities to a regularly gridded array is discussed in $\S$ 5.3, for now we will only assume our signal is regularly sampled (i.e gridded). Let us start by looking at some generic images, the first is a painting by Umberto Boccioni called Dynamism of a Cyclist and the other is a duck (well, one full male mallard and part of a female mallard). End of explanation def rgb2gray(rgb): r, g, b = rgb[:,:,0], rgb[:,:,1], rgb[:,:,2] gray = 0.2989 * r + 0.5870 * g + 0.1140 * b #standard grayscale conversion return gray gCyclist = rgb2gray(cyclist) gDuck = rgb2gray(duck) Explanation: For simplicity convert the RGB-color images to grayscale: End of explanation fig = plt.figure(figsize=(8,8)) plt.title('Dynamism of a Cyclist') img0plot = plt.imshow(gCyclist) img0plot.set_cmap('gray') #fig.savefig("cyclist_gray.png", bbox_inches='tight', pad_inches=0) Explanation: Start by displaying the images in the spatial domain, i.e. the domain we usually look at images in. We use the term spatial domain to describe signals which are ordered by a distance which is directly related to the physical distance between the two signals, e.g. for the duck image below the eye is close to the head and far away from the foot as it is in the real world. This may seem like such a simple concept and you may wonder why the point is even being made. But, once we step into the spatial frequency domain our intution will be lost and we need to use the spatial domain relationship to try to regain our footing. 
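Before moving to the image examples, a small numerical sanity check of the Fourier-pair idea written above can help: a single point source in the image plane produces a constant-amplitude fringe in the spatial-frequency plane, and the inverse transform recovers the image exactly. This is only an illustrative sketch using NumPy's FFT, loosely mirroring the shifting conventions used in the code that follows.

```python
import numpy as np

N = 64
img = np.zeros((N, N))
img[N//2, N//2 + 5] = 1.0             # a unit 'point source', offset from the centre

# forward transform: image -> 'visibilities'
vis = np.fft.fftshift(np.fft.fft2(np.fft.fftshift(img)))
print(np.allclose(np.abs(vis), 1.0))  # True: a delta function has flat amplitude

# inverse transform: 'visibilities' -> image
recovered = np.fft.fftshift(np.fft.ifft2(np.fft.fftshift(vis))).real
print(np.allclose(recovered, img))    # True: the two domains form a Fourier pair
```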
End of explanation fig = plt.figure(figsize=(8,8)) plt.title('A Duck') img1plot = plt.imshow(gDuck) img1plot.set_cmap('gray') #fig.savefig("duck_gray.png", bbox_inches='tight', pad_inches=0) Explanation: Figure: Dynamism of a Cyclist by Umberto Boccioni End of explanation fftCyclist = np.fft.fftshift(np.fft.fft2(gCyclist)) fig, axes = plt.subplots(figsize=(16,8)) plt.suptitle('The Fourier Transform of \'Dynamism of a Cyclist\'') plt.subplot(1,2,1) plt.imshow( 10. * np.log10(np.abs(fftCyclist))) #amplitude (decibels) plt.subplot(1,2,2) plt.imshow( np.angle(fftCyclist)) #phase #fig.savefig("soccer_fft_gray.png", bbox_inches='tight', pad_inches=0) Explanation: Figure: A Duck With our eyes and visual learning to understand signals in the spatial domian these are clearly two different images. This may seem like an obvious point, of course they are very different images, but that is because we have learned to understand signals in the spatial domain very well. The spatial frequency domain represents the decomposition of a spatial domain signal into complex sinusoidal waves. In the next section we will look at some simple examples which hopefully provide some insight into this domain. A useful metaphor here is to that of painting an image with a number different-sized paint brushes at different stroke angles. The painting is the spatial-domain image, and one can think of a point in the spatial-frequency domain as representing the amount of paint (in our case, a complex power) applied using a specific brush size at a specific stroke angle. In the spatial frequency domain, all points at the same radius represent the same sized paint brush but at all possible stroke angles. And, points further out represent smaller paint brushes (that is the brushes which can do fine details) compared to points closer to the centre which are for the large structures such as flat backgrounds. Additionally, not as much stroke angle control is required for the large paint brushes compared to the smaller brushes, so only a few points are need in the spatial frequency domain to fully represent a large brush. Now, let us take a step in the spatial frequency domain via a 2-dimensional Fourier Transform. This results in a complex 2-d array where the pixels closest to the centre of the array represent the low-frequency, or 'large-scale', structure of the original image. At the edge of the array are the high-frequency, or 'fine-scale', structure of the image. The high-frequency components represent the details of the image. End of explanation fftDuck = np.fft.fftshift(np.fft.fft2(gDuck)) fig, axes = plt.subplots(figsize=(16,8)) plt.suptitle('The Fourier Transform of a Duck') plt.subplot(1,2,1) plt.imshow( 10. 
* np.log10(np.abs(fftDuck))) #amplitude (decibels) plt.subplot(1,2,2) plt.imshow(np.angle(fftDuck)) #phase #fig.savefig("duck_fft_gray.png", bbox_inches='tight', pad_inches=0) Explanation: Figure: Amplitude, in decibels, (left) and the phase (right) of the Fourier transform of 'Dynamism of a Cyclist' End of explanation fig = plt.figure(figsize=(8,8)) plt.title('Hybrid image: amplitude (painting), phase (duck)') phs = np.angle(fftDuck) #phase of the duck amp = np.abs(fftCyclist) #amplitude of the painting fftHybrid = amp * (np.cos(phs) + 1j * np.sin(phs)) #construct an image with the phase of a duck and amplitude of the painting hybrid = np.abs(np.fft.ifft2(np.fft.fftshift(fftHybrid))) #compute the inverse Fourier Transform hybridPlot = plt.imshow(hybrid) hybridPlot.set_cmap('gray') #fig.savefig("hybrid_phs_duck_amp_cyclist.png", bbox_inches='tight', pad_inches=0) Explanation: Figure: Amplitude, in decibels, (left) and the phase (right) of the Fourier transform of a duck As can be seen in the amplitude figures on the left the majority of the power is in the central region, i.e. the majority of the signal can be represented with large scale, or low-frequency sinusoidal waves. Try plotting the amplitudes without first rescaling them logarithmically and you should see only a few bright pixels in the middle of the figure. In the phase figures the values range from $-2\pi$ to $2\pi$ and are fairly random. There is some coherent structure that goes left to right and top to bottom passing through the centre of the figure. The main insight to gain from the phase figure is that given a complex image, of say a duck for example, leads to very complex phase structure. An interesting question to consider is 'which is more important, amplitude or phase?' This seems like an odd question, because intuition would seem to indicate that both amplitude and phase information is needed. It turns out that phase information contains most of the structural information. Let us start by taking a look at what this means, and then get into why this is the case. Start by switching the phase information of the two images. End of explanation fig = plt.figure(figsize=(8,8)) plt.title('Hybrid image: amplitude (duck), phase (painting)') phs = np.angle(fftCyclist) #phase of the painting amp = np.abs(fftDuck) #amplitude of the duck fftHybrid = amp * (np.cos(phs) + 1j * np.sin(phs)) #construct an image with the phase of a painting and amplitude of the duck hybrid = np.abs(np.fft.ifft2(np.fft.fftshift(fftHybrid))) #compute the inverse Fourier Transform hybridPlot = plt.imshow(hybrid) hybridPlot.set_cmap('gray') #fig.savefig("hybrid_phs_soccer_amp_duck.png", bbox_inches='tight', pad_inches=0) Explanation: Figure: 'Dynamism of a Duck', reconstructed hybrid image using the spatial frequency amplitudes of 'Dynamism of a Cyclist' and the phases of the duck. This seems kind of like an amazing thing to see, we have combined the two images (using half the information from each) and clearly we see a duck and not the painting. In fact, the painting is not visable at all. When we instead use the phase of the painting and the amplitude of the duck we now see the painting and not the duck. 
End of explanation fig = plt.figure(figsize=(8,8)) plt.title('Duck (phase-only)') phs = np.angle(fftDuck) amp = 1.*np.ones_like(fftDuck) #set all the amplitude values to 1 fftPhsImg0 = amp * (np.cos(phs) + 1j * np.sin(phs)) phsImg0 = np.abs(np.fft.ifft2(np.fft.fftshift(fftPhsImg0))) phsImg0Plot = plt.imshow(phsImg0) phsImg0Plot.set_cmap('gray') #fig.savefig("phs_only_duck.png", bbox_inches='tight', pad_inches=0) Explanation: Figure: 'Dynamism of a Duck (Redux)', reconstructed hybrid image using the spatial frequency amplitudes of the duck and the phases of 'Dynamism of a Cyclist'. Though with both hybrid images the image in which the phase is extracted from dominates they are not perfect reconstructions of only one image. In both there is now a sheen of noise across the images. The amplitude information plays a role. This noisiness of the hybrid image is due to the fact that the two original images are effectly uncorrelated, i.e. they are not related. So, one would not expect the amtplitude and phase information from one to match the other. What would happen if instead of swapping the amplitude information of the duck for the painting we just set all the amplitude of all the pixels to one? That is, we effectively throw out any amplitude information and make a phase information-only image? End of explanation fig, axes = plt.subplots(figsize=(16,8)) plt.title('Duck (amplitude-only)') phs = np.zeros_like(fftDuck) #set the phase information to 0 amp = np.abs(fftDuck) fftAmpImg0 = amp plt.subplot(1,2,1) plt.title('Duck (amp-only)') ampImg0 = np.abs(np.fft.fftshift(np.fft.ifft2(fftAmpImg0))) ampImg0Plot = plt.imshow(ampImg0) ampImg0Plot.set_cmap('gray') plt.subplot(1,2,2) plt.title('Duck (amp-only (dB))') ampImg0deci = 10.*np.log10(np.abs(np.fft.fftshift(np.fft.ifft2(fftAmpImg0)))) ampImg0deciPlot = plt.imshow(ampImg0deci) ampImg0deciPlot.set_cmap('gray') #fig.savefig("amp_only_duck.png", bbox_inches='tight', pad_inches=0) Explanation: Figure: reconstructed image of a duck using only the phase information by setting all the amplitudes to unity. The duck is still visibale when we only use the phase information, but now we see something interesting. Only the fine details (i.e. the high-frequency structure) like the little feathers and the outline (edges) are visible. We will get to this later in the chapter but by setting every ampltiude pixel to 1 we have created a weighting function which will favour the fine-scale or high-frequency structure in the image over the large-scale structure such as the background water. Can we do the same trick with using only amplitude information, i.e. reconstruct the image after setting all the phases to zero? End of explanation fig, axes = plt.subplots(figsize=(16,8)) plt.subplot(1,2,1) plt.title('Duck (real-only)') fftRealImg1 = fftDuck.real realImg1 = np.abs(np.fft.ifft2(fftRealImg1)) plt.imshow(realImg1) plt.set_cmap('gray') plt.subplot(1,2,2) plt.title('Duck (imaginary-only)') fftImagImg1 = 1j * fftDuck.imag imagImg1 = np.abs(np.fft.ifft2(fftImagImg1)) plt.imshow(imagImg1) plt.set_cmap('gray') Explanation: Figure: reconstructed image of a duck using only the amplitude information by setting all the phases to zero. Left is in linear and right is in logarithmic (decibel) scale. As it turns out, using only the amplitude information results in an almost empty image with some power spread over the centre of the image. 
Taking the logarithm of the image results in something similar to the amplitude figure of the Fourier transformed image shown earlier in the section. A way to think about the phase and amplitude information is that phase information describes the structure of an image, that is, where the power needs to be placed. And ampltiude information describes the intensity of that structure given a position. The constructive and destructive interference of all the sinusoidal waves leads to a reconstruction of the image. If the amplitudes of all the sinusoidal waves is unity then there is still interference between the waves leading to a partial image. But, if that phase information is gone then there will be no interference and there will only be noise in the reconstructed image. This is an important idea that will come up again soon when we consider interferometric visibility sampling and weighting functions. You will see that we can play with the amplitudes to filter structure out of visibilities. But, we are getting ahead of ourselves here, time enough for that later. As an aside, what happens when we only use the real or imaginary information to reconstruct the image? End of explanation def pointSourceFFT(imgSize, ypos, xpos, amp=1.): img = np.zeros((imgSize+1, imgSize+1)) #odd sized array such that there is a central pixel img[ypos, xpos] = amp #make the central pixel have an intensity of 1 fftImg = np.fft.fft2(np.fft.fftshift(img)) #compute the Fourier transform of the image fig, axes = plt.subplots(figsize=(16,8)) plt.subplot(1,2,1) plt.title('Image') plt.imshow(img, interpolation='nearest') plt.set_cmap('gray') plt.colorbar(shrink=0.5) plt.subplot(1,2,2) plt.title('Fourier Transform (phase)') plt.imshow(np.angle(fftImg)) plt.set_cmap('hsv') plt.colorbar(shrink=0.5) #amplitudes are all 1. print 'FFT Max:', np.max(np.abs(fftImg)), 'Min:', np.min(np.abs(fftImg)) imgSize = 128 pointSourceFFT(imgSize, (imgSize/2)+1, (imgSize/2)+1) Explanation: Figure: reconstructed image of a duck using only the real components (left) and imaginary components (right). There seems to be a lot going on here, but the core of this effect is that the original image is strictly real-valued. When the image is Fourier transformed the resulting signal is now 'twice' the information because it is complex (there is a real and imaginary component at each point) but no new information has been created as it is a linear transform. The Fourier transformed signal is redundant along a diagonal axis, each value above the diagonal has an equivalent (conjugated) value below the diagonal. Now if only the real-component is used to inverse Fourier transform back to the spatial domain then this results in an image which is the sum of the original image and a $180^{\circ}$ rotated image, this is an addition because conjugation does not effect the sign of the real components. The imaginary-component image is the difference image of the original and a $180^{\circ}$ rotated image because the conjugation flips the imaginary sign. The idea with this section has been to start with an image with a significant amount of structure (or information) which we can understand and then to transform that image into a new domain where our intution is lost. And, you may still be lost as we need to create some intution about the spatial frequency domain. In the next section we will start with simple structures to see how we can build up images out of basic components. 
5.1.3 The Fourier Transform of a Point Source<a id='imaging:sec:ftPoint'></a> Instead of looking at the Fourier Transform of an image with a large amount of structure let us examine a few simple images and the effect of applying a Fourier transform. The simpliest image one can make is an empty image with a single pixel, or what it is also called a point source as the source is not resolved (in this case the sizeof a single pixel or smaller), with unity intensity in the centre pixel. End of explanation imgSize = 128 pointSourceFFT(imgSize, (imgSize/2)+1, (imgSize/2)) Explanation: Left: a point source image, a simple image of a the centre pixel with unity intensity. Right: the spatial frequency phase of the point source image, all the phases are zero because the source is exactly in the middle of the image. An image with a single intensity pixel in the centre is equivalent to a 2-dimensional boxcar function with a width the size of the pixel, as the number of pixels in the image increases this approaches a delta function. The Fourier Transform of a delta function ($\S$ 2.4 &#10142;) is a flat field of constant value as can be seen by printing the maximum and minimum amplitudes, they are both 1. When the pixel is at the centre of the image the phase is also flat, but as the position of the pixel is changed the phase will show a fringe pattern related to the position. Moving the source position by a single pixel to the left by one pixel the amplitude remains the same but now a fringe pattern appears left to right across the phase. End of explanation imgSize = 128 pointSourceFFT(imgSize, imgSize/2, (imgSize/2)+1) Explanation: Left: a point source image, a simple image of a pixel with intensity 1, offset from the centre by one pixel to the left. Right: the spatial frequency phase of the point source image. From the phase plot we see that the phase is constant in $y$ with respect to the a fixed $x$ position, and rotates around the unit circle once in the $x$ direction. Instead, moving the position one pixel up from the centre the fringe pattern is now top to bottom. End of explanation imgSize = 128 pointSourceFFT(imgSize, (imgSize/2)+1 - 10, (imgSize/2)+1) #offset the point source 10 pixels north above the centre pointSourceFFT(imgSize, (imgSize/2)+1 - 20, (imgSize/2)+1) #offset the point source 20 pixels north above the centre pointSourceFFT(imgSize, (imgSize/2)+1 - 30, (imgSize/2)+1) #offset the point source 30 pixels north above the centre pointSourceFFT(imgSize, (imgSize/2)+1 - 40, (imgSize/2)+1) #offset the point source 40 pixels north above the centre Explanation: Left: a point source image, a simple image of a pixel with intensity 1, offset from the centre by one pixel up. Right: the spatial frequency phase of the point source image. With the past three examples the minimum and maximum amplitudes have remained the same but we see different phase plots. The direction of the phase indicates the position of the point source relative to the centre of the image. Now we will see that by moving the point source away from the centre the phase frequency will increase. 
End of explanation def pointSourceFFTCircle(imgSize, ypos, xpos, amp=1., radius=10.): img = np.zeros((imgSize+1, imgSize+1)) #odd sized array such that there is a central pixel img[ypos, xpos] = amp #make the central pixel have an intensity of 1 fftImg = np.fft.fft2(np.fft.fftshift(img)) #compute the Fourier transform of the image fig = plt.figure(figsize=(16,8)) ax = fig.add_subplot(1, 2, 1) plt.title('Image') c = plt.Circle(((imgSize/2)+1, (imgSize/2)+1), radius, color='blue', linewidth=1, fill=False) ax.add_patch(c) plt.imshow(img, interpolation='nearest') plt.set_cmap('gray') plt.colorbar(shrink=0.5) ax = fig.add_subplot(1, 2, 2) plt.title('Fourier Transform (phase)') plt.imshow(np.angle(fftImg)) plt.set_cmap('hsv') plt.colorbar(shrink=0.5) #amplitudes are all 1. print 'FFT Max:', np.max(np.abs(fftImg)), 'Min:', np.min(np.abs(fftImg)) imgSize = 128 pointSourceFFTCircle(imgSize, (imgSize/2)+1 - 10, (imgSize/2)+1 - 0, amp=1., radius=10.) pointSourceFFTCircle(imgSize, (imgSize/2)+1 - 7, (imgSize/2)+1 - 7, amp=1.,radius=10.) pointSourceFFTCircle(imgSize, (imgSize/2)+1 - 0, (imgSize/2)+1 - 10, amp=1.,radius=10.) pointSourceFFTCircle(imgSize, (imgSize/2)+1 + 7, (imgSize/2)+1 - 7, amp=1.,radius=10.) pointSourceFFTCircle(imgSize, (imgSize/2)+1 + 10, (imgSize/2)+1 - 0, amp=1.,radius=10.) Explanation: Left: a point source image, a simple image of a pixel with intensity 1, progressively offset from the centre. Right: the spatial frequency phase of the point source image. The further the point source is from the centre the faster the phase rotates, one would also say that the fringe frequency increases. By fixing the distance of the point source to the phase centre but rotation the position around a circle we see that the frequency does not change but the direction of the fringe changes. Note, the circle is drawn on the plot for reference, but is not part of the image. End of explanation def multipleSourcesFFT(imgSize, pos, amp): img = np.zeros((imgSize+1, imgSize+1)) #odd sized array such that there is a central pixel for p,a in zip(pos, amp): img[p[0], p[1]] = a #make the central pixel have an intensity of 1 fftImg = np.fft.fft2(np.fft.fftshift(img)) #compute the Fourier transform of the image fig, axes = plt.subplots(figsize=(16,8)) plt.subplot(1,2,1) plt.title('Image') plt.imshow(img, interpolation='nearest') plt.set_cmap('gray') plt.colorbar(shrink=0.5) plt.subplot(1,2,2) plt.title('Fourier Transform (phase)') plt.imshow(np.angle(fftImg)) plt.set_cmap('hsv') plt.colorbar(shrink=0.5) #amplitudes are all 1. print 'FFT Max:', np.max(np.abs(fftImg)), 'Min:', np.min(np.abs(fftImg)) imgSize = 128 multipleSourcesFFT(imgSize, [[64, 65], [90,65]], [1., 1.]) multipleSourcesFFT(imgSize, [[65, 64], [65,80]], [1., 1.]) multipleSourcesFFT(imgSize, [[64, 65], [90,80]], [1., 1.]) Explanation: Left: a point source image, a simple image of a pixel with intensity 1, rotated around a 10 pixel radius circle. The blue circle is plotted for reference. Right: the spatial frequency phase of the point source image. In the top plot where the point source is 10 pixels above the centre has a similar phase plot to that of the last figure where the source is 10 pixels below the centre, but looking carefully the phase is rotating in the opposite direction. For every point in the image there is another point which makes up a conjugated pair. Now from these simple image examples we can see something about how sparsity works in different domains. 
In the image domain a single point source can be described by only a position and intensity value, i.e. we would say the image is sparse. But, transforming that image in the spatial frequency domain means that we now need to know the phase and amplitude of every pixel. That is, the information has been spread out over every pixel, in the spatial frequency domain the point source signal is now dense. This comes about because a true point-source, i.e. a delta function, can only be reconstructed by an infinite set of complex sinusoidal waves. We have been cheating a bit here by using a point source which actually has a size in these examples. That is, the point source has a size which is a pixel in length in the $x$ and $y$ direction. As a hint about what lays in store, if we know that the signal we are interested in is mostly point source-like then we know that the signal will be spread across the spatial frequency domain and we don't need to know exactly where that signal is. We don't even need to fully measure the spatial frequency domain if we make assumptions about what the source is to reconstruct the the image of the sky we are interested in. But, this is for later. 5.1.4 The Fourier Transform of Two Point Sources<a id='imaging:sec:ft2Point'></a> Now that we have looked at the simple case of a single point source image and how moving that source around the image changes the visibilities, we will get a bit more complex by introducing a second point source. A small step, but that is what we need at the moment. The spatial frequency phases of two point sources, each with unity amplitude, is the average of the phases of each individual point source. End of explanation imgSize = 128 multipleSourcesFFT(imgSize, [[64, 65], [90,80]], [1., 0.1]) multipleSourcesFFT(imgSize, [[64, 65], [90,80]], [1., 1.0]) multipleSourcesFFT(imgSize, [[64, 65], [90,80]], [1., 10.]) Explanation: Left: position of two point sources, each with an amplitude of 1. Right: the resulting spatial frequency phases. From these figures we can see there are two different sinusoidal waves in the phase plots, The first is a slowly varying wave going top to bottom due to the source near the centre of the image. The second wave changes frequency and direction based on the position of the second source. From the first row we can see that because the sources are aligned in $x$, then the phase of both goes top to bottom, but at different frequencies. Similarly, in the second row, the sources are aligned in $y$ and thus both phases go left to right. In the third row we see that the phase is at an angle because of the second source position. Now, the phases are not added equally, the average phase value at each spatial frequency is a weighted average of each source based on the amplitude of that source. In the previous examples both sources had the same amplitude so the phases were simply an average. End of explanation multipleSourcesFFT(imgSize, [[64,65], [90,80], [40,10], [60,50], [20,80]], [.2, 0.3, 0.5, 0.1, 0.7]) Explanation: Left: two point sources at fixed positions with the central source having unity amplitude and the further out source having an amplitude of 0.1, 1, 10 for each row. Right: the resulting phases which are an amplitude weighted average of the phases of each individual source. Once we start including more than two sources in the image we can see the resulting spatial frequency plots can become very complex. 
End of explanation fftDuck = np.fft.fftshift(np.fft.fft2(gDuck)) def reconstructImage(vis, nsamples): randomly select a few values from the spatial frequency (visibility) domain and reconstruct the image with those samples. To do a full reconstruction of the image, nsamples needs to be larger than the total number of pixels in the image because the np.random.randint() chooses with replacement so positions will be doubles counted subVis = np.zeros_like(vis) ypos = np.random.randint(0, vis.shape[0]-1, size=nsamples) xpos = np.random.randint(0, vis.shape[1]-1, size=nsamples) subVis[ypos, xpos] = vis[ypos, xpos] #insert a subset of visibilities newImg = np.abs(np.fft.ifft2(np.fft.fftshift(subVis))) fig, axes = plt.subplots(figsize=(16,8)) plt.subplot(1,2,1) plt.title('Sampled Visibilities') plt.imshow(np.abs(subVis).astype(bool), interpolation='nearest') plt.set_cmap('gray') plt.subplot(1,2,2) plt.title('Reconstructed Image') plt.imshow(newImg) plt.set_cmap('gray') reconstructImage(fftDuck, 1e1) reconstructImage(fftDuck, 1e3) reconstructImage(fftDuck, 1e5) reconstructImage(fftDuck, 1e6) Explanation: Left: five point sources at fixed positions with the central source having an amplitude of $0.2$ and the further out sources having an amplitude of $0.3$, $0.5$, $0.1$, and $0.7$. Right: the resulting phases which are an amplitude weighted average of the phases of each individual source. 5.1.5 Building up an Image from Sampled Visibilities<a id='imaging:sec:ftsampVis'></a> Now, since we stated at the beginning of this section that the spatial domain and spatial frequency domain are Fourier pairs we can think about these point source examples as a sampling in the spatial frequency (visibility) domain and the resulting Fourier transformed signal as an image using this sampling. This may sound odd, but let us use an example to show what this means. Starting from the Fourier transformed duck image shown at the beginning of the section, we can construct a new image by selecting a few sample points with what is called a sampling function (more on this in the next in $\S$ 5.2 &#10142;) in the spatial frequency domain and creating a new image. End of explanation
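As a closing cross-check on the point-source examples above: the fringe behaviour is exactly the Fourier shift theorem, and it can be verified numerically. The short sketch below is not part of the original notebook and uses the plain numpy.fft origin convention (no fftshift), so the offset is measured from pixel (0, 0) rather than from the image centre.
import numpy as np

N = 128
dy, dx = 5, -3                     # offset of the point source from the array origin
img = np.zeros((N, N))
img[dy % N, dx % N] = 1.0          # a single unit-intensity pixel

vis = np.fft.fft2(img)             # the 'visibilities' of the point source

# Shift theorem: the FT of an offset delta is a unit-amplitude phase ramp
# whose slope is set by the offset.
fy = np.fft.fftfreq(N)[:, None]    # cycles per pixel along y
fx = np.fft.fftfreq(N)[None, :]    # cycles per pixel along x
expected = np.exp(-2j * np.pi * (fy * dy + fx * dx))

print(np.allclose(vis, expected))      # True: the phase ramp matches the offset
print(np.allclose(np.abs(vis), 1.0))   # True: the amplitude is flat, as printed above
Doubling the offset doubles the fringe frequency, which is the behaviour seen in the phase plots above.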
6,441
Given the following text description, write Python code to implement the functionality described below step by step Description: Plotting One of the biggest advantages of the notebook format is that you can mix code and plots. In this example we'll draw some simple mathematical functions using matplotlib. Setup The first thing you need to do is tell the kernel that you would like to use matplotlib to make plots. You do this using the magic invocation Step1: Draw a plot Next we'll draw a plot using the standard matplotlib commands. We'll use the pyplot module which has a similar interface to MATLAB and Octave for plotting. See the official matplotlib page for documentation on matplotlib and examples. Step3: Prettier plot Matplotlib has many options for making pretty plots. Here is an example of a simulated damped oscillator. In this example we're using numpy to generate and keep track of the numerical data. Demo source Step6: Size To change the size of plots you can use the regular matplotlib commands. Here is a very small cosine plot. Step8: Large inline plots are scaled down to fit the notebook width. Plots smaller than the width are not resized. Windowed Plots You can also let matplotlib create plots in a new window so that you can independently resize and interact with the plot. Here is the cosine example in a new window.
Python Code: %matplotlib inline Explanation: Plotting One of the biggest advantages of the notebook format is that you can mix code and plots. In this example we'll draw some simple mathematical functions using matplotlib. Setup The first thing you need to do is tell the kernel that you would like to use matplotlib to make plots. You do this using the magic invocation: %matplotlib inline The inline option means that plots will appear directly in the output cell of the notebook. Later in this notebook we'll try other options. End of explanation from matplotlib import pyplot as plt x = [1, 2, 3] y = [2, 4, 3] plt.plot(x, y); Explanation: Draw a plot Next we'll draw a plot using the standard matplotlib commands. We'll use the pyplot module which has a similar interface to MATLAB and Octave for plotting. See the official matplotlib page for documentation on matplotlib and examples. End of explanation Simple demo with multiple subplots. import numpy as np import matplotlib.pyplot as plt x1 = np.linspace(0.0, 5.0) x2 = np.linspace(0.0, 2.0) y1 = np.cos(2 * np.pi * x1) * np.exp(-x1) y2 = np.cos(2 * np.pi * x2) plt.subplot(2, 1, 1) plt.plot(x1, y1, 'yo-') plt.title('A tale of 2 subplots') plt.ylabel('Damped oscillation') plt.subplot(2, 1, 2) plt.plot(x2, y2, 'r.-') plt.xlabel('time (s)') plt.ylabel('Undamped') plt.show() Explanation: Prettier plot Matplotlib has many options for making pretty plots. Here is an example of a simulated damped oscillator. In this example we're using numpy to generate and keep track of the numerical data. Demo source: http://matplotlib.org/users/screenshots.html End of explanation Tiny cosine plot. import numpy as np from matplotlib import pyplot as plt x = np.linspace(0, 10) y = np.cos(x) plt.figure(figsize=(2, 2)) plt.plot(x, y); Large cosine plot. import numpy as np from matplotlib import pyplot as plt x = np.linspace(0, 10) y = np.cos(x) plt.figure(figsize=(10, 10)) plt.plot(x, y); Explanation: Size To change the size of plots you can use the regular matplotlib commands. Here is a very small cosine plot. End of explanation Windowed cosine plot. %matplotlib osx import numpy as np from matplotlib import pyplot as plt x = np.linspace(0, 10) y = np.cos(x) #plt.figure(figsize=(10, 10)) plt.plot(x, y); Explanation: Large inline plots are scaled down to fit the notebook width. Plots smaller than the width are not resized. Windowed Plots You can also let matplotlib create plots in a new window so that you can independently resize and interact with the plot. Here is the cosine example in a new window. End of explanation
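Beyond inline and windowed output, a figure can also be written straight to a file. The sketch below is not part of the original notebook; it uses the object-oriented interface, which tends to scale better than the pyplot state machine once a figure has several axes, and the output filename is arbitrary.
import numpy as np
from matplotlib import pyplot as plt

x = np.linspace(0, 10, 200)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.plot(x, np.sin(x))
ax1.set_title('sine')
ax2.plot(x, np.cos(x), 'r.-')
ax2.set_title('cosine')
fig.tight_layout()

# Write the figure to disk instead of (or as well as) showing it in the notebook.
fig.savefig('sine_cosine.png', dpi=150)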
6,442
Given the following text description, write Python code to implement the functionality described below step by step Description: The following dictionary contains hand-curated labeled domains. Step1: The following scripts gathers generic email hosts from a list provided on a public Gist. Step2: <hr>
Python Code: domain_categories = { "generic" : [ "gmail.com", "hotmail.com", "gmx.de", "gmx.net", "gmx.at", "earthlink.net", "comcast.net", "yahoo.com", "email.com" ], "personal" : [ "mnot.net", "henriknordstrom.net", "adambarth.com", "brianrosen.net", "taugh.com", "csperkins.org", "sandelman.ca", "lowentropy.net", "gbiv.com" ], "company" : [ "apple.com", "cisco.com", "chromium.org", "microsoft.com", "oracle.com", "google.com", "facebook.com", "intel.com", "verizon.com", "verizon.net", "salesforce.com", "cloudflare.com", "broadcom.com", "juniper.net", "netflix.com", "akamai.com", "us.ibm.com", "qualcomm.com", "siemens.com", "boeing.com", "sandvine.com", "marconi.com", "trilliant.com", "huawei.com", # chinese "zte.com.cn" # chinese "chinamobile.com", "chinaunicom.cn", "chinatelecom.cn", "cnnic.cn" # registry ], # from R.N. "academic" : [ "caict.ac.cn", # chinese "scu.edu.cn", # chinese "tongji.edu.cn", # chinese "mit.edu", "ieee.org", "acm.org", "berkeley.edu", "harvard.edu", "lbl.gov" ], "sdo" : [ "isoc.org", "icann.org", "amsl.com", "iana.org", "tools.ietf.org", "w3.org" ] } df = pd.DataFrame.from_records(itertools.chain( *[[{'domain' : dom, 'category' : cat} for dom in domain_categories[cat]] for cat in domain_categories] )) Explanation: The following dictionary contains hand-curated labeled domains. End of explanation !wget https://gist.githubusercontent.com/ammarshah/f5c2624d767f91a7cbdc4e54db8dd0bf/raw/660fd949eba09c0b86574d9d3aa0f2137161fc7c/all_email_provider_domains.txt aepd = open("all_email_provider_domains.txt").read().split("\n") df = df.append([{'domain' : d, 'category' : 'generic'} for d in aepd]).drop_duplicates(subset=['domain']) df.to_csv("domain_categories.csv", index=False) Explanation: The following scripts gathers generic email hosts from a list provided on a public Gist. End of explanation pd.read_csv("domain_categories.csv", index_col='domain') Explanation: <hr> End of explanation
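One possible downstream use of the saved mapping, sketched below and not part of the original notebook: look up the category for an arbitrary email address, falling back to a default when the domain is unknown. The example addresses are made up.
import pandas as pd

lookup = pd.read_csv("domain_categories.csv").set_index("domain")["category"]

def categorize_email(address, default="unknown"):
    domain = address.rsplit("@", 1)[-1].strip().lower()
    return lookup.get(domain, default)

print(categorize_email("[email protected]"))        # generic
print(categorize_email("[email protected]"))    # company
print(categorize_email("[email protected]"))   # unknown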
6,443
Given the following text description, write Python code to implement the functionality described below step by step Description: Prediction metrics This module provides a set of metrics to evaluate the quality of predictions of a model. A typical function will take a set of "prediction" and "observation" values and use them to calculate the desired metric, unless noted otherwise. Step1: Continuous variables Step2: Binary classification Create the sample data for binary classifier metrics Step3: Run the Binary Classifier metrics function and View the True Positive Rate and the False Positive Rate Step4: View all metrics at a given threshold value Step5: Run the Area Under ROC curve function Step6: Multi-class classification Create the sample data for confusion matrix.
Python Code: %load_ext sql # %sql postgresql://[email protected]:55000/madlib %sql postgresql://fmcquillan@localhost:5432/madlib %sql select madlib.version(); Explanation: Prediction metrics This module provides a set of metrics to evaluate the quality of predictions of a model. A typical function will take a set of "prediction" and "observation" values and use them to calculate the desired metric, unless noted otherwise. End of explanation %%sql DROP TABLE IF EXISTS test_set; CREATE TABLE test_set( pred FLOAT8, -- predicted values obs FLOAT8 -- actual observed values ); INSERT INTO test_set VALUES (37.5,53.1), (12.3,34.2), (74.2,65.4), (91.1,82.1); SELECT * FROM test_set; %%sql DROP TABLE IF EXISTS table_out; SELECT madlib.mean_abs_error( 'test_set', 'table_out', 'pred', 'obs'); SELECT * FROM table_out; %%sql DROP TABLE IF EXISTS table_out; SELECT madlib.mean_abs_perc_error( 'test_set', 'table_out', 'pred', 'obs'); SELECT * FROM table_out; %%sql DROP TABLE IF EXISTS table_out; SELECT madlib.mean_perc_error( 'test_set', 'table_out', 'pred', 'obs'); SELECT * FROM table_out; %%sql DROP TABLE IF EXISTS table_out; SELECT madlib.mean_squared_error( 'test_set', 'table_out', 'pred', 'obs'); SELECT * FROM table_out; %%sql DROP TABLE IF EXISTS table_out; SELECT madlib.r2_score( 'test_set', 'table_out', 'pred', 'obs'); SELECT * FROM table_out; %%sql DROP TABLE IF EXISTS table_out; SELECT madlib.adjusted_r2_score( 'test_set', 'table_out', 'pred', 'obs', 3, 100); SELECT * FROM table_out; Explanation: Continuous variables End of explanation %%sql DROP TABLE IF EXISTS test_set; CREATE TABLE test_set AS SELECT ((a*8)::integer)/8.0 pred, -- prediction probability TRUE ((a*0.5+random()*0.5)>0.5) obs -- actual observations FROM (select random() as a from generate_series(1,100)) x; SELECT * FROM test_set; Explanation: Binary classification Create the sample data for binary classifier metrics: End of explanation %%sql DROP TABLE IF EXISTS table_out; SELECT madlib.binary_classifier( 'test_set', 'table_out', 'pred', 'obs'); SELECT threshold, tpr, fpr FROM table_out ORDER BY threshold; Explanation: Run the Binary Classifier metrics function and View the True Positive Rate and the False Positive Rate: End of explanation %%sql SELECT * FROM table_out WHERE threshold=0.5; Explanation: View all metrics at a given threshold value: End of explanation %%sql DROP TABLE IF EXISTS table_out; SELECT madlib.area_under_roc( 'test_set', 'table_out', 'pred', 'obs'); SELECT * FROM table_out; Explanation: Run the Area Under ROC curve function: End of explanation %%sql DROP TABLE IF EXISTS test_set; CREATE TABLE test_set AS SELECT (x+y)%5+1 AS pred, (x*y)%5 AS obs FROM generate_series(1,5) x, generate_series(1,5) y; SELECT * FROM test_set; %%sql DROP TABLE IF EXISTS table_out; SELECT madlib.confusion_matrix( 'test_set', 'table_out', 'pred', 'obs'); SELECT * FROM table_out ORDER BY class; Explanation: Multi-class classification Create the sample data for confusion matrix. End of explanation
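As a quick sanity check on figures like the ones above, the same binary-classifier quantities can be recomputed outside the database with plain NumPy. This sketch is not part of the original notebook and uses a small made-up set of predictions and observations.
import numpy as np

pred = np.array([0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1])               # predicted probabilities
obs = np.array([True, True, False, True, False, True, False, False])    # observed labels

threshold = 0.5
pred_pos = pred >= threshold

tp = np.sum(pred_pos & obs)      # true positives
fp = np.sum(pred_pos & ~obs)     # false positives
fn = np.sum(~pred_pos & obs)     # false negatives
tn = np.sum(~pred_pos & ~obs)    # true negatives

tpr = tp / (tp + fn)             # true positive rate (recall)
fpr = fp / (fp + tn)             # false positive rate
precision = tp / (tp + fp)

print(tp, fp, fn, tn)
print("TPR=%.3f FPR=%.3f precision=%.3f" % (tpr, fpr, precision))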
6,444
Given the following text description, write Python code to implement the functionality described below step by step Description: Exploratory Data Analysis with Python We will explore the NYC MTA turnstile dataset. These data files are from the New York Subway. It tracks the hourly entries and exits to turnstiles by day in the subway system. The data files are available on MTA's website. Step1: Our first step will be to create a dictionary of which the key will be the columns representing a turnstile (C/A, UNIT, SCP, STATION) and the value will be a list of the entries for that turnstile. It should look like so Step2: Now let's turn this into a time series. This time our data will be comprised of just the point in time and the cumulative count of entries. It should look like something like Step3: These counts are grouped by dataset file (e.g., by week). Let's make it a high-res timeseries by combining multiple weeks. Step4: This seems to be a good time to take a break and ignore January, March, weekends and 2016 NYC holidays (Feb 12th and 15th). The sooner we do it the faster our code will execute. Step5: Let's also further analyze the timestamps to see if we can easily filter entries between 8am and 8pm. Again, the sooner we do it the faster our code will execute. Step6: Unfortunately, with only a few turnstile samples we can see that the timestamps in which data was recorded is not regular, with some timesamples being seemingly random. Since each station has multiple turnstiles it makes it hard even to compile timestamp data on a station basis. Let's ignore the timestamps going forward and work with daily entries. We will have a single value for a single day, which is not cumulative counts but the total number of passengers that entered through this turnstile on this day. Step7: So far we've been operating on a single turnstile level, let's combine turnstiles in the same ControlArea/Unit/Station combo. Step8: Similarly, we will combine everything in each station, and come up with a time series for each station by adding up all the turnstiles in a station. Step9: We'll now make a list of the average ridership values per station and plot it to get an idea about its distribution among different stations. Step10: We can see that most stations have a small traffic and the histogram bins for large traffic volumes have small bars. Let's plot a histogram with logarithmic scale instead. Step11: Since we are interested in filtering out at least 90% of stations let's select the top 30. Step12: This could be our MVP. Let's save the traffic report as a binary pickle file and take a break to figure out how to improve the recommendations.
Python Code: !pip install wget import os, wget url_template = "http://web.mta.info/developers/data/nyct/turnstile/turnstile_%s.txt" for date in ['160206', '160213', '160220', '160227', '160305']: url = url_template % date if os.path.isfile('data/turnstile_{}.txt'.format(date)): print(date, 'file already downloaded') else: wget.download(url, out='data/') print(date, 'file downloaded') Explanation: Exploratory Data Analysis with Python We will explore the NYC MTA turnstile dataset. These data files are from the New York Subway. It tracks the hourly entries and exits to turnstiles by day in the subway system. The data files are available on MTA's website. End of explanation import csv, glob from collections import defaultdict def read_csv(csv_file_name): turnstile_to_count_reading = defaultdict(list) with open(csv_file_name, 'r') as csv_file: mta_reader = csv.reader(csv_file) for i, row in enumerate(mta_reader): if i == 0: continue turnstile_info = tuple(row[:4]) count_reading = row[4:] turnstile_to_count_reading[turnstile_info].append(count_reading) return turnstile_to_count_reading weekly_data_dicts = [read_csv(csvfile) for csvfile in glob.glob('data/turnstile_*.txt')] sample_dict = list(weekly_data_dicts[0].items())[:1] sample_dict Explanation: Our first step will be to create a dictionary of which the key will be the columns representing a turnstile (C/A, UNIT, SCP, STATION) and the value will be a list of the entries for that turnstile. It should look like so: { ('A002','R051','02-00-00','LEXINGTON AVE'): [ ['NQR456', 'BMT', '01/03/2015', '03:00:00', 'REGULAR', '0004945474', '0001675324'], ['NQR456', 'BMT', '01/03/2015', '07:00:00', 'REGULAR', '0004945478', '0001675333'], ['NQR456', 'BMT', '01/03/2015', '11:00:00', 'REGULAR', '0004945515', '0001675364'], ... ] } End of explanation from datetime import datetime from dateutil.parser import parse def convert_week_data_to_time_series(week_data_dict): turnstile_to_time_series = defaultdict(list) for i, (turnstile, row_data) in enumerate(week_data_dict.items()): if i % 200 == 0: print('Processing turnstile', turnstile) for lines, division, datestr, timestr, event, cum_entries, cum_exits in row_data: timestamp = parse('%sT%s' % (datestr, timestr)) turnstile_to_time_series[turnstile].append([timestamp, int(cum_entries)]) return turnstile_to_time_series weekly_time_series = list(map(convert_week_data_to_time_series, weekly_data_dicts)) sample_turnstile_to_time_series = list(weekly_time_series[0].items())[:2] sample_turnstile_to_time_series Explanation: Now let's turn this into a time series. This time our data will be comprised of just the point in time and the cumulative count of entries. It should look like something like: { ('A002','R051','02-00-00','LEXINGTON AVE'): [ [datetime.datetime(2013, 3, 2, 3, 0), 3788], [datetime.datetime(2013, 3, 2, 7, 0), 2585], [datetime.datetime(2013, 3, 2, 12, 0), 10653], [datetime.datetime(2013, 3, 2, 17, 0), 11016], [datetime.datetime(2013, 3, 2, 23, 0), 10666], [datetime.datetime(2013, 3, 3, 3, 0), 10814], [datetime.datetime(2013, 3, 3, 7, 0), 10229], ... ], ... 
} End of explanation def combine_multiple_weeks_into_single_high_res_timeseries(weekly_time_series): combined_time_series = defaultdict(list) for turnstile_to_weeklong_time_series in weekly_time_series: for turnstile, weeklong_time_series in turnstile_to_weeklong_time_series.items(): combined_time_series[turnstile] += weeklong_time_series # It's already sorted due to the nature of the files return combined_time_series turnstile_to_full_time_series = combine_multiple_weeks_into_single_high_res_timeseries( weekly_time_series) sample_turnstile_to_full_time_series = list(turnstile_to_full_time_series.items())[:2] sample_turnstile_to_full_time_series Explanation: These counts are grouped by dataset file (e.g., by week). Let's make it a high-res timeseries by combining multiple weeks. End of explanation feb_nyc_holidays = [12, 15] removed = 0 for turnstile, turnstile_data in turnstile_to_full_time_series.items(): # iterate over a copy of the list in order to be able to remove items from the original for timestamp, cum_entries in list(turnstile_data): if timestamp.month != 2 or timestamp.weekday() >= 5 or timestamp.day in feb_nyc_holidays: if not (timestamp.month == 1 and timestamp.day == 31): # leave the last of january in order to be able to make the cumulative count turnstile_data.remove([timestamp, cum_entries]) removed = removed + 1 print(removed) Explanation: This seems to be a good time to take a break and ignore January, March, weekends and 2016 NYC holidays (Feb 12th and 15th). The sooner we do it the faster our code will execute. End of explanation turnstiles_timestamps = dict() for turnstile, turnstile_data in turnstile_to_full_time_series.items(): timestamps_set = set() for timestamp, cum_entries in list(turnstile_data): timestamps_set.add(timestamp.time()) turnstiles_timestamps[turnstile] = timestamps_set turnstiles_timestamps_items_list = list(turnstiles_timestamps.items()) n_turnstiles = len(turnstiles_timestamps_items_list) n_samples = 4 sample_turnstiles_timestamps = [turnstiles_timestamps_items_list[i] for i in range(0, n_turnstiles - 1, n_turnstiles // n_samples)] sample_turnstiles_timestamps Explanation: Let's also further analyze the timestamps to see if we can easily filter entries between 8am and 8pm. Again, the sooner we do it the faster our code will execute. End of explanation sample_turnstile_to_full_time_series = list(turnstile_to_full_time_series.items())[:2] sample_turnstile_to_full_time_series from itertools import groupby def count_within_normal_bounds(count): if count is None: return True else: return 10000 > count >= 0 def convert_time_series_to_daily(high_res_time_series): daily_time_series = [] def day_of_timestamp(time_series_entry): timestamp, tot_entries = time_series_entry return timestamp.date() # groupby() requires data to be sorted. It is sorted already here. count_on_previous_day = None for day, entries_on_this_day in groupby(high_res_time_series, key=day_of_timestamp): # get the maximum cumulative count among the entries on this day cum_entry_count_on_day = max([count for time, count in entries_on_this_day]) # skip the first entry if we don't know the previous day if count_on_previous_day is None: daily_entries = None else: daily_entries = cum_entry_count_on_day - count_on_previous_day # Save today's count for tomorrow's calculation count_on_previous_day = cum_entry_count_on_day # Only append if the cumulative increased. # Otherwise there is something wrong in the data - skip with a warning. 
if count_within_normal_bounds(daily_entries): daily_time_series.append((day, daily_entries)) else: print('WARNING. Abnormal entry count found on day %s: %s' % (day, daily_entries)) daily_time_series.append((day, None)) return daily_time_series def convert_turnstile_to_high_res_time_series_to_daily(turnstile_to_time_series): turnstile_to_daily_time_series = {} for i, (turnstile, time_series) in enumerate(turnstile_to_time_series.items()): print('Processing turnstile', turnstile) turnstile_to_daily_time_series[turnstile] = convert_time_series_to_daily(time_series) return turnstile_to_daily_time_series turnstile_to_daily_time_series = convert_turnstile_to_high_res_time_series_to_daily( turnstile_to_full_time_series) turnstile_to_daily_time_series[('N300', 'R113', '01-00-04', '7 AV')] Explanation: Unfortunately, with only a few turnstile samples we can see that the timestamps in which data was recorded is not regular, with some timesamples being seemingly random. Since each station has multiple turnstiles it makes it hard even to compile timestamp data on a station basis. Let's ignore the timestamps going forward and work with daily entries. We will have a single value for a single day, which is not cumulative counts but the total number of passengers that entered through this turnstile on this day. End of explanation from collections import Counter def booth_of_a_time_series_item(item): turnstile, time_series = item control_area, unit, device_id, station = turnstile return (control_area, unit, station) def reduce_turnstile_time_series_to_booths(turnstile_to_daily_time_series): turnstile_time_series_items = sorted(turnstile_to_daily_time_series.items()) booth_to_time_series = {} for booth, item_list_of_booth in groupby(turnstile_time_series_items, key=booth_of_a_time_series_item): daily_counter = Counter() for turnstile, time_series in item_list_of_booth: for day, count in time_series: if count is not None: daily_counter[day] += count booth_to_time_series[booth] = sorted(daily_counter.items()) return booth_to_time_series booth_to_daily_time_series = reduce_turnstile_time_series_to_booths(turnstile_to_daily_time_series) booth_to_daily_time_series[('N300', 'R113', '7 AV')] Explanation: So far we've been operating on a single turnstile level, let's combine turnstiles in the same ControlArea/Unit/Station combo. End of explanation def station_of_a_booth(booth): control_area, unit, station = booth return station def station_of_a_time_series_item(item): booth, time_series = item return station_of_a_booth(booth) def reduce_booth_time_series_to_stations(booth_to_daily_time_series): booth_time_series_items = sorted(booth_to_daily_time_series.items()) station_to_time_series = {} for station, item_list_of_station in groupby(booth_time_series_items, key=station_of_a_time_series_item): daily_counter = Counter() for turnstile, time_series in item_list_of_station: for day, count in time_series: daily_counter[day] += count station_to_time_series[station] = sorted(daily_counter.items()) return station_to_time_series station_to_daily_time_series = reduce_booth_time_series_to_stations(booth_to_daily_time_series) station_to_daily_time_series['7 AV'] Explanation: Similarly, we will combine everything in each station, and come up with a time series for each station by adding up all the turnstiles in a station. 
End of explanation feb_business_days = len(station_to_daily_time_series['7 AV']) def station_time_series_item_to_station_avg_traffic(item): station, time_series = item avg_traffic = sum([count for day, count in time_series]) // feb_business_days return avg_traffic, station traffic = list(map(station_time_series_item_to_station_avg_traffic, station_to_daily_time_series.items())) traffic_report = sorted(traffic, reverse=True) for avg_traffic, station in traffic_report[:30]: print('{:<18} {:.0f}'.format(station, avg_traffic)) %matplotlib inline import matplotlib.pyplot as plt import seaborn as sns avg_ridership_counts = [ridership for ridership, station in traffic_report] fig, ax = plt.subplots(figsize=(20, 10)) sns.distplot(avg_ridership_counts, bins=range(0, 165000, 5000), ax=ax) ax.set_xlim(0, 165000) Explanation: We'll now make a list of the average ridership values per station and plot it to get an idea about its distribution among different stations. End of explanation import math log_counts = [] for count in avg_ridership_counts: try: log_result = math.log10(count) except: pass log_counts.append(log_result) fig, ax = plt.subplots(figsize=(20, 10)) sns.distplot(log_counts, bins=15) def log_count_to_label(log_count): if log_count <= 3: return '{0:.0f} Hundred'.format(10 ** (log_count)) else: return '{0:.1f} Thousand'.format(10 ** (log_count-3)) tick_labels = map(log_count_to_label, bins) ticks = plt.xticks(bins, tick_labels, rotation=70) plt.xlabel('Average Ridership per Day (log 10)') plt.ylabel('Number of Stations with this Total Count') plt.title('Distribution of ridership among NYC Subway Stations') plt.savefig('figures/log.png', bbox_inches='tight') Explanation: We can see that most stations have a small traffic and the histogram bins for large traffic volumes have small bars. Let's plot a histogram with logarithmic scale instead. End of explanation import pandas as pd top_stations = traffic_report[:30] avgs, stations = zip(*top_stations) indices = range(len(avgs)) df = pd.DataFrame([stations, avgs]).T df.columns = ['Station', 'Average Daily Entries'] df.head(20) fig, ax = plt.subplots(figsize=(20, 10)) sns.barplot(x='Station', y='Average Daily Entries', data=df, palette=sns.diverging_palette(255, 240, n=len(df))) ticks = plt.xticks(indices, stations, rotation = 70) plt.title('Top 30 NYC Subway Stations by Traffic in February Weekdays') plt.savefig('figures/mvp.png', bbox_inches='tight') Explanation: Since we are interested in filtering out at least 90% of stations let's select the top 30. End of explanation import pandas as pd reversed_traffic_report = [reversed(t) for t in traffic_report] df_to_pickle = pd.DataFrame(reversed_traffic_report, columns=['station', 'avg_daily_traffic_feb']) df_to_pickle.head() df_to_pickle.to_pickle('pickle/stations_traffic.p') Explanation: This could be our MVP. Let's save the traffic report as a binary pickle file and take a break to figure out how to improve the recommendations. End of explanation
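The same booth-to-station roll-up can also be expressed with pandas. The sketch below is an optional alternative, not part of the original notebook, and assumes the booth_to_daily_time_series dictionary built in the cells above.
import pandas as pd

# Flatten {(control_area, unit, station): [(day, count), ...]} into a tidy frame.
rows = [
    (booth[2], day, count)
    for booth, series in booth_to_daily_time_series.items()
    for day, count in series
]
df_booths = pd.DataFrame(rows, columns=["station", "day", "entries"])

# Sum all booths of a station for each day, mirroring reduce_booth_time_series_to_stations().
station_daily = df_booths.groupby(["station", "day"])["entries"].sum()
print(station_daily.loc["7 AV"].head())
Either route gives the same per-station daily totals as the dictionary-based version above.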
6,445
Given the following text description, write Python code to implement the functionality described below step by step Description: Catherine Devlin RDBMS were so last year... last year SQLAlchemy ~~object-relational mapper~~ RDBMS toolbox ddlgenerator Import data and define tables Step1: ipython_sql Access the data Step2: rdbms-subsetter Get a consistent test database.
Python Code: !head data/provinces.yaml !ddlgenerator -i -t postgresql data/provinces.yaml | head -20 # !ddlgenerator -i -t postgresql http://github.com/catherinedevlin/pycon2015_sqla_lightning/data/provinces.yaml !dropdb pycon !createdb pycon !ddlgenerator -i postgresql data/provinces.yaml | psql pycon | head -20 Explanation: Catherine Devlin RDBMS were so last year... last year SQLAlchemy ~~object-relational mapper~~ RDBMS toolbox ddlgenerator Import data and define tables End of explanation %load_ext sql %sql postgresql://:@/pycon %sql SELECT name, bird FROM provinces ORDER BY name %%sql SELECT c.name || ', ' || p.name AS name, c.population FROM cities c JOIN provinces p ON (c.provinces_id = p.provinces_id) ORDER BY c.population DESC LIMIT 10 cities = _ cities.DataFrame().plot(kind='bar', x='name') Explanation: ipython_sql Access the data End of explanation !dropdb pycon_test !createdb pycon_test !pg_dump --schema-only pycon | psql pycon_test !rdbms-subsetter -y postgresql://:@/pycon postgresql://:@/pycon_test 0.1 %%sql postgresql://:@/pycon_test SELECT c.name || ', ' || p.name AS name, c.population FROM cities c JOIN provinces p ON (c.provinces_id = p.provinces_id) ORDER BY c.population DESC Explanation: rdbms-subsetter Get a consistent test database. End of explanation
6,446
Given the following text description, write Python code to implement the functionality described below step by step Description: Integration Exercise 2 Imports Step1: Indefinite integrals Here is a table of definite integrals. Many of these integrals has a number of parameters $a$, $b$, etc. Find five of these integrals and perform the following steps Step2: Integral 1 $$ I_1 = \int_0^ a \sqrt{a^2 - x^2}dx = \frac{\pi a^2}{4} $$ Step3: Integral 2 $$ I_2 = \int_0^\frac{\pi}{2} \sin^2(x)dx = \frac{\pi}{4} $$ Step4: Integral 3 $$ I_3 = \int_0^{2\pi} \frac{dx}{a+b\sin(x)} = \frac{2\pi}{\sqrt{a^2-b^2}} $$ Step5: Integral 4 $$ I_4 = \int_0^\infty e^{-ax^2} dx = \frac{1}{2}\sqrt{\frac{\pi}{2a}} $$ Step6: Integral 5 $$ I_5 = \int_{-\infty}^\infty \frac{1}{\cosh x} dx = \pi $$
Python Code: %matplotlib inline import matplotlib.pyplot as plt import numpy as np import seaborn as sns from scipy import integrate Explanation: Integration Exercise 2 Imports End of explanation #I worked with James Amarel on this assignement def integrand(x, a): return 1.0/(x**2 + a**2) def integral_approx(a): # Use the args keyword argument to feed extra arguments to your integrand I, e = integrate.quad(integrand, 0, np.inf, args=(a,)) return I def integral_exact(a): return 0.5*np.pi/a print("Numerical: ", integral_approx(1.0)) print("Exact : ", integral_exact(1.0)) assert True # leave this cell to grade the above integral Explanation: Indefinite integrals Here is a table of definite integrals. Many of these integrals has a number of parameters $a$, $b$, etc. Find five of these integrals and perform the following steps: Typeset the integral using LateX in a Markdown cell. Define an integrand function that computes the value of the integrand. Define an integral_approx funciton that uses scipy.integrate.quad to peform the integral. Define an integral_exact function that computes the exact value of the integral. Call and print the return value of integral_approx and integral_exact for one set of parameters. Here is an example to show what your solutions should look like: Example Here is the integral I am performing: $$ I_1 = \int_0^\infty \frac{dx}{x^2 + a^2} = \frac{\pi}{2a} $$ End of explanation # YOUR CODE HERE #raise NotImplementedError() def integrand(x, a): return np.sqrt(a**2 - x**2) def integral_approx(a): I, e = integrate.quad(integrand, 0, a, args=(a,)) return I def integral_exact(a): return (np.pi*a**2)/4 print("Numerical: ", integral_approx(1.0)) print("Exact : ", integral_exact(1.0)) assert True # leave this cell to grade the above integral Explanation: Integral 1 $$ I_1 = \int_0^ a \sqrt{a^2 - x^2}dx = \frac{\pi a^2}{4} $$ End of explanation # YOUR CODE HERE #raise NotImplementedError() def integrand(x): return np.sin(x)**2 def integral_approx(): I, e = integrate.quad(integrand, 0, np.pi/2) return I def integral_exact(): return np.pi/4 print("Numerical: ", integral_approx()) print("Exact : ", integral_exact()) assert True # leave this cell to grade the above integral Explanation: Integral 2 $$ I_2 = \int_0^\frac{\pi}{2} \sin^2(x)dx = \frac{\pi}{4} $$ End of explanation # YOUR CODE HERE #raise NotImplementedError() def integrand(x,a,b): return 1.0/(a+b*np.sin(x)) def integral_approx(a,b): I, e = integrate.quad(integrand, 0, 2*np.pi, args=(a,b)) return I def integral_exact(a,b): return (2*np.pi)/np.sqrt(a**2-b**2) print("Numerical: ", integral_approx(10.0,1.0)) print("Exact : ", integral_exact(10.0,1.0)) assert True # leave this cell to grade the above integral Explanation: Integral 3 $$ I_3 = \int_0^{2\pi} \frac{dx}{a+b\sin(x)} = \frac{2\pi}{\sqrt{a^2-b^2}} $$ End of explanation # YOUR CODE HERE #raise NotImplementedError() def integrand(x, a): return np.e**(-1.0*a*(x**2)) def integral_approx(a): # Use the args keyword argument to feed extra arguments to your integrand I, e = integrate.quad(integrand, 0, np.inf, args=(a,)) return I def integral_exact(a): return 0.5*np.sqrt(np.pi/(a)) print("Numerical: ", integral_approx(1.0)) print("Exact : ", integral_exact(1.0)) assert True # leave this cell to grade the above integral Explanation: Integral 4 $$ I_4 = \int_0^\infty e^{-ax^2} dx = \frac{1}{2}\sqrt{\frac{\pi}{2a}} $$ End of explanation # YOUR CODE HERE #raise NotImplementedError() def integrand(x): return 1.0/np.cosh(x) def integral_approx(): # Use the args keyword argument to 
feed extra arguments to your integrand I, e = integrate.quad(integrand, -np.inf, np.inf) return I def integral_exact(): return np.pi print("Numerical: ", integral_approx()) print("Exact : ", integral_exact()) assert True # leave this cell to grade the above integral Explanation: Integral 5 $$ I_5 = \int_{-\infty}^\infty \frac{1}{\cosh x} dx = \pi $$ End of explanation
6,447
Given the following text description, write Python code to implement the functionality described below step by step Description: Image similarity estimation using a Siamese Network with a contrastive loss Author Step1: Hyperparameters Step2: Load the MNIST dataset Step3: Define training and validation sets Step5: Create pairs of images We will train the model to differentiate between digits of different classes. For example, digit 0 needs to be differentiated from the rest of the digits (1 through 9), digit 1 - from 0 and 2 through 9, and so on. To carry this out, we will select N random images from class A (for example, for digit 0) and pair them with N random images from another class B (for example, for digit 1). Then, we can repeat this process for all classes of digits (until digit 9). Once we have paired digit 0 with other digits, we can repeat this process for the remaining classes for the rest of the digits (from 1 until 9). Step6: We get Step7: Split the validation pairs Step8: Split the test pairs Step10: Visualize pairs and their labels Step11: Inspect training pairs Step12: Inspect validation pairs Step13: Inspect test pairs Step15: Define the model There are be two input layers, each leading to its own network, which produces embeddings. A Lambda layer then merges them using an Euclidean distance and the merged output is fed to the final network. Step18: Define the constrastive Loss Step19: Compile the model with the contrastive loss Step20: Train the model Step22: Visualize results Step23: Evaluate the model Step24: Visualize the predictions
Python Code: import random import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers import matplotlib.pyplot as plt Explanation: Image similarity estimation using a Siamese Network with a contrastive loss Author: Mehdi<br> Date created: 2021/05/06<br> Last modified: 2021/05/06<br> Description: Similarity learning using a siamese network trained with a contrastive loss. Introduction Siamese Networks are neural networks which share weights between two or more sister networks, each producing embedding vectors of its respective inputs. In supervised similarity learning, the networks are then trained to maximize the contrast (distance) between embeddings of inputs of different classes, while minimizing the distance between embeddings of similar classes, resulting in embedding spaces that reflect the class segmentation of the training inputs. Setup End of explanation epochs = 10 batch_size = 16 margin = 1 # Margin for constrastive loss. Explanation: Hyperparameters End of explanation (x_train_val, y_train_val), (x_test, y_test) = keras.datasets.mnist.load_data() # Change the data type to a floating point format x_train_val = x_train_val.astype("float32") x_test = x_test.astype("float32") Explanation: Load the MNIST dataset End of explanation # Keep 50% of train_val in validation set x_train, x_val = x_train_val[:30000], x_train_val[30000:] y_train, y_val = y_train_val[:30000], y_train_val[30000:] del x_train_val, y_train_val Explanation: Define training and validation sets End of explanation def make_pairs(x, y): Creates a tuple containing image pairs with corresponding label. Arguments: x: List containing images, each index in this list corresponds to one image. y: List containing labels, each label with datatype of `int`. Returns: Tuple containing two numpy arrays as (pairs_of_samples, labels), where pairs_of_samples' shape is (2len(x), 2,n_features_dims) and labels are a binary array of shape (2len(x)). num_classes = max(y) + 1 digit_indices = [np.where(y == i)[0] for i in range(num_classes)] pairs = [] labels = [] for idx1 in range(len(x)): # add a matching example x1 = x[idx1] label1 = y[idx1] idx2 = random.choice(digit_indices[label1]) x2 = x[idx2] pairs += [[x1, x2]] labels += [1] # add a non-matching example label2 = random.randint(0, num_classes - 1) while label2 == label1: label2 = random.randint(0, num_classes - 1) idx2 = random.choice(digit_indices[label2]) x2 = x[idx2] pairs += [[x1, x2]] labels += [0] return np.array(pairs), np.array(labels).astype("float32") # make train pairs pairs_train, labels_train = make_pairs(x_train, y_train) # make validation pairs pairs_val, labels_val = make_pairs(x_val, y_val) # make test pairs pairs_test, labels_test = make_pairs(x_test, y_test) Explanation: Create pairs of images We will train the model to differentiate between digits of different classes. For example, digit 0 needs to be differentiated from the rest of the digits (1 through 9), digit 1 - from 0 and 2 through 9, and so on. To carry this out, we will select N random images from class A (for example, for digit 0) and pair them with N random images from another class B (for example, for digit 1). Then, we can repeat this process for all classes of digits (until digit 9). Once we have paired digit 0 with other digits, we can repeat this process for the remaining classes for the rest of the digits (from 1 until 9). 
End of explanation x_train_1 = pairs_train[:, 0] # x_train_1.shape is (60000, 28, 28) x_train_2 = pairs_train[:, 1] Explanation: We get: pairs_train.shape = (60000, 2, 28, 28) We have 60,000 pairs Each pair contains 2 images Each image has shape (28, 28) Split the training pairs End of explanation x_val_1 = pairs_val[:, 0] # x_val_1.shape = (60000, 28, 28) x_val_2 = pairs_val[:, 1] Explanation: Split the validation pairs End of explanation x_test_1 = pairs_test[:, 0] # x_test_1.shape = (20000, 28, 28) x_test_2 = pairs_test[:, 1] Explanation: Split the test pairs End of explanation def visualize(pairs, labels, to_show=6, num_col=3, predictions=None, test=False): Creates a plot of pairs and labels, and prediction if it's test dataset. Arguments: pairs: Numpy Array, of pairs to visualize, having shape (Number of pairs, 2, 28, 28). to_show: Int, number of examples to visualize (default is 6) `to_show` must be an integral multiple of `num_col`. Otherwise it will be trimmed if it is greater than num_col, and incremented if if it is less then num_col. num_col: Int, number of images in one row - (default is 3) For test and train respectively, it should not exceed 3 and 7. predictions: Numpy Array of predictions with shape (to_show, 1) - (default is None) Must be passed when test=True. test: Boolean telling whether the dataset being visualized is train dataset or test dataset - (default False). Returns: None. # Define num_row # If to_show % num_col != 0 # trim to_show, # to trim to_show limit num_row to the point where # to_show % num_col == 0 # # If to_show//num_col == 0 # then it means num_col is greater then to_show # increment to_show # to increment to_show set num_row to 1 num_row = to_show // num_col if to_show // num_col != 0 else 1 # `to_show` must be an integral multiple of `num_col` # we found num_row and we have num_col # to increment or decrement to_show # to make it integral multiple of `num_col` # simply set it equal to num_row * num_col to_show = num_row * num_col # Plot the images fig, axes = plt.subplots(num_row, num_col, figsize=(5, 5)) for i in range(to_show): # If the number of rows is 1, the axes array is one-dimensional if num_row == 1: ax = axes[i % num_col] else: ax = axes[i // num_col, i % num_col] ax.imshow(tf.concat([pairs[i][0], pairs[i][1]], axis=1), cmap="gray") ax.set_axis_off() if test: ax.set_title("True: {} | Pred: {:.5f}".format(labels[i], predictions[i][0])) else: ax.set_title("Label: {}".format(labels[i])) if test: plt.tight_layout(rect=(0, 0, 1.9, 1.9), w_pad=0.0) else: plt.tight_layout(rect=(0, 0, 1.5, 1.5)) plt.show() Explanation: Visualize pairs and their labels End of explanation visualize(pairs_train[:-1], labels_train[:-1], to_show=4, num_col=4) Explanation: Inspect training pairs End of explanation visualize(pairs_val[:-1], labels_val[:-1], to_show=4, num_col=4) Explanation: Inspect validation pairs End of explanation visualize(pairs_test[:-1], labels_test[:-1], to_show=4, num_col=4) Explanation: Inspect test pairs End of explanation # Provided two tensors t1 and t2 # Euclidean distance = sqrt(sum(square(t1-t2))) def euclidean_distance(vects): Find the Euclidean distance between two vectors. Arguments: vects: List containing two tensors of same length. Returns: Tensor containing euclidean distance (as floating point value) between vectors. 
x, y = vects sum_square = tf.math.reduce_sum(tf.math.square(x - y), axis=1, keepdims=True) return tf.math.sqrt(tf.math.maximum(sum_square, tf.keras.backend.epsilon())) input = layers.Input((28, 28, 1)) x = tf.keras.layers.BatchNormalization()(input) x = layers.Conv2D(4, (5, 5), activation="tanh")(x) x = layers.AveragePooling2D(pool_size=(2, 2))(x) x = layers.Conv2D(16, (5, 5), activation="tanh")(x) x = layers.AveragePooling2D(pool_size=(2, 2))(x) x = layers.Flatten()(x) x = tf.keras.layers.BatchNormalization()(x) x = layers.Dense(10, activation="tanh")(x) embedding_network = keras.Model(input, x) input_1 = layers.Input((28, 28, 1)) input_2 = layers.Input((28, 28, 1)) # As mentioned above, Siamese Network share weights between # tower networks (sister networks). To allow this, we will use # same embedding network for both tower networks. tower_1 = embedding_network(input_1) tower_2 = embedding_network(input_2) merge_layer = layers.Lambda(euclidean_distance)([tower_1, tower_2]) normal_layer = tf.keras.layers.BatchNormalization()(merge_layer) output_layer = layers.Dense(1, activation="sigmoid")(normal_layer) siamese = keras.Model(inputs=[input_1, input_2], outputs=output_layer) Explanation: Define the model There are be two input layers, each leading to its own network, which produces embeddings. A Lambda layer then merges them using an Euclidean distance and the merged output is fed to the final network. End of explanation def loss(margin=1): Provides 'constrastive_loss' an enclosing scope with variable 'margin'. Arguments: margin: Integer, defines the baseline for distance for which pairs should be classified as dissimilar. - (default is 1). Returns: 'constrastive_loss' function with data ('margin') attached. # Contrastive loss = mean( (1-true_value) * square(prediction) + # true_value * square( max(margin-prediction, 0) )) def contrastive_loss(y_true, y_pred): Calculates the constrastive loss. Arguments: y_true: List of labels, each label is of type float32. y_pred: List of predictions of same length as of y_true, each label is of type float32. Returns: A tensor containing constrastive loss as floating point value. square_pred = tf.math.square(y_pred) margin_square = tf.math.square(tf.math.maximum(margin - (y_pred), 0)) return tf.math.reduce_mean( (1 - y_true) * square_pred + (y_true) * margin_square ) return contrastive_loss Explanation: Define the constrastive Loss End of explanation siamese.compile(loss=loss(margin=margin), optimizer="RMSprop", metrics=["accuracy"]) siamese.summary() Explanation: Compile the model with the contrastive loss End of explanation history = siamese.fit( [x_train_1, x_train_2], labels_train, validation_data=([x_val_1, x_val_2], labels_val), batch_size=batch_size, epochs=epochs, ) Explanation: Train the model End of explanation def plt_metric(history, metric, title, has_valid=True): Plots the given 'metric' from 'history'. Arguments: history: history attribute of History object returned from Model.fit. metric: Metric to plot, a string value present as key in 'history'. title: A string to be used as title of plot. has_valid: Boolean, true if valid data was passed to Model.fit else false. Returns: None. 
plt.plot(history[metric]) if has_valid: plt.plot(history["val_" + metric]) plt.legend(["train", "validation"], loc="upper left") plt.title(title) plt.ylabel(metric) plt.xlabel("epoch") plt.show() # Plot the accuracy plt_metric(history=history.history, metric="accuracy", title="Model accuracy") # Plot the constrastive loss plt_metric(history=history.history, metric="loss", title="Constrastive Loss") Explanation: Visualize results End of explanation results = siamese.evaluate([x_test_1, x_test_2], labels_test) print("test loss, test acc:", results) Explanation: Evaluate the model End of explanation predictions = siamese.predict([x_test_1, x_test_2]) visualize(pairs_test, labels_test, to_show=3, predictions=predictions, test=True) Explanation: Visualize the predictions End of explanation
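One more check that can be bolted on after the evaluation above, sketched here and not part of the original example: recompute the accuracy by thresholding the sigmoid outputs at 0.5. It relies on the predictions array and labels_test from the cells above and should agree with the accuracy reported by evaluate.
import numpy as np

# Label 1 marks a matching pair in this example.
pred_labels = (predictions.ravel() > 0.5).astype("float32")
manual_accuracy = np.mean(pred_labels == labels_test)
print("Accuracy from thresholded predictions: {:.4f}".format(manual_accuracy))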
6,448
Given the following text problem statement, write Python code to implement the functionality described below in problem statement Problem: Following-up from this question years ago, is there a "shift" function in numpy? Ideally it can be applied to 2-dimensional arrays, and the numbers of shift are different among rows.
Problem:
import numpy as np

a = np.array([[ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9.],
              [ 1., 2., 3., 4., 5., 6., 7., 8., 9., 10.]])
shift = [-2, 3]

def solution(xs, shift):
    # Shift each row of xs by its own offset: positive shifts move values to the
    # right, negative shifts move them to the left; vacated slots are filled with NaN.
    e = np.empty_like(xs)
    for i, n in enumerate(shift):
        if n == 0:
            e[i] = xs[i]
        elif n > 0:
            e[i, :n] = np.nan
            e[i, n:] = xs[i, :-n]
        else:
            e[i, n:] = np.nan
            e[i, :n] = xs[i, -n:]
    return e

result = solution(a, shift)
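For reference, the same per-row shift can be done with SciPy's ndimage.shift applied row by row; order=0 keeps the values exact for integer shifts and cval fills the vacated slots with NaN. This sketch is an alternative, not part of the original answer.
import numpy as np
from scipy import ndimage

a = np.array([[0., 1., 2., 3., 4., 5., 6., 7., 8., 9.],
              [1., 2., 3., 4., 5., 6., 7., 8., 9., 10.]])
shift = [-2, 3]

result = np.vstack([
    ndimage.shift(row, s, order=0, cval=np.nan)   # one 1-D shift per row
    for row, s in zip(a, shift)
])
print(result)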
6,449
Given the following text description, write Python code to implement the functionality described below step by step Description: Summary One of the main applications of OpenPNM is simulating transport phenomena such as Fickian diffusion, advection diffusion, reactive transport, etc. In this example, we will learn how to perform Fickian diffusion on a Cubic network. The algorithm works fine with every other network type, but for now we want to keep it simple. Problem setup Generating network First, we need to generate a Cubic network. For now, we stick to a 2d network, but you might as well try it in 3d! Step1: Adding geometry Next, we need to add a geometry to the generated network. A geometry contains information about size of the pores/throats in a network. OpenPNM has tons of prebuilt geometries that represent the microstructure of different materials such as Toray090 carbon papers, sand stone, electrospun fibers, etc. For now, we stick to a sample geometry called StickAndBall that assigns random values to pore/throat diameters. Step2: Adding phase Next, we need to add a phase to our simulation. A phase object(s) contain(s) thermophysical information about the working fluid(s) in the simulation. OpenPNM has tons of prebuilt phases as well! For this simulation, we use air as our working fluid. Step3: Adding physics Finally, we need to add a physics. A physics object contains information about the working fluid in the simulation that depend on the geometry of the network. A good example is diffusive conductance, which not only depends on the thermophysical properties of the working fluid, but also depends on the geometry of pores/throats. Step4: Performing Fickian diffusion Now that everything's set up, it's time to perform our Fickian diffusion simulation. For this purpose, we need to add the FickianDiffusion algorithm to our simulation. Here's how we do it Step5: Note that network and phase are required parameters for pretty much every algorithm we add, since we need to specify on which network and for which phase do we want to run the algorithm. Adding boundary conditions Next, we need to add some boundary conditions to the simulation. By default, OpenPNM assumes zero flux for the boundary pores. Step6: set_value_BC applies the so-called "Dirichlet" boundary condition to the specified pores. Note that unless you want to apply a single value to all of the specified pores (like we just did), you must pass a list (or ndarray) as the values parameter. Running the algorithm Now, it's time to run the algorithm. This is done by calling the run method attached to the algorithm object. Step7: Post processing When an algorithm is successfully run, the results are attached to the same object. To access the results, you need to know the quantity for which the algorithm was solving. For instance, FickianDiffusion solves for the quantity pore.concentration, which is somewhat intuitive. However, if you ever forget it, or wanted to manually check the quantity, you can take a look at the algorithm settings Step8: Now that we know the quantity for which FickianDiffusion was solved, let's take a look at the results Step9: Heatmap Well, it's hard to make sense out of a bunch of numbers! Let's visualize the results. Since the network is 2d, we can simply reshape the results in form of a 2d array similar to the shape of the network and plot the heatmap of it using matplotlib. Step10: Calculating heat flux You might as well be interested in calculating the mass flux from a boundary! 
This is easily done in OpenPNM via calling the rate method attached to the algorithm. Let's see how it works:
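As a quick sanity check on that result, the same rate call can be applied to both boundary pore sets defined in the code below. The snippet is my addition (a hedged sketch, not part of the original notebook) and assumes fd, inlet and outlet exist as in the accompanying code; at steady state the outlet rate should come out approximately equal and opposite to the inlet rate:

rate_in = fd.rate(pores=inlet)[0]
rate_out = fd.rate(pores=outlet)[0]
print('Inlet rate: ', rate_in, 'mol/s')
print('Outlet rate:', rate_out, 'mol/s')   # expected ~ -rate_in at steady state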
Python Code: import openpnm as op net = op.network.Cubic(shape=[1, 10, 10], spacing=1e-5) Explanation: Summary One of the main applications of OpenPNM is simulating transport phenomena such as Fickian diffusion, advection diffusion, reactive transport, etc. In this example, we will learn how to perform Fickian diffusion on a Cubic network. The algorithm works fine with every other network type, but for now we want to keep it simple. Problem setup Generating network First, we need to generate a Cubic network. For now, we stick to a 2d network, but you might as well try it in 3d! End of explanation geom = op.geometry.StickAndBall(network=net, pores=net.Ps, throats=net.Ts) Explanation: Adding geometry Next, we need to add a geometry to the generated network. A geometry contains information about size of the pores/throats in a network. OpenPNM has tons of prebuilt geometries that represent the microstructure of different materials such as Toray090 carbon papers, sand stone, electrospun fibers, etc. For now, we stick to a sample geometry called StickAndBall that assigns random values to pore/throat diameters. End of explanation air = op.phases.Air(network=net) Explanation: Adding phase Next, we need to add a phase to our simulation. A phase object(s) contain(s) thermophysical information about the working fluid(s) in the simulation. OpenPNM has tons of prebuilt phases as well! For this simulation, we use air as our working fluid. End of explanation phys_air = op.physics.Standard(network=net, phase=air, geometry=geom) Explanation: Adding physics Finally, we need to add a physics. A physics object contains information about the working fluid in the simulation that depend on the geometry of the network. A good example is diffusive conductance, which not only depends on the thermophysical properties of the working fluid, but also depends on the geometry of pores/throats. End of explanation fd = op.algorithms.FickianDiffusion(network=net, phase=air) Explanation: Performing Fickian diffusion Now that everything's set up, it's time to perform our Fickian diffusion simulation. For this purpose, we need to add the FickianDiffusion algorithm to our simulation. Here's how we do it: End of explanation inlet = net.pores('left') outlet = net.pores('right') fd.set_value_BC(pores=inlet, values=1.0) fd.set_value_BC(pores=outlet, values=0.0) Explanation: Note that network and phase are required parameters for pretty much every algorithm we add, since we need to specify on which network and for which phase do we want to run the algorithm. Adding boundary conditions Next, we need to add some boundary conditions to the simulation. By default, OpenPNM assumes zero flux for the boundary pores. End of explanation fd.run(); Explanation: set_value_BC applies the so-called "Dirichlet" boundary condition to the specified pores. Note that unless you want to apply a single value to all of the specified pores (like we just did), you must pass a list (or ndarray) as the values parameter. Running the algorithm Now, it's time to run the algorithm. This is done by calling the run method attached to the algorithm object. End of explanation print(fd.settings) Explanation: Post processing When an algorithm is successfully run, the results are attached to the same object. To access the results, you need to know the quantity for which the algorithm was solving. For instance, FickianDiffusion solves for the quantity pore.concentration, which is somewhat intuitive. 
However, if you ever forget it, or wanted to manually check the quantity, you can take a look at the algorithm settings: End of explanation c = fd['pore.concentration'] print(c) Explanation: Now that we know the quantity for which FickianDiffusion was solved, let's take a look at the results: End of explanation print('Network shape:', net._shape) c2d = c.reshape((net._shape)) import matplotlib.pyplot as plt plt.imshow(c2d[0,:,:]) plt.title('Concentration (mol/m$^3$)') plt.colorbar() Explanation: Heatmap Well, it's hard to make sense out of a bunch of numbers! Let's visualize the results. Since the network is 2d, we can simply reshape the results in form of a 2d array similar to the shape of the network and plot the heatmap of it using matplotlib. End of explanation rate_inlet = fd.rate(pores=inlet)[0] print('Mass flow rate from inlet:', rate_inlet, 'mol/s') Explanation: Calculating heat flux You might as well be interested in calculating the mass flux from a boundary! This is easily done in OpenPNM via calling the rate method attached to the algorithm. Let's see how it works: End of explanation
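A common follow-up (my addition, not in the original notebook) is to turn the boundary rate into an effective diffusivity estimate via D_eff = rate * L / (A * dC). The length and area below are rough geometric estimates built from the assumed 10x10 grid of pores with 1e-5 m spacing, not values computed by OpenPNM itself:

# Hedged sketch: effective diffusivity from the inlet rate computed above.
# L and A are rough geometric estimates (10 pores across, 1 pore thick,
# spacing 1e-5 m); adjust them if the network geometry changes.
L = 10 * 1e-5            # domain length in the transport direction (m)
A = (10 * 1e-5) * 1e-5   # cross-sectional area (m^2)
dC = 1.0 - 0.0           # imposed concentration difference (mol/m^3)
D_eff = rate_inlet * L / (A * dC)
print('Estimated effective diffusivity:', D_eff, 'm^2/s')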
6,450
Given the following text description, write Python code to implement the functionality described below step by step Description: 06 - Model Deployment The purpose of this notebook is to execute a CI/CD routine to test and deploy the trained model to Vertex AI as an Endpoint for online prediction serving. The notebook covers the following steps Step1: Setup Google Cloud project Step2: Set configurations Step3: 1. Run CI/CD steps locally Step4: Run the model artifact testing Step5: Run create endpoint Step6: Run deploy model Step7: Test deployed model endpoint Step8: 2. Execute the Model Deployment CI/CD routine in Cloud Build The CI/CD routine is defined in the model-deployment.yaml file, and consists of the following steps Step10: Run CI/CD from model deployment using Cloud Build
Python Code: import os import logging logging.getLogger().setLevel(logging.INFO) Explanation: 06 - Model Deployment The purpose of this notebook is to execute a CI/CD routine to test and deploy the trained model to Vertex AI as an Endpoint for online prediction serving. The notebook covers the following steps: 1. Run the test steps locally. 2. Execute the model deployment CI/CD steps using Cloud Build. Setup Import libraries End of explanation PROJECT = '[your-project-id]' # Change to your project id. REGION = 'us-central1' # Change to your region. if PROJECT == "" or PROJECT is None or PROJECT == "[your-project-id]": # Get your GCP project id from gcloud shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null PROJECT = shell_output[0] print("Project ID:", PROJECT) print("Region:", REGION) Explanation: Setup Google Cloud project End of explanation VERSION = 'v01' DATASET_DISPLAY_NAME = 'chicago-taxi-tips' MODEL_DISPLAY_NAME = f'{DATASET_DISPLAY_NAME}-classifier-{VERSION}' ENDPOINT_DISPLAY_NAME = f'{DATASET_DISPLAY_NAME}-classifier' CICD_IMAGE_NAME = 'cicd:latest' CICD_IMAGE_URI = f"gcr.io/{PROJECT}/{CICD_IMAGE_NAME}" Explanation: Set configurations End of explanation os.environ['PROJECT'] = PROJECT os.environ['REGION'] = REGION os.environ['MODEL_DISPLAY_NAME'] = MODEL_DISPLAY_NAME os.environ['ENDPOINT_DISPLAY_NAME'] = ENDPOINT_DISPLAY_NAME Explanation: 1. Run CI/CD steps locally End of explanation !py.test src/tests/model_deployment_tests.py::test_model_artifact -s Explanation: Run the model artifact testing End of explanation !python build/utils.py \ --mode=create-endpoint\ --project={PROJECT}\ --region={REGION}\ --endpoint-display-name={ENDPOINT_DISPLAY_NAME} Explanation: Run create endpoint End of explanation !python build/utils.py \ --mode=deploy-model\ --project={PROJECT}\ --region={REGION}\ --endpoint-display-name={ENDPOINT_DISPLAY_NAME}\ --model-display-name={MODEL_DISPLAY_NAME} Explanation: Run deploy model End of explanation !py.test src/tests/model_deployment_tests.py::test_model_endpoint Explanation: Test deployed model endpoint End of explanation !echo $CICD_IMAGE_URI !gcloud builds submit --tag $CICD_IMAGE_URI build/. --timeout=15m Explanation: 2. Execute the Model Deployment CI/CD routine in Cloud Build The CI/CD routine is defined in the model-deployment.yaml file, and consists of the following steps: 1. Load and test the the trained model interface. 2. Create and endpoint in Vertex AI if it doesn't exists. 3. Deploy the model to the endpoint. 4. Test the endpoint. Build CI/CD container Image for Cloud Build This is the runtime environment where the steps of testing and deploying model will be executed. End of explanation REPO_URL = "https://github.com/GoogleCloudPlatform/mlops-with-vertex-ai.git" # Change to your github repo. BRANCH = "main" SUBSTITUTIONS=f\ _REPO_URL='{REPO_URL}',\ _BRANCH={BRANCH},\ _CICD_IMAGE_URI={CICD_IMAGE_URI},\ _PROJECT={PROJECT},\ _REGION={REGION},\ _MODEL_DISPLAY_NAME={MODEL_DISPLAY_NAME},\ _ENDPOINT_DISPLAY_NAME={ENDPOINT_DISPLAY_NAME},\ !echo $SUBSTITUTIONS !gcloud builds submit --no-source --config build/model-deployment.yaml --substitutions {SUBSTITUTIONS} --timeout=30m Explanation: Run CI/CD from model deployment using Cloud Build End of explanation
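The deployment tests themselves (src/tests/model_deployment_tests.py) are only referenced, not shown, in this notebook. Below is a minimal sketch of what such an endpoint smoke test could look like; it is illustrative only, and the function name, arguments, and checks are my assumptions rather than the repository's actual test code. It uses only documented google-cloud-aiplatform calls (init, Endpoint.list, Endpoint.list_models):

from google.cloud import aiplatform

def test_endpoint_exists(project, region, endpoint_display_name):
    """Smoke test: the endpoint exists and has at least one deployed model."""
    aiplatform.init(project=project, location=region)
    endpoints = aiplatform.Endpoint.list(
        filter=f'display_name="{endpoint_display_name}"')
    assert len(endpoints) > 0, f"Endpoint {endpoint_display_name} not found."
    assert len(endpoints[0].list_models()) > 0, "No model deployed to the endpoint."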
6,451
Given the following text description, write Python code to implement the functionality described below step by step Description: Integer and floats You can make the following operations with integer and floats (i.e. real numbers) | Operation | Result | | --------- | --------------- | | + | Sum | | - | Subtraction | | * | Multiplication | | / | Division | | // | Integer division | | ** | Exponentiation | Some simple examples are Step1: Exercise 1.1 Compute the number of seconds since the day you were born Complex numbers It is also possible to use complex numbers. Instead of using $i$ for $\sqrt{-1}$ Python uses j. Step2: Some typical math operations Very common mathematical operations, such as taking the logarithm, can only be used after importing the module math. For instance
Python Code: 4+2 10-42 4 * 4 10/3 10//3 10**3 Explanation: Integer and floats You can make the following operations with integer and floats (i.e. real numbers) | Operation | Result | | --------- | --------------- | | + | Sum | | - | Subtraction | | * | Multiplication | | / | Division | | // | Integer division | | ** | Exponentiation | Some simple examples are End of explanation 1j**2 Explanation: Exercise 1.1 Compute the number of seconds since the day you were born Complex numbers It is also possible to use complex numbers. Instead of using $i$ for $\sqrt{-1}$ Python uses j. End of explanation import math math.log(10) # natural logarithm math.log10(10) # base-10 logarithm math.sqrt(10) # square root math.exp(2) # exponential Explanation: Some typical math operations Very common mathematical operations, such as taking the logarithm, can only be used after importing the module math. For instance: End of explanation
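Exercise 1.1 is left open in the notebook; one possible solution sketch (my addition, using an arbitrary example birth date) relies on the standard datetime module:

from datetime import datetime

birthday = datetime(1990, 5, 17)   # assumed example date, replace with your own
seconds_alive = (datetime.now() - birthday).total_seconds()
print(seconds_alive)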
6,452
Given the following text description, write Python code to implement the functionality described below step by step Description: Tools for Game Theory in QuantEcon.py Daisuke Oyama Faculty of Economics, University of Tokyo This notebook demonstrates the functionalities of the game_theory module in QuantEcon.py. Step1: Normal Form Games An $N$-player normal form game is a triplet $g = (I, (A_i){i \in I}, (u_i){i \in I})$ where $I = {0, \ldots, N-1}$ is the set of players, $A_i = {0, \ldots, n_i-1}$ is the set of actions of player $i \in I$, and $u_i \colon A_i \times A_{i+1} \times \cdots \times A_{i+N-1} \to \mathbb{R}$ is the payoff function of player $i \in I$, where $i+j$ is understood modulo $N$. Note that we adopt the convention that the $0$-th argument of the payoff function $u_i$ is player $i$'s own action and the $j$-th argument, $j = 1, \ldots, N-1$, is player ($i+j$)'s action (modulo $N$). In our module, a normal form game and a player are represented by the classes NormalFormGame and Player, respectively. A Player carries the player's payoff function and implements in particular a method that returns the best response action(s) given an action of the opponent player, or a profile of actions of the opponents if there are more than one. A NormalFormGame is in effect a container of Player instances. Creating a NormalFormGame There are several ways to create a NormalFormGame instance. The first is to pass an array of payoffs for all the players, i.e., an $(N+1)$-dimenstional array of shape $(n_0, \ldots, n_{N-1}, N)$ whose $(a_0, \ldots, a_{N-1})$-entry contains an array of the $N$ payoff values for the action profile $(a_0, \ldots, a_{N-1})$. As an example, consider the following game ("Matching Pennies") Step2: If a square matrix (2-dimensional array) is given, then it is considered to be a symmetric two-player game. Consider the following game (symmetric $2 \times 2$ "Coordination Game") Step3: Another example ("Rock-Paper-Scissors") Step4: The second is to specify the sizes of the action sets of the players to create a NormalFormGame instance filled with payoff zeros, and then set the payoff values to each entry. Let us construct the following game ("Prisoners' Dilemma") Step5: Finally, a NormalFormGame instance can be constructed by giving an array of Player instances, as explained in the next section. Creating a Player A Player instance is created by passing an array of dimension $N$ that represents the player's payoff function ("payoff array"). Consider the following game (a variant of "Battle of the Sexes") Step6: Beware that in payoff_array[h, k], h refers to the player's own action, while k refers to the opponent player's action. Step7: Passing an array of Player instances is the third way to create a NormalFormGame instance Step9: More than two players The game_theory module also supports games with more than two players. Let us consider the following version of $N$-player Cournot Game. There are $N$ firms (players) which produce a homogeneous good with common constant marginal cost $c \geq 0$. Each firm $i$ simultaneously determines the quantity $q_i \geq 0$ (action) of the good to produce. The inverse demand function is given by the linear function $P(Q) = a - Q$, $a > 0$, where $Q = q_0 + \cdots + q_{N-1}$ is the aggregate supply. Then the profit (payoff) for firm $i$ is given by $$ u_i(q_i, q_{i+1}, \ldots, q_{i+N-1}) = P(Q) q_i - c q_i = \left(a - c - \sum_{j \neq i} q_j - q_i\right) q_i. 
$$ Theoretically, the set of actions, i.e., available quantities, may be the set of all nonnegative real numbers $\mathbb{R}_+$ (or a bounded interval $[0, \bar{q}]$ with some upper bound $\bar{q}$), but for computation on a computer we have to discretize the action space and only allow for finitely many grid points. The following script creates a NormalFormGame instance of the Cournot game as described above, assuming that the (common) grid of possible quantity values is stored in an array q_grid. Step10: Here's a simple example with three firms, marginal cost $20$, and inverse demand function $80 - Q$, where the feasible quantity values are assumed to be $10$ and $15$. Step11: Nash Equilibrium A Nash equilibrium of a normal form game is a profile of actions where the action of each player is a best response to the others'. The Player object has a method best_response. Consider the Matching Pennies game g_MP defined above. For example, player 0's best response to the opponent's action 1 is Step12: Player 0's best responses to the opponent's mixed action [0.5, 0.5] (we know they are 0 and 1) Step13: For this game, we know that ([0.5, 0.5], [0.5, 0.5]) is a (unique) Nash equilibrium. Step15: Finding Nash equilibria There are several algorithms implemented to compute Nash equilibria Step16: Matching Pennies Step17: Coordination game Step18: Rock-Paper-Scissors Step19: Battle of the Sexes Step20: Prisoners' Dillema Step21: Cournot game Step23: Sequential best response In some games, such as "supermodular games" and "potential games", the process of sequential best responses converges to a Nash equilibrium. Here's a script to find one pure Nash equilibrium by sequential best response, if it converges. Step24: A Cournot game with linear demand is known to be a potential game, for which sequential best response converges to a Nash equilibrium. Let us try a bigger instance Step25: The limit action profile is indeed a Nash equilibrium Step26: In fact, the game has other Nash equilibria (because of our choice of grid points and parameter values) Step27: Make it bigger Step28: Sequential best response does not converge in all games Step29: Support enumeration The routine support_enumeration, which is for two-player games, visits all equal-size support pairs and checks whether each pair has a Nash equilibrium (in mixed actions) by the indifference condition. (This should thus be used only for small games.) For nondegenerate games, this routine returns all the Nash equilibria. Matching Pennies Step30: The output list contains a pair of mixed actions as a tuple of two NumPy arrays, which constitues the unique Nash equilibria of this game. Coordination game Step31: The output contains three tuples of mixed actions, where the first two correspond to the two pure action equilibria, while the last to the unique totally mixed action equilibrium. Rock-Paper-Scissors Step32: Consider the $6 \times 6$ game by von Stengel (1997), page 12 Step35: Note that the $n \times n$ game where the payoff matrices are given by the identy matrix has $2^n−1$ equilibria. It had been conjectured that this is the maximum number of equilibria of any nondegenerate $n \times n$ game. The game above, the number of whose equilibria is $75 > 2^6 - 1 = 63$, was presented as a counter-example to this conjecture. Next, let us study the All-Pay Acution, where, unlike standard auctions, bidders pay their bids regardless of whether or not they win. 
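The claim mentioned above, that the n x n game with identity payoff matrices has 2**n - 1 equilibria, can be checked empirically for small n with support_enumeration. The sketch below is my addition and is illustrative only:

import numpy as np
import quantecon.game_theory as gt

for n in range(2, 5):
    player = gt.Player(np.eye(n))
    g_identity = gt.NormalFormGame((player, player))
    num_NEs = len(gt.support_enumeration(g_identity))
    print(f"n = {n}: {num_NEs} equilibria found, 2**n - 1 = {2**n - 1}")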
Situations modeled as all-pay auctions include job promotion, R&D, and rent seeking competitions, among others. Here we consider a version of All-Pay Auction with complete information, symmetric bidders, discrete bids, bid caps, and "full dissipation" (where the prize is materialized if and only if there is only one bidder who makes a highest bid). Specifically, each of $N$ players simultaneously bids an integer from ${0, 1, \ldots, c}$, where $c$ is the common (integer) bid cap. If player $i$'s bid is higher than all the other players', then he receives the prize, whose value is $r$, common to all players, and pays his bid $b_i$. Otherwise, he pays $b_i$ and receives nothing (zero value). In particular, if there are more than one players who make the highest bid, the prize gets fully dissipated and all the players receive nothing. Thus, player $i$'s payoff function is $$ u_i(b_i, b_{i+1}, \ldots, b_{i+N-1}) = \begin{cases} r - b_i & \text{if $b_i > b_j$ for all $j \neq i$}, \ - b_i & \text{otherwise}. \end{cases} $$ The following is a script to construct a NormalFormGame instance for the All-Pay Auction game, where we use Numba to speed up the loops Step36: Consider the two-player case with the following parameter values Step37: Clearly, this game has no pure-action Nash equilibrium. Indeed Step38: As pointed out by Dechenaux et al. (2006), there are three Nash equilibria when the bid cap c is odd (so that there are an even number of actions for each player) Step39: In addition to a symmetric, totally mixed equilibrium (the third), there are two asymmetric, "alternating" equilibria (the first and the second). If c is even, there is a unique equilibrium, which is symmetric and totally mixed. For example Step40: Vertex enumeration The routine vertex_enumeration computes mixed-action Nash equilibria of a 2-player normal form game by enumeration and matching of vertices of the best response polytopes. For a non-degenerate game input, these are all the Nash equilibria. Internally, scipy.spatial.ConvexHull is used to compute vertex enumeration of the best response polytopes, or equivalently, facet enumeration of their polar polytopes. Then, for each vertex of the polytope for player 0, vertices of the polytope for player 1 are searched to find a completely labeled pair. Step41: support_enumeration and vertex_enumeration the same functionality (i.e., enumeration of Nash equilibria of a two-player game), but the latter seems to run faster than the former. Lemke-Howson The routine lemke_howson implements the Lemke-Howson algorithm (Lemke and Howson 1964), which returns one Nash equilibrium of a two-player normal form game. For the details of the algorithm, see, e.g., von Stengel (2007). Matching Pennies Step42: Coordination game Step43: The initial pivot can be specified by init_pivot, which should be an integer $k$ such that $0 \leq k \leq n_1 + n_2 - 1$ (default to 0), where $0, \ldots, n_1-1$ correspond to player 0's actions, while $n_1 \ldots, n_1+n_2-1$ to player 1's actions. Step44: All-Pay Auction Step45: Additional information is returned if the option full_output is set True Step46: lemke_howson runs fast, with a reasonable time amount for games with up to several hundreds actions. (In fact, this is the only routine among the Nash equilibrium computation routines in the game_theory submodule that scales to large-size games.) 
For example Step47: McLennan-Tourky The routine mclennan_tourky computes one approximate Nash equilibrium of an $N$-player normal form game by the fixed-point algorithm of McLennan and Tourky (2006) applied to the best response correspondence Consider the symmetric All-Pay Auction with full dissipation as above, but this time with three players Step48: Run mclennan_tourky Step49: This output is an $\varepsilon$-Nash equilibrium of the game, which is a profile of mixed actions $(x^_0, \ldots, x^{N-1})$ such that for all $i$, $u_i(x^_i, x^{-i}) \geq u_i(x_i, x^*_{-i}) - \varepsilon$ for all $x_i$, where the value of $\varepsilon$ is specified by the option epsilon (default to 1e-3). Step50: Additional information is returned by setting the full_output option to True Step51: For this game, mclennan_tourky returned a symmetric, totally mixed action profile (cf. Rapoport and Amaldoss 2004) with the default initial condition (0, 0, 0) (profile of pure actions). Let's try a different initial condition Step52: We obtained an asymetric "alternating" mixed action profile. While this is just an approximate Nash equilibrium, it suggests that there is an (exact) Nash equilibrium of the form $((p_0, 0, p_2, 0, p_4, 0), (p_0, 0, p_2, 0, p_4, 0), (0, q_1, 0, q_3, 0, q_5))$. In fact, a simple calculation shows that there is one such that $$ p_0 = \left(\frac{r-4}{r}\right)^{\frac{1}{2}}, p_0 + p_2 = \left(\frac{r-2}{r}\right)^{\frac{1}{2}}, p_4 = 1 - (p_0 + p_2), $$ and $$ q_1 = \frac{2}{r p_0}, q_1 + q_3 = \frac{4}{r (p_0+p_2)}, q_5 = 1 - (q_1 + q_3). $$ To verify
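The remark earlier in this description that vertex_enumeration tends to run faster than support_enumeration can be probed with a rough timing sketch. This is my addition, timings are machine-dependent, and it assumes the 6 x 6 von Stengel game g_vonStengel has been constructed as in the accompanying code:

import time
import quantecon.game_theory as gt

def timed(f, *args):
    t0 = time.perf_counter()
    out = f(*args)
    return out, time.perf_counter() - t0

_, t_support = timed(gt.support_enumeration, g_vonStengel)
_, t_vertex = timed(gt.vertex_enumeration, g_vonStengel)
print(f"support_enumeration: {t_support:.4f}s, vertex_enumeration: {t_vertex:.4f}s")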
Python Code: import numpy as np import quantecon.game_theory as gt Explanation: Tools for Game Theory in QuantEcon.py Daisuke Oyama Faculty of Economics, University of Tokyo This notebook demonstrates the functionalities of the game_theory module in QuantEcon.py. End of explanation matching_pennies_bimatrix = [[(1, -1), (-1, 1)], [(-1, 1), (1, -1)]] g_MP = gt.NormalFormGame(matching_pennies_bimatrix) print(g_MP) print(g_MP.players[0]) # Player instance for player 0 print(g_MP.players[1]) # Player instance for player 1 g_MP.players[0].payoff_array # Player 0's payoff array g_MP.players[1].payoff_array # Player 1's payoff array g_MP[0, 0] # payoff profile for action profile (0, 0) Explanation: Normal Form Games An $N$-player normal form game is a triplet $g = (I, (A_i){i \in I}, (u_i){i \in I})$ where $I = {0, \ldots, N-1}$ is the set of players, $A_i = {0, \ldots, n_i-1}$ is the set of actions of player $i \in I$, and $u_i \colon A_i \times A_{i+1} \times \cdots \times A_{i+N-1} \to \mathbb{R}$ is the payoff function of player $i \in I$, where $i+j$ is understood modulo $N$. Note that we adopt the convention that the $0$-th argument of the payoff function $u_i$ is player $i$'s own action and the $j$-th argument, $j = 1, \ldots, N-1$, is player ($i+j$)'s action (modulo $N$). In our module, a normal form game and a player are represented by the classes NormalFormGame and Player, respectively. A Player carries the player's payoff function and implements in particular a method that returns the best response action(s) given an action of the opponent player, or a profile of actions of the opponents if there are more than one. A NormalFormGame is in effect a container of Player instances. Creating a NormalFormGame There are several ways to create a NormalFormGame instance. The first is to pass an array of payoffs for all the players, i.e., an $(N+1)$-dimenstional array of shape $(n_0, \ldots, n_{N-1}, N)$ whose $(a_0, \ldots, a_{N-1})$-entry contains an array of the $N$ payoff values for the action profile $(a_0, \ldots, a_{N-1})$. As an example, consider the following game ("Matching Pennies"): $ \begin{bmatrix} 1, -1 & -1, 1 \ -1, 1 & 1, -1 \end{bmatrix} $ End of explanation coordination_game_matrix = [[4, 0], [3, 2]] # square matrix g_Coo = gt.NormalFormGame(coordination_game_matrix) print(g_Coo) g_Coo.players[0].payoff_array # Player 0's payoff array g_Coo.players[1].payoff_array # Player 1's payoff array Explanation: If a square matrix (2-dimensional array) is given, then it is considered to be a symmetric two-player game. Consider the following game (symmetric $2 \times 2$ "Coordination Game"): $ \begin{bmatrix} 4, 4 & 0, 3 \ 3, 0 & 2, 2 \end{bmatrix} $ End of explanation RPS_matrix = [[ 0, -1, 1], [ 1, 0, -1], [-1, 1, 0]] g_RPS = gt.NormalFormGame(RPS_matrix) print(g_RPS) Explanation: Another example ("Rock-Paper-Scissors"): $ \begin{bmatrix} 0, 0 & -1, 1 & 1, -1 \ 1, -1 & 0, 0 & -1, 1 \ -1, 1 & 1, -1 & 0, 0 \end{bmatrix} $ End of explanation g_PD = gt.NormalFormGame((2, 2)) # There are 2 players, each of whom has 2 actions g_PD[0, 0] = 1, 1 g_PD[0, 1] = -2, 3 g_PD[1, 0] = 3, -2 g_PD[1, 1] = 0, 0 print(g_PD) Explanation: The second is to specify the sizes of the action sets of the players to create a NormalFormGame instance filled with payoff zeros, and then set the payoff values to each entry. 
Let us construct the following game ("Prisoners' Dilemma"): $ \begin{bmatrix} 1, 1 & -2, 3 \ 3, -2 & 0, 0 \end{bmatrix} $ End of explanation player0 = gt.Player([[3, 1], [0, 2]]) player1 = gt.Player([[2, 0], [1, 3]]) Explanation: Finally, a NormalFormGame instance can be constructed by giving an array of Player instances, as explained in the next section. Creating a Player A Player instance is created by passing an array of dimension $N$ that represents the player's payoff function ("payoff array"). Consider the following game (a variant of "Battle of the Sexes"): $ \begin{bmatrix} 3, 2 & 1, 1 \ 0, 0 & 2, 3 \end{bmatrix} $ End of explanation player0.payoff_array player1.payoff_array Explanation: Beware that in payoff_array[h, k], h refers to the player's own action, while k refers to the opponent player's action. End of explanation g_BoS = gt.NormalFormGame((player0, player1)) print(g_BoS) Explanation: Passing an array of Player instances is the third way to create a NormalFormGame instance: End of explanation from quantecon import cartesian def cournot(a, c, N, q_grid): Create a `NormalFormGame` instance for the symmetric N-player Cournot game with linear inverse demand a - Q and constant marginal cost c. Parameters ---------- a : scalar Intercept of the demand curve c : scalar Common constant marginal cost N : scalar(int) Number of firms q_grid : array_like(scalar) Array containing the set of possible quantities Returns ------- NormalFormGame NormalFormGame instance representing the Cournot game q_grid = np.asarray(q_grid) payoff_array = \ cartesian([q_grid]*N).sum(axis=-1).reshape([len(q_grid)]*N) * (-1) + \ (a - c) payoff_array *= q_grid.reshape([len(q_grid)] + [1]*(N-1)) payoff_array += 0 # To get rid of the minus sign of -0 player = gt.Player(payoff_array) return gt.NormalFormGame([player for i in range(N)]) Explanation: More than two players The game_theory module also supports games with more than two players. Let us consider the following version of $N$-player Cournot Game. There are $N$ firms (players) which produce a homogeneous good with common constant marginal cost $c \geq 0$. Each firm $i$ simultaneously determines the quantity $q_i \geq 0$ (action) of the good to produce. The inverse demand function is given by the linear function $P(Q) = a - Q$, $a > 0$, where $Q = q_0 + \cdots + q_{N-1}$ is the aggregate supply. Then the profit (payoff) for firm $i$ is given by $$ u_i(q_i, q_{i+1}, \ldots, q_{i+N-1}) = P(Q) q_i - c q_i = \left(a - c - \sum_{j \neq i} q_j - q_i\right) q_i. $$ Theoretically, the set of actions, i.e., available quantities, may be the set of all nonnegative real numbers $\mathbb{R}_+$ (or a bounded interval $[0, \bar{q}]$ with some upper bound $\bar{q}$), but for computation on a computer we have to discretize the action space and only allow for finitely many grid points. The following script creates a NormalFormGame instance of the Cournot game as described above, assuming that the (common) grid of possible quantity values is stored in an array q_grid. End of explanation a, c = 80, 20 N = 3 q_grid = [10, 15] # [1/3 of Monopoly quantity, Nash equilibrium quantity] g_Cou = cournot(a, c, N, q_grid) print(g_Cou) print(g_Cou.players[0]) g_Cou.nums_actions Explanation: Here's a simple example with three firms, marginal cost $20$, and inverse demand function $80 - Q$, where the feasible quantity values are assumed to be $10$ and $15$. 
End of explanation g_MP.players[0].best_response(1) Explanation: Nash Equilibrium A Nash equilibrium of a normal form game is a profile of actions where the action of each player is a best response to the others'. The Player object has a method best_response. Consider the Matching Pennies game g_MP defined above. For example, player 0's best response to the opponent's action 1 is: End of explanation # By default, returns the best response action with the smallest index g_MP.players[0].best_response([0.5, 0.5]) # With tie_breaking='random', returns randomly one of the best responses g_MP.players[0].best_response([0.5, 0.5], tie_breaking='random') # Try several times # With tie_breaking=False, returns an array of all the best responses g_MP.players[0].best_response([0.5, 0.5], tie_breaking=False) Explanation: Player 0's best responses to the opponent's mixed action [0.5, 0.5] (we know they are 0 and 1): End of explanation g_MP.is_nash(([0.5, 0.5], [0.5, 0.5])) g_MP.is_nash((0, 0)) g_MP.is_nash((0, [0.5, 0.5])) Explanation: For this game, we know that ([0.5, 0.5], [0.5, 0.5]) is a (unique) Nash equilibrium. End of explanation def print_pure_nash_brute(g): Print all pure Nash equilibria of a normal form game found by brute force. Parameters ---------- g : NormalFormGame NEs = gt.pure_nash_brute(g) num_NEs = len(NEs) if num_NEs == 0: msg = 'no pure Nash equilibrium' elif num_NEs == 1: msg = '1 pure Nash equilibrium:\n{0}'.format(NEs) else: msg = '{0} pure Nash equilibria:\n{1}'.format(num_NEs, NEs) print('The game has ' + msg) Explanation: Finding Nash equilibria There are several algorithms implemented to compute Nash equilibria: Brute force Find all pure-action Nash equilibria of an $N$-player game (if any). Sequential best response Find one pure-action Nash equilibrium of an $N$-player game (if any). Support enumeration Find all mixed-action Nash equilibria of a two-player nondegenerate game. Vertex enumeration Find all mixed-action Nash equilibria of a two-player nondegenerate game. Lemke-Howson Find one mixed-action Nash equilibrium of a two-player game. McLennan-Tourky Find one mixed-action Nash equilibrium of an $N$-player game. For more variety of algorithms, one should look at Gambit. Brute force For small games, we can find pure action Nash equilibria by brute force, by calling the routine pure_nash_brute. It visits all the action profiles and check whether each is a Nash equilibrium by the is_nash method. End of explanation print_pure_nash_brute(g_MP) Explanation: Matching Pennies: End of explanation print_pure_nash_brute(g_Coo) Explanation: Coordination game: End of explanation print_pure_nash_brute(g_RPS) Explanation: Rock-Paper-Scissors: End of explanation print_pure_nash_brute(g_BoS) Explanation: Battle of the Sexes: End of explanation print_pure_nash_brute(g_PD) Explanation: Prisoners' Dillema: End of explanation print_pure_nash_brute(g_Cou) Explanation: Cournot game: End of explanation def sequential_best_response(g, init_actions=None, tie_breaking='smallest', verbose=True): Find a pure Nash equilibrium of a normal form game by sequential best response. Parameters ---------- g : NormalFormGame init_actions : array_like(int), optional(default=[0, ..., 0]) The initial action profile. tie_breaking : {'smallest', 'random'}, optional(default='smallest') verbose: bool, optional(default=True) If True, print the intermediate process. 
N = g.N # Number of players a = np.empty(N, dtype=int) # Action profile if init_actions is None: init_actions = [0] * N a[:] = init_actions if verbose: print('init_actions: {0}'.format(a)) new_a = np.empty(N, dtype=int) max_iter = np.prod(g.nums_actions) for t in range(max_iter): new_a[:] = a for i, player in enumerate(g.players): if N == 2: a_except_i = new_a[1-i] else: a_except_i = new_a[np.arange(i+1, i+N) % N] new_a[i] = player.best_response(a_except_i, tie_breaking=tie_breaking) if verbose: print('player {0}: {1}'.format(i, new_a)) if np.array_equal(new_a, a): return a else: a[:] = new_a print('No pure Nash equilibrium found') return None Explanation: Sequential best response In some games, such as "supermodular games" and "potential games", the process of sequential best responses converges to a Nash equilibrium. Here's a script to find one pure Nash equilibrium by sequential best response, if it converges. End of explanation a, c = 80, 20 N = 3 q_grid = np.linspace(0, a-c, 13) # [0, 5, 10, ..., 60] g_Cou = cournot(a, c, N, q_grid) a_star = sequential_best_response(g_Cou) # By default, start with (0, 0, 0) print('Nash equilibrium indices: {0}'.format(a_star)) print('Nash equilibrium quantities: {0}'.format(q_grid[a_star])) # Start with the largest actions (12, 12, 12) sequential_best_response(g_Cou, init_actions=(12, 12, 12)) Explanation: A Cournot game with linear demand is known to be a potential game, for which sequential best response converges to a Nash equilibrium. Let us try a bigger instance: End of explanation g_Cou.is_nash(a_star) Explanation: The limit action profile is indeed a Nash equilibrium: End of explanation print_pure_nash_brute(g_Cou) Explanation: In fact, the game has other Nash equilibria (because of our choice of grid points and parameter values): End of explanation N = 4 q_grid = np.linspace(0, a-c, 61) # [0, 1, 2, ..., 60] g_Cou = cournot(a, c, N, q_grid) sequential_best_response(g_Cou) sequential_best_response(g_Cou, init_actions=(0, 0, 0, 30)) Explanation: Make it bigger: End of explanation print(g_MP) # Matching Pennies sequential_best_response(g_MP) Explanation: Sequential best response does not converge in all games: End of explanation gt.support_enumeration(g_MP) Explanation: Support enumeration The routine support_enumeration, which is for two-player games, visits all equal-size support pairs and checks whether each pair has a Nash equilibrium (in mixed actions) by the indifference condition. (This should thus be used only for small games.) For nondegenerate games, this routine returns all the Nash equilibria. Matching Pennies: End of explanation print(g_Coo) gt.support_enumeration(g_Coo) Explanation: The output list contains a pair of mixed actions as a tuple of two NumPy arrays, which constitues the unique Nash equilibria of this game. Coordination game: End of explanation print(g_RPS) gt.support_enumeration(g_RPS) Explanation: The output contains three tuples of mixed actions, where the first two correspond to the two pure action equilibria, while the last to the unique totally mixed action equilibrium. 
Rock-Paper-Scissors: End of explanation player0 = gt.Player( [[ 9504, -660, 19976, -20526, 1776, -8976], [ -111771, 31680, -130944, 168124, -8514, 52764], [ 397584, -113850, 451176, -586476, 29216, -178761], [ 171204, -45936, 208626, -263076, 14124, -84436], [ 1303104, -453420, 1227336, -1718376, 72336, -461736], [ 737154, -227040, 774576, -1039236, 48081, -300036]] ) player1 = gt.Player( [[ 72336, -461736, 1227336, -1718376, 1303104, -453420], [ 48081, -300036, 774576, -1039236, 737154, -227040], [ 29216, -178761, 451176, -586476, 397584, -113850], [ 14124, -84436, 208626, -263076, 171204, -45936], [ 1776, -8976, 19976, -20526, 9504, -660], [ -8514, 52764, -130944, 168124, -111771, 31680]] ) g_vonStengel = gt.NormalFormGame((player0, player1)) len(gt.support_enumeration(g_vonStengel)) Explanation: Consider the $6 \times 6$ game by von Stengel (1997), page 12: End of explanation from numba import jit def all_pay_auction(r, c, N, dissipation=True): Create a `NormalFormGame` instance for the symmetric N-player All-Pay Auction game with common reward `r` and common bid cap `e`. Parameters ---------- r : scalar(float) Common reward value. c : scalar(int) Common bid cap. N : scalar(int) Number of players. dissipation : bool, optional(default=True) If True, the prize fully dissipates in case of a tie. If False, the prize is equally split among the highest bidders (or given to one of the highest bidders with equal probabilities). Returns ------- NormalFormGame NormalFormGame instance representing the All-Pay Auction game. player = gt.Player(np.empty((c+1,)*N)) populate_APA_payoff_array(r, dissipation, player.payoff_array) return gt.NormalFormGame((player,)*N) @jit(nopython=True) def populate_APA_payoff_array(r, dissipation, out): Populate the payoff array for a player in an N-player All-Pay Auction game. Parameters ---------- r : scalar(float) Reward value. dissipation : bool, optional(default=True) If True, the prize fully dissipates in case of a tie. If False, the prize is equally split among the highest bidders (or given to one of the highest bidders with equal probabilities). out : ndarray(float, ndim=N) NumPy array to store the payoffs. Modified in place. Returns ------- out : ndarray(float, ndim=N) View of `out`. nums_actions = out.shape N = out.ndim for bids in np.ndindex(nums_actions): out[bids] = -bids[0] num_ties = 1 for j in range(1, N): if bids[j] > bids[0]: num_ties = 0 break elif bids[j] == bids[0]: if dissipation: num_ties = 0 break else: num_ties += 1 if num_ties > 0: out[bids] += r / num_ties return out Explanation: Note that the $n \times n$ game where the payoff matrices are given by the identy matrix has $2^n−1$ equilibria. It had been conjectured that this is the maximum number of equilibria of any nondegenerate $n \times n$ game. The game above, the number of whose equilibria is $75 > 2^6 - 1 = 63$, was presented as a counter-example to this conjecture. Next, let us study the All-Pay Acution, where, unlike standard auctions, bidders pay their bids regardless of whether or not they win. Situations modeled as all-pay auctions include job promotion, R&D, and rent seeking competitions, among others. Here we consider a version of All-Pay Auction with complete information, symmetric bidders, discrete bids, bid caps, and "full dissipation" (where the prize is materialized if and only if there is only one bidder who makes a highest bid). Specifically, each of $N$ players simultaneously bids an integer from ${0, 1, \ldots, c}$, where $c$ is the common (integer) bid cap. 
If player $i$'s bid is higher than all the other players', then he receives the prize, whose value is $r$, common to all players, and pays his bid $b_i$. Otherwise, he pays $b_i$ and receives nothing (zero value). In particular, if there are more than one players who make the highest bid, the prize gets fully dissipated and all the players receive nothing. Thus, player $i$'s payoff function is $$ u_i(b_i, b_{i+1}, \ldots, b_{i+N-1}) = \begin{cases} r - b_i & \text{if $b_i > b_j$ for all $j \neq i$}, \ - b_i & \text{otherwise}. \end{cases} $$ The following is a script to construct a NormalFormGame instance for the All-Pay Auction game, where we use Numba to speed up the loops: End of explanation N = 2 c = 5 # odd r = 8 g_APA_odd = all_pay_auction(r, c, N) print(g_APA_odd) Explanation: Consider the two-player case with the following parameter values: End of explanation gt.pure_nash_brute(g_APA_odd) Explanation: Clearly, this game has no pure-action Nash equilibrium. Indeed: End of explanation gt.support_enumeration(g_APA_odd) Explanation: As pointed out by Dechenaux et al. (2006), there are three Nash equilibria when the bid cap c is odd (so that there are an even number of actions for each player): End of explanation c = 6 # even g_APA_even = all_pay_auction(r, c, N) gt.support_enumeration(g_APA_even) Explanation: In addition to a symmetric, totally mixed equilibrium (the third), there are two asymmetric, "alternating" equilibria (the first and the second). If c is even, there is a unique equilibrium, which is symmetric and totally mixed. For example: End of explanation gt.vertex_enumeration(g_MP) len(gt.support_enumeration(g_vonStengel)) gt.vertex_enumeration(g_APA_odd) gt.vertex_enumeration(g_APA_even) Explanation: Vertex enumeration The routine vertex_enumeration computes mixed-action Nash equilibria of a 2-player normal form game by enumeration and matching of vertices of the best response polytopes. For a non-degenerate game input, these are all the Nash equilibria. Internally, scipy.spatial.ConvexHull is used to compute vertex enumeration of the best response polytopes, or equivalently, facet enumeration of their polar polytopes. Then, for each vertex of the polytope for player 0, vertices of the polytope for player 1 are searched to find a completely labeled pair. End of explanation gt.lemke_howson(g_MP) Explanation: support_enumeration and vertex_enumeration the same functionality (i.e., enumeration of Nash equilibria of a two-player game), but the latter seems to run faster than the former. Lemke-Howson The routine lemke_howson implements the Lemke-Howson algorithm (Lemke and Howson 1964), which returns one Nash equilibrium of a two-player normal form game. For the details of the algorithm, see, e.g., von Stengel (2007). Matching Pennies: End of explanation gt.lemke_howson(g_Coo) Explanation: Coordination game: End of explanation gt.lemke_howson(g_Coo, init_pivot=1) Explanation: The initial pivot can be specified by init_pivot, which should be an integer $k$ such that $0 \leq k \leq n_1 + n_2 - 1$ (default to 0), where $0, \ldots, n_1-1$ correspond to player 0's actions, while $n_1 \ldots, n_1+n_2-1$ to player 1's actions. 
End of explanation gt.lemke_howson(g_APA_odd, init_pivot=0) gt.lemke_howson(g_APA_odd, init_pivot=1) Explanation: All-Pay Auction: End of explanation NE, res = gt.lemke_howson(g_APA_odd, full_output=True) res Explanation: Additional information is returned if the option full_output is set True: End of explanation N = 2 c = 200 # 201 actions r = 500 g_APA200 = all_pay_auction(r, c, N) gt.lemke_howson(g_APA200) Explanation: lemke_howson runs fast, with a reasonable time amount for games with up to several hundreds actions. (In fact, this is the only routine among the Nash equilibrium computation routines in the game_theory submodule that scales to large-size games.) For example: End of explanation N = 3 r = 16 c = 5 g_APA3 = all_pay_auction(r, c, N) Explanation: McLennan-Tourky The routine mclennan_tourky computes one approximate Nash equilibrium of an $N$-player normal form game by the fixed-point algorithm of McLennan and Tourky (2006) applied to the best response correspondence Consider the symmetric All-Pay Auction with full dissipation as above, but this time with three players: End of explanation NE = gt.mclennan_tourky(g_APA3) NE Explanation: Run mclennan_tourky: End of explanation g_APA3.is_nash(NE, tol=1e-3) epsilon = 1e-4 NE = gt.mclennan_tourky(g_APA3, epsilon=epsilon) NE g_APA3.is_nash(NE, tol=epsilon) Explanation: This output is an $\varepsilon$-Nash equilibrium of the game, which is a profile of mixed actions $(x^_0, \ldots, x^{N-1})$ such that for all $i$, $u_i(x^_i, x^{-i}) \geq u_i(x_i, x^*_{-i}) - \varepsilon$ for all $x_i$, where the value of $\varepsilon$ is specified by the option epsilon (default to 1e-3). End of explanation NE, res = gt.mclennan_tourky(g_APA3, full_output=True) res Explanation: Additional information is returned by setting the full_output option to True: End of explanation init = ( [1/2, 0, 0, 1/2, 0, 0], [0, 1/2, 0, 0, 1/2, 0], [0, 0, 0, 0, 0, 1] ) # profile of mixed actions gt.mclennan_tourky(g_APA3, init=init) Explanation: For this game, mclennan_tourky returned a symmetric, totally mixed action profile (cf. Rapoport and Amaldoss 2004) with the default initial condition (0, 0, 0) (profile of pure actions). Let's try a different initial condition: End of explanation p0 = ((r-4)/r)**(1/2) p02 = ((r-2)/r)**(1/2) p2 = p02 - p0 p4 = 1 - p02 q1 = (2/r)/p0 q13 = (4/r)/(p02) q3 = q13 - q1 q5 = 1 - q13 a = ([p0, 0, p2, 0, p4, 0], [p0, 0, p2, 0, p4, 0], [0, q1, 0, q3, 0, q5]) a g_APA3.is_nash(a) Explanation: We obtained an asymetric "alternating" mixed action profile. While this is just an approximate Nash equilibrium, it suggests that there is an (exact) Nash equilibrium of the form $((p_0, 0, p_2, 0, p_4, 0), (p_0, 0, p_2, 0, p_4, 0), (0, q_1, 0, q_3, 0, q_5))$. In fact, a simple calculation shows that there is one such that $$ p_0 = \left(\frac{r-4}{r}\right)^{\frac{1}{2}}, p_0 + p_2 = \left(\frac{r-2}{r}\right)^{\frac{1}{2}}, p_4 = 1 - (p_0 + p_2), $$ and $$ q_1 = \frac{2}{r p_0}, q_1 + q_3 = \frac{4}{r (p_0+p_2)}, q_5 = 1 - (q_1 + q_3). $$ To verify: End of explanation
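As a follow-up to the init_pivot discussion above (my addition, illustrative only), one can sweep the initial pivot over all n1 + n2 labels and collect the distinct equilibria that lemke_howson reaches for the odd-cap All-Pay Auction game defined above:

import numpy as np
import quantecon.game_theory as gt

n1, n2 = g_APA_odd.nums_actions   # assumes g_APA_odd from the code above
found = []
for k in range(n1 + n2):
    ne = gt.lemke_howson(g_APA_odd, init_pivot=k)
    is_new = all(
        not all(np.allclose(x, y) for x, y in zip(ne, other)) for other in found
    )
    if is_new:
        found.append(ne)
print(len(found), "distinct equilibria reached by varying init_pivot")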
6,453
Given the following text description, write Python code to implement the functionality described below step by step Description: Eclipse Detection Setup Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab). Step1: As always, let's do imports and initialize a logger and a new Bundle. Step2: Let's just compute the mesh at a single time-point that we know should be during egress. Step3: Native The 'native' eclipse method computes what percentage (by area) of each triangle is visible at the current time. It also determines the centroid of the visible portion of each triangle. Physical quantities (temperatures, intensities, velocities, etc) are computed at the vertices of each triangle, and this centroid is then used to determine the average quantity across the visible portion of the triangle (by assuming a linear gradient across the triangle). Let's plot the visibilities (ratio of the area that is visible) as the color scale, with red being completely hidden and green being completely visible. Step4: Visible Partial The 'visible partial' eclipse method simply determines which triangles are hidden, which are visible, and which are partially visible. It then assigns a visibility of 0.5 to any partially visible triangles - meaning they will contribute half of their intensities when integrated (assume that half of the area is visible). There are no longer any centroids - values are still computed at the vertices but are then averaged to be at the geometric center of EACH triangle. Again, let's plot the visibilities (ratio of the area that is visible) as the color scale, with red being completely hidden and green being completely visible.
Python Code: #!pip install -I "phoebe>=2.3,<2.4" Explanation: Eclipse Detection Setup Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab). End of explanation import phoebe from phoebe import u # units import numpy as np import matplotlib.pyplot as plt phoebe.devel_on() # DEVELOPER MODE REQUIRED FOR VISIBLE_PARTIAL - DON'T USE FOR SCIENCE logger = phoebe.logger() b = phoebe.default_binary() Explanation: As always, let's do imports and initialize a logger and a new Bundle. End of explanation b.add_dataset('mesh', times=[0.05], columns=['visibilities']) Explanation: Let's just compute the mesh at a single time-point that we know should be during egress. End of explanation b.run_compute(eclipse_method='native') afig, mplfig = b.plot(component='primary', fc='visibilities', xlim=(-0.5, 0.25), ylim=(-0.4, 0.4), show=True) Explanation: Native The 'native' eclipse method computes what percentage (by area) of each triangle is visible at the current time. It also determines the centroid of the visible portion of each triangle. Physical quantities (temperatures, intensities, velocities, etc) are computed at the vertices of each triangle, and this centroid is then used to determine the average quantity across the visible portion of the triangle (by assuming a linear gradient across the triangle). Let's plot the visibilities (ratio of the area that is visible) as the color scale, with red being completely hidden and green being completely visible. End of explanation b.run_compute(eclipse_method='visible_partial') afig, mplfig = b.plot(component='primary', fc='visibilities', xlim=(-0.5, 0.25), ylim=(-0.4, 0.4), show=True) Explanation: Visible Partial The 'visible partial' eclipse method simply determines which triangles are hidden, which are visible, and which are partially visible. It then assigns a visibility of 0.5 to any partially visible triangles - meaning they will contribute half of their intensities when integrated (assume that half of the area is visible). There are no longer any centroids - values are still computed at the vertices but are then averaged to be at the geometric center of EACH triangle. Again, let's plot the visibilities (ratio of the area that is visible) as the color scale, with red being completely hidden and green being completely visible. End of explanation
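A hedged numerical comparison of the two eclipse methods (my addition, not from the original notebook): pull the visibilities mesh column out of the model and compare its mean for the primary star. The get_value filter below is an assumption about how this particular bundle exposes the column; the exact twigs (dataset, time) may need adjusting if multiple parameters match:

import numpy as np

for method in ['native', 'visible_partial']:
    b.run_compute(eclipse_method=method)
    # assumed filter; refine with dataset/time qualifiers if needed
    vis = b.get_value(qualifier='visibilities', component='primary', context='model')
    print(method, 'mean visibility of primary:', np.mean(vis))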
6,454
Given the following text description, write Python code to implement the functionality described below step by step Description: Sveučilište u Zagrebu<br> Fakultet elektrotehnike i računarstva Strojno učenje <a href="http Step1: Sadržaj Step2: Q Step3: Q Step4: Polunaivan klasifikator* Ideja Ako, na primjer, ne vrijedi $x_2\bot x_3|\mathcal{C}_j$, onda je bolje umjesto Step5: Uzajamnu informaciju lako možemo proširiti na uvjetnu uzajamnu informaciju (engl. conditional mutual information) Step6: NB Step7: Višedimenzijski slučaj Izglednost klase Step8: Granica između klasa $\mathcal{C}_1$ i $\mathcal{C}_2$ je Step9: Granca je nelinearna jer postoji član koji kvadratno ovisi o $\mathbf{x}$ (i taj član ne iščezava kada računamo razliku $h_1(x)-h_2(x)$) Step10: 2. pojednostavljenje Step11: 3. pojednostavljenje
Python Code: import scipy as sp import scipy.stats as stats import matplotlib.pyplot as plt import pandas as pd %pylab inline Explanation: Sveučilište u Zagrebu<br> Fakultet elektrotehnike i računarstva Strojno učenje <a href="http://www.fer.unizg.hr/predmet/su">http://www.fer.unizg.hr/predmet/su</a> Ak. god. 2015./2016. Bilježnica 4: Bayesov klasifikator (c) 2015 Jan Šnajder <i>Verzija: 0.7 (2015-10-31)</i> End of explanation q101 = pd.read_csv("http://www.fer.unizg.hr/_download/repository/questions101-2014.csv", comment='#') q101[:20] Explanation: Sadržaj: Bayesovska klasifikacija Naivan Bayesov klasifikator Primjer: 101 Questions Polunaivan Bayesov klasifikator* Bayesov klasifikator za kontinuirane značajke Bayesov klasifikator: komponente algoritma Sažetak Bayesovska klasfikacija Bayesovo pravilo $$ P(\mathcal{C}j|\mathbf{x}) = \frac{P(\mathbf{x},\mathcal{C}_j)}{P(\mathbf{x})} = \frac{p(\mathbf{x}|\mathcal{C}_j) P(\mathcal{C}_j)}{p(\mathbf{x})} = \frac{p(\mathbf{x}|\mathcal{C}_j)P(\mathcal{C}_j)}{\sum{k=1}^K p(\mathbf{x}|\mathcal{C}_k)P(\mathcal{C}_k)} $$ Apriorna vjerojatnost klase $\mathcal{C}_j$: Binarna ($K=2)$ klasifikacija: Bernoullijeva razdioba Višeklasna ($K>2$) klasifikacija: kategorička razdioba Izglednost klase $p(\mathbf{x}|\mathcal{C}_j)$: Diskretne značajke: Bernoullijeva/kategorička razdioba Kontinuirane značajke: Gaussova razdioba Ovo je parametarski i generativni model Q: Zašto? Klasifikacijska odluka MAP-hipoteza: \begin{align} h : \mathcal{X} &\to {\mathcal{C}1, \mathcal{C}_2,\dots, \mathcal{C}_K}\ h(\mathbf{x})&=\displaystyle\mathrm{argmax}{\mathcal{C}_k}\ p(\mathbf{x}|\mathcal{C}_k) P(\mathcal{C}_k) \end{align} Pouzdanost klasifikacije u $\mathcal{C}_j$: \begin{align} h_j : \mathcal{X} &\to [0,\infty)\ h_j(\mathbf{x})&=p(\mathbf{x}|\mathcal{C}_k) P(\mathcal{C}_k) \end{align} Vjerojatnost klasifikacije u $\mathcal{C}_j$: \begin{align} h_j : \mathcal{X} &\to [0,1]\ h_j(\mathbf{x})&=P(\mathcal{C}_k|\mathbf{x}) \end{align} Primjer $P(\mathcal{C}_1) = P(\mathcal{C}_2)=0.3$, $P(\mathcal{C}_3)=0.4$ Za neki primjer $\mathbf{x}$ imamo: $p(\mathbf{x}|\mathcal{C}_1)=0.9$, $p(\mathbf{x}|\mathcal{C}_2)=p(\mathbf{x}|\mathcal{C}_3)=0.4$ U koju klasu klasificiramo $\mathbf{x}$? 
Minimizacija pogreške klasifikacije* Pretpostavimo da primjeri u stvarnosti dolaze iz dva područja: $\mathcal{R}_1={\mathbf{x}\in\mathcal{X}\mid h_1(\mathbf{x})=1}$ $\mathcal{R}_2=\mathcal{X}\setminus\mathcal{R}_1$ Vjerojatnost pogrešne klasifikacije: \begin{align} P(\mathbf{x}\in\mathcal{R}1,\mathcal{C}_2) &+ P(\mathcal{x}\in\mathcal{R}_2,\mathcal{C}_1)\ \int{\mathbf{x}\in\mathcal{R}1} p(\mathbf{x},\mathcal{C}_2)\,\mathrm{d}\mathbf{x} &+ \int{\mathbf{x}\in\mathcal{R}_2} p(\mathbf{x},\mathcal{C}_1)\,\mathrm{d}\mathbf{x} \end{align} [Skica] Pogreška je minimizirana kada $\mathcal{C}j = \mathrm{argmax}{\mathcal{C}\in{\mathcal{C_1},\mathcal{C_2}}} P(\mathbf{x},\mathcal{C}_j) $ Alternativa: Minimizacija rizika* $L_{kj}$ - gubitak uslijed pogrešne klasifikacije primjera iz klase $\mathcal{C}_k$ u klasu $\mathcal{C}_j$ Očekivani gubitak (funkcija rizika): $$ \mathbb{E}[L] = \sum_{k=1}^K\sum_{j=1}^K \int_{\mathbf{x}\in\mathcal{R}j} L{kj}\,p(\mathbf{x},\mathcal{C}_k)\,\mathrm{d}\mathbf{x} $$ Očekivani rizik pri klasifikaciji $\mathbf{x}$ u $\mathcal{C}_j$: $$ R(\mathcal{C}j|\mathbf{x}) = \sum{k=1}^K L_{kj}P(\mathcal{C}_k|\mathbf{x}) $$ Optimalna klasifikacijska odluka: $$ h(\mathbf{x}) = \mathrm{argmin}_{\mathcal{C}_k} R(\mathcal{C}_k|\mathbf{x}) $$ Primjer $P(\mathcal{C}_1|\mathbf{x}) = 0.25$, $P(\mathcal{C}_2|\mathbf{x}) = 0.6$, $P(\mathcal{C}_3|\mathbf{x}) = 0.15$ $$ L = {\small \begin{pmatrix} 0 & 1 & 5 \ 1 & 0 & 5 \ 10 & 100 & 0 \end{pmatrix}} $$ Naivan Bayesov klasifikator $\mathcal{D}={(\mathbf{x}^{(i)},y^{(i)})}_{i=1}^N$ $y^{(i)}\in{\mathcal{C}_1,\dots,\mathcal{C}_K}$ Model: \begin{align} P(\mathcal{C}j|x_1,\dots,x_n)\ &\propto\ P(x_1,\dots,x_n|\mathcal{C}_j)P(\mathcal{C}_j)\ h(\mathbf{x}=x_1,\dots,x_n) &= \mathrm{argmax}{j}\ P(\mathbf{x}=x_1,\dots,x_n|y=\mathcal{C}_j)P(y = \mathcal{C}_j) \end{align} ML-procjena za $P(y)$ (kategorička razdioba): $$ \hat{P}(\mathcal{C}j)=\frac{1}{N}\sum{i=1}^N\mathbf{1}{y^{(i)}=\mathcal{C}_j} = \frac{N_j}{N} $$ Q: Broj parametara za $\hat{P}(\mathcal{C}_j)$, $j=1,\dots,K$ ? Procjena parametara za $P(x_1,\dots,x_n|\mathcal{C}_j)$? Tretirati $\mathbf{x} = (x_1,\dots,x_n)$ kao kategoričku varijablu (njezine vrijednosti su sve kombinacije vrijednosti $x_i$) ? Broj parametara? Generalizacija? Pravilo lanca (uz uvjetnu varijablu $\mathcal{C}_j$): \begin{equation} P(x_1,\dots,x_n|\mathcal{C}j) = \prod{k=1}^n P(x_k|x_1,\dots,x_{k-1},\mathcal{C}_j) \end{equation} Pretpostavka: $\color{red}{x_i\bot x_k|\mathcal{C}_j\ (i\neq k)} \ \Leftrightarrow \ \color{red}{P(x_i|x_k,\mathcal{C}_j) = P(x_i|\mathcal{C}_j)}$ \begin{equation} P(x_1,\dots,x_n|\mathcal{C}j) = \prod{k=1}^n P(x_k|x_1,\dots,x_{k-1},\mathcal{C}j) = \prod{k=1}^n P(x_k|\mathcal{C}_j) \end{equation} Naivan Bayesov klasifikator: $$ h(x_1,\dots,x_n) = \mathrm{argmax}j\ P(\mathcal{C}_j)\prod{k=1}^n P(x_k|\mathcal{C}_j) $$ ML-procjena: $$ \hat{P}(x_k|\mathcal{C}j)=\frac{\sum{i=1}^N\mathbf{1}\big{x^{(i)}k=x_k \land y^{(i)}=\mathcal{C}_j\big}} {\sum{i=1}^N \mathbf{1}{y^{(i)} = \mathcal{C}j}} = \frac{N{kj}}{N_j} $$ Laplaceov procjenitelj: $$ \hat{P}(x_k|\mathcal{C}j)=\frac{\sum{i=1}^N\mathbf{1}\big{x^{(i)}k=x_k \land y^{(i)}=\mathcal{C}_j\big} + \lambda} {\sum{i=1}^N \mathbf{1}{y^{(i)} = \mathcal{C}j} + \lambda K_k} = \frac{N{kj}+\lambda}{N_j+\lambda K_k} $$ Broj parametara: $\sum_{k=1}^n(K_k-1)K$ Binarne značajke: $nK$ Uvjetna nezavisnost? Vrijedi li općenito nezavisnost $x_i\bot x_k|\mathcal{C}_j\ (i\neq k)$? 
Primjer: Klasifikacija teksta Kategorija $\mathcal{C} = \text{Sport}$ $D$: tekstni dokument Značajke: $x_1=\mathbf{1}{\text{Zagreb}\in D}$, $x_2 = \mathbf{1}{\text{lopta}\in D}$, $x_3=\mathbf{1}{\text{gol}\in D}$ Q: $x_1 \bot x_2 | \mathcal{C}$ ? Q: $x_2 \bot x_3 | \mathcal{C}$ ? Primjer: Dobar SF-film $$ \begin{array}{r c c c c c } \hline & x_1 & x_2 & x_3 & x_4 & y\ i & \text{Mjesto radnje} & \text{Glavni lik} & \text{Vrijeme radnje} & \text{Vanzemaljci} & \text{Dobar film}\ \hline 1 & \text{svemir} & \text{znanstvenica} & \text{sadašnjost} & \text{da} & \text{ne} \ 2 & \text{Zemlja} & \text{kriminalac} & \text{budućnost} & \text{ne} & \text{ne} \ 3 & \text{drugdje} & \text{dijete} & \text{prošlost} & \text{da} & \text{ne} \ 4 & \text{svemir} & \text{znanstvenica} & \text{sadašnjost} & \text{ne} & \text{da} \ 5 & \text{svemir} & \text{kriminalac} & \text{prošlost} & \text{ne} & \text{ne} \ 6 & \text{Zemlja} & \text{dijete} & \text{prošlost} & \text{da} & \text{da} \ 7 & \text{Zemlja} & \text{policajac} & \text{budućnost} & \text{da} & \text{ne} \ 8 & \text{svemir} & \text{policajac} & \text{budućnost} & \text{ne} & \text{da} \ \hline \end{array} $$ Q: Koja je klasifikacija novog primjera $\mathbf{x} = (\text{svemir}, \text{dijete}, \text{sadašnjost}, \text{da})$ ? Primjer: 101 Questions End of explanation q101[['Q7','Q101','Q97','Q4']][:20] X = q101[['Q7','Q101','Q97']][:20].as_matrix() y = q101['Q4'][:20].as_matrix() # Apriorna vjerojatnost klase: P(C_j) def class_prior(y, label): N = len(y) return len(y[y==label]) / float(len(y)) # Izglednost klase: P(x_i|C_j) def class_likelihood(X, y, feature_ix, value, label): N = len(X) y_ix = y==label Nj = len(y[y_ix]) Nkj = len(X[sp.logical_and(y_ix, X[:,feature_ix]==value)]) return (Nkj + 1) / (float(Nj) + 2) # Laplace smoothed p_Psi = class_prior(y, 'Psi') p_Psi p_Macke = class_prior(y, 'Mačke') p_Macke p_Messi_Psi = class_likelihood(X, y, 0, 'Messi', 'Psi') p_Messi_Psi p_Ronaldo_Psi = class_likelihood(X, y, 0, 'Ronaldo', 'Psi') p_Ronaldo_Psi Explanation: Q: Voli li onaj tko preferira Messija, Batmana i Tenisice više pse ili mačke? End of explanation class_prior(y, 'Psi') \ * class_likelihood(X, y, 0, 'Messi', 'Psi') \ * class_likelihood(X, y, 1, 'Batman', 'Psi') \ * class_likelihood(X, y, 2, 'Tenisice', 'Psi') \ class_prior(y, 'Mačke') \ * class_likelihood(X, y, 0, 'Messi', 'Mačke') \ * class_likelihood(X, y, 1, 'Batman', 'Mačke') \ * class_likelihood(X, y, 2, 'Tenisice', 'Mačke') \ Explanation: Q: Klasifikacija za $\mathbf{x} = (\text{Messi}, \text{Batman}, \text{Tenisice})$ End of explanation from sklearn.metrics import mutual_info_score X = stats.bernoulli.rvs(0.5, size=100) Y = stats.bernoulli.rvs(0.2, size=100) mutual_info_score(X, Y) mutual_info_score(X, X) X = stats.bernoulli.rvs(0.5, size=100) Y = [(sp.random.randint(2) if x==1 else 0) for x in X ] mutual_info_score(X, Y) Explanation: Polunaivan klasifikator* Ideja Ako, na primjer, ne vrijedi $x_2\bot x_3|\mathcal{C}_j$, onda je bolje umjesto: $$ P(\mathcal{C}_j|x_1,x_2,x_3)\ \propto\ P(x_1|\mathcal{C}_j)P(\color{red}{x_2}|\mathcal{C}_j)P(\color{red}{x_3}|\mathcal{C}_j)P(\mathcal{C}_j) $$ faktorizirati kao: $$ P(\mathcal{C}_j|x_1,x_2,x_3)\ \propto\ P(x_1|\mathcal{C}_j)P(\color{red}{x_2,x_3}|\mathcal{C}_j)P(\mathcal{C}_j) $$ što je jednako: $$ P(\mathcal{C}_j|x_1,x_2,x_3)\ \propto\ P(x_1|\mathcal{C}_j)P(x_2|\mathcal{C}_j)P(\color{red}{x_3|x_2},\mathcal{C}_j)P(\mathcal{C}_j) $$ Q: Prednosti? Q: Broj parametara? Koje varijable združiti? 
Problem pretraživanja prostora stanja: $$ \begin{align} &{ {a}, {b}, {c} }\ &{ {a}, {b, c} }\ &{ {b}, {a, c} }\ &{ {c}, {a, b} }\ &{ {a, b, c} } \end{align} $$ Bellov broj: $B_3=5, B_{4} = 15, B_{5} = 52, \dots, B_{10} = 115975, \dots$ Treba nam heurističko pretraživanje koje će naći optimalno združivanje (broj stanja = broj particija) Kriterij združivanja varijabli? Dvije mogućnosti: Mjerimo zavisnost varijabli i združujemo one varijable koje su najviše zavisne Algoritmi TAN i $k$-DB Unakrsna provjera: Isprobavamo točnost modela na skupu za provjeru i združujemo one varijable koje povećavaju točnost Algoritam FSSJ Q: Veza s odabirom modela? Mjerenje zavisnosti varijabli: Uzajamna informacija Mjera uzajamne informacije (uzajamnog sadržaja informacije) (engl. mutual information) Entropija $$ H(P) = -\sum_x P(x) \ln P(x) $$ Unakrsna entropija: $$ H(P,Q) = -\sum_x P(x) \ln Q(x) $$ Relativa entropija $P(x)$ u odnosu na $Q(x)$: $$ \begin{align} H(P,Q) - H(P) =& -\sum_x P(x)\ln Q(x) - \big(-\sum_x P(x)\ln P(x) \big) =\ &-\sum_x P(x)\ln Q(x) + \sum_x P(x)\ln P(x) =\ &-\sum_x P(x)\ln \frac{P(x)}{Q(x)} = \color{red}{D_{\mathrm{KL}}(P||Q)}\ \end{align} $$ $\Rightarrow$ Kullback-Leiblerova divergencija Uzajamna informacija ili uzajamni sadržaj informacije (engl. mutual information): $$ I(x,y) = D_\mathrm{KL}\big(P(x,y) || P(x) P(y)\big) = \sum_{x,y} P(x,y) \ln\frac{P(x,y)}{P(x)P(y)} $$ $I(x, y) = 0$ akko su $x$ i $y$ nezavisne varijable, inače $I(x,y) > 0$ End of explanation likelihood_c1 = stats.norm(110, 5) likelihood_c2 = stats.norm(150, 20) likelihood_c3 = stats.norm(180, 10) xs = linspace(70, 200, 200) plt.plot(xs, likelihood_c1.pdf(xs), label='p(x|C_1)') plt.plot(xs, likelihood_c2.pdf(xs), label='p(x|C_2)') plt.plot(xs, likelihood_c3.pdf(xs), label='p(x|C_3)') plt.legend() plt.show() Explanation: Uzajamnu informaciju lako možemo proširiti na uvjetnu uzajamnu informaciju (engl. 
conditional mutual information): $$ I(x,y\color{red}{|z}) = \sum_z P(z_k) I(x,y|z) = \color{red}{\sum_z}\sum_x\sum_y P(x,y,\color{red}{z}) \ln\frac{P(x,y\color{red}{|z})}{P(x\color{red}{|z})P(y\color{red}{|z})} $$ Bayesov klasifikator za kontinuirane značajke Jednodimenzijski slučaj Izglednost klase $p(\mathbf{x}|\mathcal{C}_j)$ modeliramo Gaussovom razdiobom: $$ \mathbf{x}|\mathcal{C}_j \sim \mathcal{N}(\mu_j,\sigma^2_j) $$ \begin{equation} p(x|\mathcal{C}_j) = \frac{1}{\sqrt{2\pi}\sigma_j}\exp\Big{-\frac{(x-\mu_j)^2}{2\sigma^2_j}\Big} \end{equation} End of explanation likelihood_c1 = stats.norm(100, 5) likelihood_c2 = stats.norm(150, 20) plt.plot(xs, likelihood_c1.pdf(xs), label='p(x|C_1)') plt.plot(xs, likelihood_c2.pdf(xs), label='p(x|C_2)') plt.legend() plt.show() p_c1 = 0.3 p_c2 = 0.7 def joint_x_c1(x) : return likelihood_c1.pdf(x) * p_c1 def joint_x_c2(x) : return likelihood_c2.pdf(x) * p_c2 plt.plot(xs, joint_x_c1(xs), label='p(x, C_1)') plt.plot(xs, joint_x_c2(xs), label='p(x, C_2)') plt.legend() plt.show() def p_x(x) : return joint_x_c1(x) + joint_x_c2(x) plt.plot(xs, p_x(xs), label='p(x)') plt.legend() plt.show() def posterior_c1(x) : return joint_x_c1(x) / p_x(x) def posterior_c2(x) : return joint_x_c2(x) / p_x(x) plt.plot(xs, posterior_c1(xs), label='p(C_1|x)') plt.plot(xs, posterior_c2(xs), label='p(C_2|x)') plt.legend() plt.show() Explanation: NB: Pretpostavljamo da je razdioba primjera unutar svake klase unimodalna (modelirana jednom Gaussovom razdiobom) Inače nam treba mješavina Gaussovih razdiobi (GMM) Model: $$ h_j(x) = p(x,\mathcal{C}_j) = p(x|\mathcal{C}_j)P(\mathcal{C}_j) $$ Radi matematičke jednostavnosti, prelazimo u logaritamsku domenu: \begin{align*} h_j(x) & = \ln p(x|\mathcal{C}_j) + \ln P(\mathcal{C}_j)\ &= \color{gray}{-\frac{1}{2}\ln 2\pi} \ln\sigma_j \frac{(x-\mu_j)^2}{2\sigma^2_j} + \ln P(\mathcal{C}_j)\ \end{align*} Uklanajnje konstante (ne utječe na maksimizaciju): $$ h_j(x|\boldsymbol{\theta}_j) = - \ln\hat{\sigma}_j \frac{(x-\hat{\mu}_j)^2}{2\hat{\sigma}^2_j} + \ln\hat{P}(\mathcal{C}_j) $$ gdje je vektor parametara jednak $$ \boldsymbol{\theta}_j=(\mu_j, \sigma_j, P(\mathcal{C}_j)) $$ ML-procjene parametara: \begin{align} \hat{\mu}j &= \frac{1}{N_j}\sum{i=1}^N \mathbf{1}{y^{(i)} = \mathcal{C}j} x^{(i)}\ \hat{\sigma}^2_j &= \frac{1}{N_j}\sum{i=1}^N\mathbf{1}{y^{(i)} = \mathcal{C}_j}(x^{(i)}-\hat{\mu}_j)^2 \ \hat{P}(\mathcal{C}_j) &= \frac{N_j}{N}\ \end{align} End of explanation mu_1 = [-2, 1] mu_2 = [2, 0] covm_1 = sp.array([[1, 1], [1, 3]]) covm_2 = sp.array([[2, -0.5], [-0.5, 1]]) p_c1 = 0.4 p_c2 = 0.6 likelihood_c1 = stats.multivariate_normal(mu_1, covm_1) likelihood_c2 = stats.multivariate_normal(mu_2, covm_2) x = np.linspace(-5, 5) y = np.linspace(-5, 5) X, Y = np.meshgrid(x, y) XY = np.dstack((X,Y)) plt.contour(X, Y, likelihood_c1.pdf(XY) * p_c1) plt.contour(X, Y, likelihood_c2.pdf(XY) * p_c2); Explanation: Višedimenzijski slučaj Izglednost klase: \begin{equation} p(\mathbf{x}|\mathcal{C}_j) = \frac{1}{(2\pi)^{n/2}|\boldsymbol{\Sigma}_j|^{1/2}} \exp\Big{-\frac{1}{2}(\mathbf{x}^{(i)}-\boldsymbol{\mu}_j)^{\mathrm{T}}\boldsymbol{\Sigma}_j^{-1}(\mathbf{x}^{(i)}-\boldsymbol{\mu}_j)\Big} \end{equation} Model: $$ \begin{align*} h_j(\mathbf{x}) &= \ln p(\mathbf{x}|\mathcal{C}_j) + \ln P(\mathcal{C}_j)\ &= \color{gray}{-\frac{n}{2}\ln 2\pi} \frac{1}{2}\ln|\boldsymbol{\Sigma}_j| \frac{1}{2}(\mathbf{x}-\boldsymbol{\mu}_j)^\mathrm{T}\boldsymbol{\Sigma}_j^{-1}(\mathbf{x}-\boldsymbol{\mu}_j) + \ln P(\mathcal{C}_j)\ &\Rightarrow 
\frac{1}{2}\ln|\boldsymbol{\Sigma}_j| \frac{1}{2}(\mathbf{x}-\boldsymbol{\mu}_j)^\mathrm{T}\boldsymbol{\Sigma}_j^{-1}(\mathbf{x}-\boldsymbol{\mu}_j) + \ln P(\mathcal{C}_j)\ \end{align*} $$ Interpretacija za $\boldsymbol{\mu}$ i $\boldsymbol{\Sigma}$: $\boldsymbol{\mu}_j$ - prototipna vrijednost primjera u klasi $\mathcal{C}_j$ $\boldsymbol{\Sigma}_j$ - količina šuma i korelacija između izvora šuma unutar $\mathcal{C}_j$ Q: Broj parametara? ML-procjene parametara: \begin{align} \hat{\boldsymbol{\mu}}j &= \frac{1}{N_j}\sum{i=1}^N\mathbf{1}{y^{(i)}=\mathcal{C}j}\mathbf{x}^{(i)}\ \hat{\boldsymbol{\Sigma}}_j &= \frac{1}{N_j}\sum{i=1}^N \mathbf{1}{y^{(i)}=\mathcal{C}_j}(\mathbf{x}^{(i)}-\hat{\boldsymbol{\mu}}_j)(\mathbf{x}^{(i)}-\hat{\boldsymbol{\mu}}_j)^\mathrm{T}\ \hat{P}(\mathcal{C}_j) &= \frac{N_j}{N} \end{align} O kovarijacijskoj matrici \begin{equation} \boldsymbol{\Sigma} = \begin{pmatrix} \mathrm{Var}(x_1) & \mathrm{Cov}(x_1, x_2) & \dots & \mathrm{Cov}(x_1,x_n)\ \mathrm{Cov}(x_2, x_1) & \mathrm{Var}(x_2) & \dots & \mathrm{Cov}(x_2,x_n)\ \vdots & \vdots & \ddots & \vdots \ \mathrm{Cov}(x_n,x_1) & \mathrm{Cov}(x_n,x_2) & \dots & \mathrm{Var}(x_n)\ \end{pmatrix} \end{equation} $\boldsymbol{\Sigma}$ je simetrična $\boldsymbol{\Sigma}$ uvijek pozitivno semidefinitna $\Delta^2 = \mathbf{x}^{\mathrm{T}}\boldsymbol{\Sigma}\mathbf{x}\geq 0$ za Mahalanobisovu udaljenost vrijedi $\Delta\geq 0$ Ali, da bi PDF bila dobro definirana, $\boldsymbol{\Sigma}$ mora biti pozitivno definitna: $\Delta^2 = \mathbf{x}^\mathrm{T}\boldsymbol{\Sigma}\mathbf{x} > 0$ za ne-nul vektor $\mathbf{x}$ $\boldsymbol{\Sigma}$ je pozitivno definitna $\Rightarrow$ $\boldsymbol{\Sigma}$ je nesingularna: $|\boldsymbol{\Sigma}|>0$ i postoji $\boldsymbol{\Sigma}^{-1}$ (obrat ne vrijedi!) Ako $\boldsymbol{\Sigma}$ nije pozitivno definitna, najčešći uzroci su $\mathrm{Var}(x_i)=0$ (beskorisna značajka) $\mathrm{Cov}(x_i,x_j)=1$ (redundantan par značajki) End of explanation plt.contour(X, Y, likelihood_c1.pdf(XY) * p_c1, cmap='gray_r') plt.contour(X, Y, likelihood_c2.pdf(XY) * p_c2, cmap='gray_r') plt.contour(X, Y, likelihood_c1.pdf(XY) * p_c1 - likelihood_c2.pdf(XY) * p_c2, levels=[0], colors='r', linewidths=2); Explanation: Granica između klasa $\mathcal{C}_1$ i $\mathcal{C}_2$ je: $$ h_1(\mathbf{x}) = h_2(\mathbf{x}) $$ tj. 
$$ h_1(\mathbf{x}) - h_2(\mathbf{x}) = 0 $$ End of explanation mu_1 = [-2, 1] mu_2 = [2, 0] covm_1 = sp.array([[1, 1], [1, 3]]) covm_2 = sp.array([[2, -0.5], [-0.5, 1]]) p_c1 = 0.4 p_c2 = 0.6 covm_shared = (p_c1 * covm_1 + p_c2 * covm_2) / 2 likelihood_c1 = stats.multivariate_normal(mu_1, covm_shared) likelihood_c2 = stats.multivariate_normal(mu_2, covm_shared) plt.contour(X, Y, likelihood_c1.pdf(XY) * p_c1, cmap='gray_r') plt.contour(X, Y, likelihood_c2.pdf(XY) * p_c2, cmap='gray_r') plt.contour(X, Y, likelihood_c1.pdf(XY) * p_c1 - likelihood_c2.pdf(XY) * p_c2, levels=[0], colors='r', linewidths=2); Explanation: Granca je nelinearna jer postoji član koji kvadratno ovisi o $\mathbf{x}$ (i taj član ne iščezava kada računamo razliku $h_1(x)-h_2(x)$): \begin{align} h_j(\mathbf{x}) &= \color{gray}{-\frac{n}{2}\ln 2\pi} - \frac{1}{2}\ln|\boldsymbol{\Sigma}_j| - \frac{1}{2}(\mathbf{x}-\boldsymbol{\mu}_j)^\mathrm{T}\boldsymbol{\Sigma}_j^{-1}(\mathbf{x}-\mathbf{\mu}_j) + \ln P(\mathcal{C}_j)\ &\Rightarrow - \frac{1}{2}\ln|\boldsymbol{\Sigma}_j| - \frac{1}{2}\big(\color{red}{\mathbf{x}^\mathrm{T}\boldsymbol{\Sigma}_j^{-1}\mathbf{x}} -2\mathbf{x}^\mathrm{T}\boldsymbol{\Sigma}_j^{-1}\boldsymbol{\mu}_j +\boldsymbol{\mu}_j^\mathrm{T}\boldsymbol{\Sigma}_j^{-1}\boldsymbol{\mu}_j\big) + \ln P(\mathcal{C}_j) \end{align} Kvadratni model ima previše parametara: $\mathcal{O}(n^2)$ Pojednostavljenja $\Rightarrow$ dodatne induktivne pretpostavke? 1. pojednostavljenje: dijeljena kovarijacijska matrica $$ \hat{\boldsymbol{\Sigma}} = \sum_j \hat{P}(\mathcal{C}_j)\hat{\boldsymbol{\Sigma}}_j $$ \begin{align} h_j(\mathbf{x}) &= \color{gray}{- \frac{1}{2}\ln|\boldsymbol{\Sigma}|} - \frac{1}{2}(\color{gray}{\mathbf{x}^\mathrm{T}\boldsymbol{\Sigma}^{-1}\mathbf{x}} -2\mathbf{x}^\mathrm{T}\boldsymbol{\Sigma}^{-1}\boldsymbol{\mu}_j +\boldsymbol{\mu}_j^\mathrm{T}\boldsymbol{\Sigma}^{-1}\boldsymbol{\mu}_j) + \ln P(\mathcal{C}_j)\ &\Rightarrow \mathbf{x}^\mathrm{T}\boldsymbol{\Sigma}^{-1}\boldsymbol{\mu}_j -\frac{1}{2}\boldsymbol{\mu}_j^\mathrm{T}\boldsymbol{\Sigma}^{-1}\boldsymbol{\mu}_j + \ln P(\mathcal{C}_j) \end{align} Član $\mathbf{x}^\mathrm{T}\boldsymbol{\Sigma}^{-1}\mathbf{x}$ isti je za svaki $h_j$, pa iščezava kada računamo granicu između klasa Dakle, dobivamo linearan model! Broj parametara? End of explanation mu_1 = [-2, 1] mu_2 = [2, 0] p_c1 = 0.4 p_c2 = 0.6 covm_shared_diagonal = [[2,0],[0,1]] likelihood_c1 = stats.multivariate_normal(mu_1, covm_shared_diagonal) likelihood_c2 = stats.multivariate_normal(mu_2, covm_shared_diagonal) plt.contour(X, Y, likelihood_c1.pdf(XY) * p_c1, cmap='gray_r') plt.contour(X, Y, likelihood_c2.pdf(XY) * p_c2, cmap='gray_r') plt.contour(X, Y, likelihood_c1.pdf(XY) * p_c1 - likelihood_c2.pdf(XY) * p_c2, levels=[0], colors='r', linewidths=2); Explanation: 2. 
pojednostavljenje: dijagonalna kovarijacijska matrica
$$ \boldsymbol{\Sigma} = \mathrm{diag}(\sigma_i^2) \quad \Rightarrow \quad |\boldsymbol{\Sigma}|=\prod_i\sigma_i^2,\quad \boldsymbol{\Sigma}^{-1}=\mathrm{diag}(1/\sigma_i^2) $$
Izglednost klase:
\begin{align}
p(\mathbf{x}|\mathcal{C}_j) &= \frac{1}{(2\pi)^{n/2}|\boldsymbol{\Sigma}|^{1/2}} \exp\Big\{-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu}_j)^\mathrm{T}\boldsymbol{\Sigma}^{-1}(\mathbf{x}-\boldsymbol{\mu}_j)\Big\}\\
&= \frac{1}{(2\pi)^{n/2}\color{red}{\prod_{i=1}^n\sigma_i}} \exp\Big\{-\frac{1}{2}\sum_{i=1}^n\Big(\frac{x_i-\mu_{ij}}{\color{red}{\sigma_i}}\Big)^2\Big\}\\
&= \prod_{i=1}^n \frac{1}{\sqrt{2\pi}\sigma_i} \exp\Big\{-\frac{1}{2}\Big(\frac{x_i-\mu_{ij}}{\sigma_i}\Big)^2\Big\} = \prod_{i=1}^n\mathcal{N}(\mu_{ij},\sigma_i^2)
\end{align}
Dobili smo umnožak univarijatnih Gaussovih distribucija
NB: Ovo je naivan Bayesov klasifikator (za kontinuirane značajke)!
$x_i\bot x_k|\mathcal{C}_j\ \Rightarrow\ \mathrm{Cov}(x_i|\mathcal{C}_j, x_k|\mathcal{C}_j)=0$
$p(\mathbf{x}|\mathcal{C}_j) = \prod_i p(x_i|\mathcal{C}_j)$
Model:
\begin{align}
h_j(\mathbf{x}) &= \ln p(\mathbf{x}|\mathcal{C}_j) + \ln P(\mathcal{C}_j)\\
&\Rightarrow -\frac{1}{2}\sum_{i=1}^n\Big(\frac{x_i-\mu_{ij}}{\sigma_i}\Big)^2 + \ln P(\mathcal{C}_j)
\end{align}
NB: Računamo normirane euklidske udaljenosti (normirane sa $\sigma_i$)
Q: Broj parametara?
End of explanation
mu_1 = [-2, 1]
mu_2 = [2, 0]
p_c1 = 0.4
p_c2 = 0.6
covm_shared_diagonal = [[1, 0], [0, 1]]
likelihood_c1 = stats.multivariate_normal(mu_1, covm_shared_diagonal)
likelihood_c2 = stats.multivariate_normal(mu_2, covm_shared_diagonal)
plt.contour(X, Y, likelihood_c1.pdf(XY) * p_c1, cmap='gray_r')
plt.contour(X, Y, likelihood_c2.pdf(XY) * p_c2, cmap='gray_r')
plt.contour(X, Y, likelihood_c1.pdf(XY) * p_c1 - likelihood_c2.pdf(XY) * p_c2, levels=[0], colors='r', linewidths=2);
Explanation: 3. pojednostavljenje: izotropna kovarijacijska matrica
$$ \boldsymbol{\Sigma}=\sigma^2\mathbf{I} $$
\begin{equation}
h_j(\mathbf{x}) = -\frac{1}{2\sigma^2}\sum_{i=1}^n(x_i-\mu_{ij})^2 + \ln P(\mathcal{C}_j)
\end{equation}
Broj parametara?
End of explanation
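To make the isotropic model concrete, here is a small added sketch (not part of the original notebook) that evaluates the discriminant $h_j(\mathbf{x}) = -\frac{1}{2\sigma^2}\sum_i (x_i-\mu_{ij})^2 + \ln P(\mathcal{C}_j)$ for a hypothetical test point, reusing the means and priors from the plotting code above and assuming $\sigma^2 = 1$ (the identity covariance used there):

import numpy as np

# Added sketch: isotropic Gaussian discriminant for the two classes plotted above.
mu = np.array([[-2.0, 1.0],    # mu_1
               [ 2.0, 0.0]])   # mu_2
priors = np.array([0.4, 0.6])  # P(C_1), P(C_2)
sigma2 = 1.0                   # assumed isotropic variance, matching [[1,0],[0,1]] above

def h_isotropic(x):
    # h_j(x) = -||x - mu_j||^2 / (2 sigma^2) + ln P(C_j)
    return -np.sum((x - mu) ** 2, axis=1) / (2 * sigma2) + np.log(priors)

x_new = np.array([0.0, 0.5])   # hypothetical test point, equidistant from both means
scores = h_isotropic(x_new)
print(scores, "-> C_%d" % (np.argmax(scores) + 1))   # the larger prior decides here

The class with the largest discriminant value wins, which is equivalent to choosing the nearest mean after correcting for the log-priors.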
Given the following text description, write Python code to implement the functionality described below step by step Description: Deploy a BigQuery ML user churn propensity model to Vertex AI for online predictions Learning objectives Explore and preprocess a Google Analytics 4 data sample in BigQuery for machine learning. Train a BigQuery ML (BQML) XGBoost classifier to predict user churn on a mobile gaming application. Tune a BQML XGBoost classifier using BQML hyperparameter tuning features. Evaluate the performance of a BQML XGBoost classifier. Explain your XGBoost model with BQML Explainable AI global feature attributions. Generate batch predictions with your BQML XGBoost model. Export a BQML XGBoost model to a Google Cloud Storage. Upload and deploy a BQML XGBoost model to a Vertex AI Prediction Endpoint for online predictions. Introduction In this lab, you will train, evaluate, explain, and generate batch and online predictions with a BigQuery ML (BQML) XGBoost model. You will use a Google Analytics 4 dataset from a real mobile application, Flood it! (Android app, iOS app), to determine the likelihood of users returning to the application. You will generate batch predictions with your BigQuery ML model as well as export and deploy it to Vertex AI for online predictions. BigQuery ML lets you train and do batch inference with machine learning models in BigQuery using standard SQL queries faster by eliminating the need to move data with fewer lines of code. Vertex AI is Google Cloud's complimentary next generation, unified platform for machine learning development. By developing and deploying BQML machine learning solutions on Vertex AI, you can leverage a scalable online prediction service and MLOps tools for model retraining and monitoring to significantly enhance your development productivity, the ability to scale your workflow and decision making with your data, and accelerate time to value. Note Step1: Import libraries Step2: Create a GCS bucket for artifact storage Create a globally unique Google Cloud Storage bucket for artifact storage. You will use this bucket to export your BQML model later in the lab and upload it to Vertex AI. Step3: Create a BigQuery dataset Next, create a BigQuery dataset from this notebook using the Python-based bq command line utility. This dataset will group your feature views, model, and predictions table together. You can view it in the BigQuery console. Step4: Initialize the Vertex Python SDK client Import the Vertex SDK for Python into your Python environment and initialize it. Step5: Exploratory Data Analysis (EDA) in BigQuery This lab uses a public BigQuery dataset that contains raw event data from a real mobile gaming app called Flood it! (Android app, iOS app). The data schema originates from Google Analytics for Firebase but is the same schema as Google Analytics 4. Take a look at a sample of the raw event dataset using the query below Step6: Note Step7: Dataset preparation in BigQuery Now that you have a better sense for the dataset you will be working with, you will walk through transforming raw event data into a dataset suitable for machine learning using SQL commands in BigQuery. Specifically, you will Step8: Review how many of the 15k users bounced and returned below Step9: For the training data, you will only end up using data where bounced = 0. Based on the 15k users, you can see that 5,557 ( about 41%) users bounced within the first ten minutes of their first engagement with the app. 
Of the remaining 8,031 users, 1,883 users ( about 23%) churned after 24 hours which you can validate with the query below Step10: Extract user demographic features There is various user demographic information included in this dataset, including app_info, device, ecommerce, event_params, and geo. Demographic features can help the model predict whether users on certain devices or countries are more likely to churn. Note that a user's demographics may occasionally change (e.g. moving countries). For simplicity, you will use the demographic information that Google Analytics 4 provides when the user first engaged with the app as indicated by MIN(event_timestamp) in the query below. This enables every unique user to be represented by a single row. Step11: Aggregate user behavioral features Behavioral data in the raw event data spans across multiple events -- and thus rows -- per user. The goal of this section is to aggregate and extract behavioral data for each user, resulting in one row of behavioral data per unique user. As a first step, you can explore all the unique events that exist in this dataset, based on event_name Step12: For this lab, to predict whether a user will churn or return, you can start by counting the number of times a user engages in the following event types Step13: Prepare your train/eval/test datasets for machine learning In this section, you can now combine these three intermediary views (user_churn, user_demographics, and user_behavior) into the final training data view called ml_features. Here you can also specify bounced = 0, in order to limit the training data only to users who did not "bounce" within the first 10 minutes of using the app. Note in the query below that a manual data_split column is created in your BigQuery ML table using BigQuery's hashing functions for repeatable sampling. It specifies a 80% train | 10% eval | 20% test split to evaluate your model's performance and generalization. Step14: Validate feature splits Run the query below to validate the number of examples in each data partition for the 80% train |10% eval |10% test split. Step15: Train and tune a BQML XGBoost propensity model to predict customer churn The following code trains and tunes the hyperparameters for an XGBoost model. TO provide a minimal demonstration of BQML hyperparameter tuning in this lab, this model will take about 18 min to train and tune with its restricted search space and low number of trials. In practice, you would generally want at least 10 trials per hyperparameter to achieve improved results. For more information on the default hyperparameters used, you can read the documentation Step16: Evaluate BQML XGBoost model performance Once training is finished, you can run ML.EVALUATE to return model evaluation metrics. By default, all model trials will be returned so the below query just returns the model performance for optimal first trial. Step17: ML.EVALUATE generates the precision, recall, accuracy, log_loss, f1_score and roc_auc using the default classification threshold of 0.5, which can be modified by using the optional THRESHOLD parameter. Next, use the ML.CONFUSION_MATRIX function to return a confusion matrix for the input classification model and input data. For more information on confusion matrices, you can read through a detailed explanation here. Step18: You can also plot the AUC-ROC curve by using ML.ROC_CURVE to return the metrics for different threshold values for the model. 
Step19: Inspect global feature attributions To provide further context to your model performance, you can use the ML.GLOBAL_EXPLAIN function which leverages Vertex Explainable AI as a back-end. Vertex Explainable AI helps you understand your model's outputs for classification and regression tasks. Specifically, Vertex AI tells you how much each feature in the data contributed to your model's predicted result. You can then use this information to verify that the model is behaving as expected, identify and mitigate biases in your models, and get ideas for ways to improve your model and your training data. Step20: Generate batch predictions You can generate batch predictions for your BQML XGBoost model using ML.PREDICT. Step21: The following query returns the probability that the user will return after 24 hrs. The higher the probability and closer it is to 1, the more likely the user is predicted to churn, and the closer it is to 0, the more likely the user is predicted to return. Step22: Export a BQML model to Vertex AI for online predictions See the official BigQuery ML Guide Step23: Navigate to Google Cloud Storage in Google Cloud Console to "gs Step24: Deploy a Vertex Endpoint for online predictions Before you use your model to make predictions, you need to deploy it to an Endpoint object. When you deploy a model to an Endpoint, you associate physical (machine) resources with that model to enable it to serve online predictions. Online predictions have low latency requirements; providing resources to the model in advance reduces latency. You can do this by calling the deploy function on the Model resource. This will do two things Step27: Query model for online predictions XGBoost only takes numerical feature inputs. When you trained your BQML model above with CREATE MODEL statement, it automatically handled encoding of categorical features such as user country, operating system, and language into numeric representations. In order for our exported model to generate online predictions, you will use the categorical feature vocabulary files exported under the assets/ folder of your model directory and the Scikit-Learn preprocessing code below to map your test instances to numeric values. Step28: Next steps Congratulations! In this lab, you trained, tuned, explained, and deployed a BigQuery ML user churn model to generate high business impact batch and online churn predictions to target customers likely to churn with interventions such as in-game rewards and reminder notifications. In this lab, you used user_psuedo_id as a user identifier. As next steps, you can extend this code further by having your application return a user_id to Google Analytics so you can join your model's predictions with additional first-party data such as purchase history and marketing engagement data. This enables you to integrate batch predictions into Looker dashboards to help product teams prioritize user experience improvements and marketing teams create targeted user interventions such as reminder emails to improve retention. Through having your model in Vertex AI Prediction, you also have a scalable prediction service to call from your application to directly integrate online predictions in order to to tailor personalized user game experiences and allow for targeted habit-building notifications. As you collect more data from your users, you may want to regularly evaluate your model on fresh data and re-train the model if you notice that the model quality is decaying. 
Vertex Pipelines can help you to automate, monitor, and govern your ML solutions by orchestrating your BQML workflow in a serverless manner, and storing your workflow's artifacts using Vertex ML Metadata. For another alternative for continuous BQML models, checkout the blog post Continuous model evaluation with BigQuery ML, Stored Procedures, and Cloud Scheduler. License
Python Code: # Retrieve and set PROJECT_ID and REGION environment variables. PROJECT_ID = !(gcloud config get-value core/project) PROJECT_ID = PROJECT_ID[0] BQ_LOCATION = 'US' REGION = 'us-central1' Explanation: Deploy a BigQuery ML user churn propensity model to Vertex AI for online predictions Learning objectives Explore and preprocess a Google Analytics 4 data sample in BigQuery for machine learning. Train a BigQuery ML (BQML) XGBoost classifier to predict user churn on a mobile gaming application. Tune a BQML XGBoost classifier using BQML hyperparameter tuning features. Evaluate the performance of a BQML XGBoost classifier. Explain your XGBoost model with BQML Explainable AI global feature attributions. Generate batch predictions with your BQML XGBoost model. Export a BQML XGBoost model to a Google Cloud Storage. Upload and deploy a BQML XGBoost model to a Vertex AI Prediction Endpoint for online predictions. Introduction In this lab, you will train, evaluate, explain, and generate batch and online predictions with a BigQuery ML (BQML) XGBoost model. You will use a Google Analytics 4 dataset from a real mobile application, Flood it! (Android app, iOS app), to determine the likelihood of users returning to the application. You will generate batch predictions with your BigQuery ML model as well as export and deploy it to Vertex AI for online predictions. BigQuery ML lets you train and do batch inference with machine learning models in BigQuery using standard SQL queries faster by eliminating the need to move data with fewer lines of code. Vertex AI is Google Cloud's complimentary next generation, unified platform for machine learning development. By developing and deploying BQML machine learning solutions on Vertex AI, you can leverage a scalable online prediction service and MLOps tools for model retraining and monitoring to significantly enhance your development productivity, the ability to scale your workflow and decision making with your data, and accelerate time to value. Note: this lab is inspired by and extends Churn prediction for game developers using Google Analytics 4 (GA4) and BigQuery ML. See that blog post and accompanying tutorial for additional depth on this use case and BigQuery ML. In this lab, you will go one step further and focus on how Vertex AI extends BQML's capabilities through online prediction so you can incorporate both customer churn predictions into decision making UIs such as Looker dashboards but also online predictions directly into customer applications to power targeted interventions such as targeted incentives. Use case: user churn propensity modeling in the mobile gaming industry According to a 2019 study on 100K mobile games by the Mobile Gaming Industry Analysis, most mobile games only see a 25% retention rate for users after the first 24 hours, known and any game "below 30% retention generally needs improvement". For mobile game developers, improving user retention is critical to revenue stability and increasing profitability. In fact, Bain & Company research found that 5% growth in retention rate can result in a 25-95% increase in profits. With lower costs to retain existing customers, the business objective for game developers is clear: reduce churn and improve customer loyalty to drive long-term profitability. Your task in this lab: use machine learning to predict user churn propensity after day 1, a crucial user onboarding window, and serve these online predictions to inform interventions such as targeted in-game rewards and notifications. 
Setup Define constants End of explanation from google.cloud import bigquery from google.cloud import aiplatform as vertexai import numpy as np import pandas as pd Explanation: Import libraries End of explanation GCS_BUCKET = f"{PROJECT_ID}-bqmlga4" !gsutil mb -l $REGION gs://$GCS_BUCKET Explanation: Create a GCS bucket for artifact storage Create a globally unique Google Cloud Storage bucket for artifact storage. You will use this bucket to export your BQML model later in the lab and upload it to Vertex AI. End of explanation BQ_DATASET = f"{PROJECT_ID}:bqmlga4" !bq mk --location={BQ_LOCATION} --dataset {BQ_DATASET} Explanation: Create a BigQuery dataset Next, create a BigQuery dataset from this notebook using the Python-based bq command line utility. This dataset will group your feature views, model, and predictions table together. You can view it in the BigQuery console. End of explanation vertexai.init(project=PROJECT_ID, location=REGION, staging_bucket=f"gs://{GCS_BUCKET}") Explanation: Initialize the Vertex Python SDK client Import the Vertex SDK for Python into your Python environment and initialize it. End of explanation %%bigquery --project $PROJECT_ID SELECT * FROM `firebase-public-project.analytics_153293282.events_*` TABLESAMPLE SYSTEM (1 PERCENT) Explanation: Exploratory Data Analysis (EDA) in BigQuery This lab uses a public BigQuery dataset that contains raw event data from a real mobile gaming app called Flood it! (Android app, iOS app). The data schema originates from Google Analytics for Firebase but is the same schema as Google Analytics 4. Take a look at a sample of the raw event dataset using the query below: End of explanation %%bigquery --project $PROJECT_ID SELECT COUNT(DISTINCT user_pseudo_id) as count_distinct_users, COUNT(event_timestamp) as count_events FROM `firebase-public-project.analytics_153293282.events_*` Explanation: Note: in the cell above, Jupyterlab runs cells starting with %%bigquery as SQL queries. Google Analytics 4 uses an event based measurement model and each row in this dataset is an event. View the complete schema and details about each column. As you can see above, certain columns are nested records and contain detailed information such as: app_info device ecommerce event_params geo traffic_source user_properties items* web_info* This dataset contains 5.7M events from 15K+ users. 
End of explanation %%bigquery --project $PROJECT_ID CREATE OR REPLACE VIEW bqmlga4.user_churn AS ( WITH firstlasttouch AS ( SELECT user_pseudo_id, MIN(event_timestamp) AS user_first_engagement, MAX(event_timestamp) AS user_last_engagement FROM `firebase-public-project.analytics_153293282.events_*` WHERE event_name="user_engagement" GROUP BY user_pseudo_id ) SELECT user_pseudo_id, user_first_engagement, user_last_engagement, EXTRACT(MONTH from TIMESTAMP_MICROS(user_first_engagement)) as month, EXTRACT(DAYOFYEAR from TIMESTAMP_MICROS(user_first_engagement)) as julianday, EXTRACT(DAYOFWEEK from TIMESTAMP_MICROS(user_first_engagement)) as dayofweek, #add 24 hr to user's first touch (user_first_engagement + 86400000000) AS ts_24hr_after_first_engagement, #churned = 1 if last_touch within 24 hr of app installation, else 0 IF (user_last_engagement < (user_first_engagement + 86400000000), 1, 0 ) AS churned, #bounced = 1 if last_touch within 10 min, else 0 IF (user_last_engagement <= (user_first_engagement + 600000000), 1, 0 ) AS bounced, FROM firstlasttouch GROUP BY user_pseudo_id, user_first_engagement, user_last_engagement ); SELECT * FROM bqmlga4.user_churn LIMIT 100; Explanation: Dataset preparation in BigQuery Now that you have a better sense for the dataset you will be working with, you will walk through transforming raw event data into a dataset suitable for machine learning using SQL commands in BigQuery. Specifically, you will: Aggregate events so that each row represents a separate unique user ID. Define the user churn label feature to train your model to prediction (e.g. 1 = churned, 0 = returned). Create user demographic features. Create user behavioral features from aggregated application events. Defining churn for each user There are many ways to define user churn, but for the purposes of this lab, you will predict 1-day churn as users who do not come back and use the app again after 24 hr of the user's first engagement. This is meant to capture churn after a user's "first impression" of the application or onboarding experience. In other words, after 24 hr of a user's first engagement with the app: if the user shows no event data thereafter, the user is considered churned. if the user does have at least one event datapoint thereafter, then the user is considered returned. You may also want to remove users who were unlikely to have ever returned anyway after spending just a few minutes with the app, which is sometimes referred to as "bouncing". For example, you will build your model on only on users who spent at least 10 minutes with the app (users who didn't bounce). The query below defines a churned user with the following definition: Churned = "any user who spent at least 10 minutes on the app, but after 24 hour from when they first engaged with the app, never used the app again" You will use the raw event data, from their first touch (app installation) to their last touch, to identify churned and bounced users in the user_churn view query below: End of explanation %%bigquery --project $PROJECT_ID SELECT bounced, churned, COUNT(churned) as count_users FROM bqmlga4.user_churn GROUP BY bounced, churned ORDER BY bounced Explanation: Review how many of the 15k users bounced and returned below: End of explanation %%bigquery --project $PROJECT_ID SELECT COUNTIF(churned=1)/COUNT(churned) as churn_rate FROM bqmlga4.user_churn WHERE bounced = 0 Explanation: For the training data, you will only end up using data where bounced = 0. 
Based on the 15k users, you can see that 5,557 ( about 41%) users bounced within the first ten minutes of their first engagement with the app. Of the remaining 8,031 users, 1,883 users ( about 23%) churned after 24 hours which you can validate with the query below: End of explanation %%bigquery --project $PROJECT_ID CREATE OR REPLACE VIEW bqmlga4.user_demographics AS ( WITH first_values AS ( SELECT user_pseudo_id, geo.country as country, device.operating_system as operating_system, device.language as language, ROW_NUMBER() OVER (PARTITION BY user_pseudo_id ORDER BY event_timestamp DESC) AS row_num FROM `firebase-public-project.analytics_153293282.events_*` WHERE event_name="user_engagement" ) SELECT * EXCEPT (row_num) FROM first_values WHERE row_num = 1 ); SELECT * FROM bqmlga4.user_demographics LIMIT 10 Explanation: Extract user demographic features There is various user demographic information included in this dataset, including app_info, device, ecommerce, event_params, and geo. Demographic features can help the model predict whether users on certain devices or countries are more likely to churn. Note that a user's demographics may occasionally change (e.g. moving countries). For simplicity, you will use the demographic information that Google Analytics 4 provides when the user first engaged with the app as indicated by MIN(event_timestamp) in the query below. This enables every unique user to be represented by a single row. End of explanation %%bigquery --project $PROJECT_ID SELECT event_name, COUNT(event_name) as event_count FROM `firebase-public-project.analytics_153293282.events_*` GROUP BY event_name ORDER BY event_count DESC Explanation: Aggregate user behavioral features Behavioral data in the raw event data spans across multiple events -- and thus rows -- per user. The goal of this section is to aggregate and extract behavioral data for each user, resulting in one row of behavioral data per unique user. As a first step, you can explore all the unique events that exist in this dataset, based on event_name: End of explanation %%bigquery --project $PROJECT_ID CREATE OR REPLACE VIEW bqmlga4.user_behavior AS ( WITH events_first24hr AS ( # Select user data only from first 24 hr of using the app. 
SELECT e.* FROM `firebase-public-project.analytics_153293282.events_*` e JOIN bqmlga4.user_churn c ON e.user_pseudo_id = c.user_pseudo_id WHERE e.event_timestamp <= c.ts_24hr_after_first_engagement ) SELECT user_pseudo_id, SUM(IF(event_name = 'user_engagement', 1, 0)) AS cnt_user_engagement, SUM(IF(event_name = 'level_start_quickplay', 1, 0)) AS cnt_level_start_quickplay, SUM(IF(event_name = 'level_end_quickplay', 1, 0)) AS cnt_level_end_quickplay, SUM(IF(event_name = 'level_complete_quickplay', 1, 0)) AS cnt_level_complete_quickplay, SUM(IF(event_name = 'level_reset_quickplay', 1, 0)) AS cnt_level_reset_quickplay, SUM(IF(event_name = 'post_score', 1, 0)) AS cnt_post_score, SUM(IF(event_name = 'spend_virtual_currency', 1, 0)) AS cnt_spend_virtual_currency, SUM(IF(event_name = 'ad_reward', 1, 0)) AS cnt_ad_reward, SUM(IF(event_name = 'challenge_a_friend', 1, 0)) AS cnt_challenge_a_friend, SUM(IF(event_name = 'completed_5_levels', 1, 0)) AS cnt_completed_5_levels, SUM(IF(event_name = 'use_extra_steps', 1, 0)) AS cnt_use_extra_steps, FROM events_first24hr GROUP BY user_pseudo_id ); SELECT * FROM bqmlga4.user_behavior LIMIT 10 Explanation: For this lab, to predict whether a user will churn or return, you can start by counting the number of times a user engages in the following event types: user_engagement level_start_quickplay level_end_quickplay level_complete_quickplay level_reset_quickplay post_score spend_virtual_currency ad_reward challenge_a_friend completed_5_levels use_extra_steps In the SQL query below, you will aggregate the behavioral data by calculating the total number of times when each of the above event_names occurred in the data set per user. End of explanation %%bigquery --project $PROJECT_ID CREATE OR REPLACE VIEW bqmlga4.ml_features AS ( SELECT dem.user_pseudo_id, IFNULL(dem.country, "Unknown") AS country, IFNULL(dem.operating_system, "Unknown") AS operating_system, IFNULL(REPLACE(dem.language, "-", "X"), "Unknown") AS language, IFNULL(beh.cnt_user_engagement, 0) AS cnt_user_engagement, IFNULL(beh.cnt_level_start_quickplay, 0) AS cnt_level_start_quickplay, IFNULL(beh.cnt_level_end_quickplay, 0) AS cnt_level_end_quickplay, IFNULL(beh.cnt_level_complete_quickplay, 0) AS cnt_level_complete_quickplay, IFNULL(beh.cnt_level_reset_quickplay, 0) AS cnt_level_reset_quickplay, IFNULL(beh.cnt_post_score, 0) AS cnt_post_score, IFNULL(beh.cnt_spend_virtual_currency, 0) AS cnt_spend_virtual_currency, IFNULL(beh.cnt_ad_reward, 0) AS cnt_ad_reward, IFNULL(beh.cnt_challenge_a_friend, 0) AS cnt_challenge_a_friend, IFNULL(beh.cnt_completed_5_levels, 0) AS cnt_completed_5_levels, IFNULL(beh.cnt_use_extra_steps, 0) AS cnt_use_extra_steps, chu.user_first_engagement, chu.month, chu.julianday, chu.dayofweek, chu.churned, # https://towardsdatascience.com/ml-design-pattern-5-repeatable-sampling-c0ccb2889f39 # BQML Hyperparameter tuning requires STRING 3 partition data_split column. 
# 80% 'TRAIN' | 10%'EVAL' | 10% 'TEST' CASE WHEN ABS(MOD(FARM_FINGERPRINT(dem.user_pseudo_id), 10)) <= 7 THEN 'TRAIN' WHEN ABS(MOD(FARM_FINGERPRINT(dem.user_pseudo_id), 10)) = 8 THEN 'EVAL' WHEN ABS(MOD(FARM_FINGERPRINT(dem.user_pseudo_id), 10)) = 9 THEN 'TEST' ELSE '' END AS data_split FROM bqmlga4.user_churn chu LEFT OUTER JOIN bqmlga4.user_demographics dem ON chu.user_pseudo_id = dem.user_pseudo_id LEFT OUTER JOIN bqmlga4.user_behavior beh ON chu.user_pseudo_id = beh.user_pseudo_id WHERE chu.bounced = 0 ); SELECT * FROM bqmlga4.ml_features LIMIT 10 Explanation: Prepare your train/eval/test datasets for machine learning In this section, you can now combine these three intermediary views (user_churn, user_demographics, and user_behavior) into the final training data view called ml_features. Here you can also specify bounced = 0, in order to limit the training data only to users who did not "bounce" within the first 10 minutes of using the app. Note in the query below that a manual data_split column is created in your BigQuery ML table using BigQuery's hashing functions for repeatable sampling. It specifies a 80% train | 10% eval | 20% test split to evaluate your model's performance and generalization. End of explanation %%bigquery --project $PROJECT_ID SELECT data_split, COUNT(*) AS n_examples FROM bqmlga4.ml_features GROUP BY data_split Explanation: Validate feature splits Run the query below to validate the number of examples in each data partition for the 80% train |10% eval |10% test split. End of explanation MODEL_NAME="churn_xgb" %%bigquery --project $PROJECT_ID CREATE OR REPLACE MODEL bqmlga4.churn_xgb OPTIONS( MODEL_TYPE="BOOSTED_TREE_CLASSIFIER", # Declare label column. INPUT_LABEL_COLS=["churned"], # Specify custom data splitting using the `data_split` column. DATA_SPLIT_METHOD="CUSTOM", DATA_SPLIT_COL="data_split", # Enable Vertex Explainable AI aggregated feature attributions. ENABLE_GLOBAL_EXPLAIN=True, # Hyperparameter tuning arguments. num_trials=8, max_parallel_trials=4, HPARAM_TUNING_OBJECTIVES=["roc_auc"], EARLY_STOP=True, # Hyperpameter search space. LEARN_RATE=HPARAM_RANGE(0.01, 0.1), MAX_TREE_DEPTH=HPARAM_CANDIDATES([5,6]) ) AS SELECT * EXCEPT(user_pseudo_id) FROM bqmlga4.ml_features %%bigquery --project $PROJECT_ID SELECT * FROM ML.TRIAL_INFO(MODEL `bqmlga4.churn_xgb`); Explanation: Train and tune a BQML XGBoost propensity model to predict customer churn The following code trains and tunes the hyperparameters for an XGBoost model. TO provide a minimal demonstration of BQML hyperparameter tuning in this lab, this model will take about 18 min to train and tune with its restricted search space and low number of trials. In practice, you would generally want at least 10 trials per hyperparameter to achieve improved results. For more information on the default hyperparameters used, you can read the documentation: CREATE MODEL statement for Boosted Tree models using XGBoost |Model | BQML model_type | Advantages | Disadvantages| |:-------|:----------:|:----------:|-------------:| |XGBoost | BOOSTED_TREE_CLASSIFIER (documentation) | High model performance with feature importances and explainability | Slower to train than BQML LOGISTIC_REG | Note: When you run the CREATE MODEL statement, BigQuery ML can automatically split your data into training and test so you can immediately evaluate your model's performance after training. This is a great option for fast model prototyping. 
In this lab, however, you split your data manually above using hashing for reproducible data splits that can be used comparing model evaluations across different runs. End of explanation %%bigquery --project $PROJECT_ID SELECT * FROM ML.EVALUATE(MODEL bqmlga4.churn_xgb) WHERE trial_id=1; Explanation: Evaluate BQML XGBoost model performance Once training is finished, you can run ML.EVALUATE to return model evaluation metrics. By default, all model trials will be returned so the below query just returns the model performance for optimal first trial. End of explanation %%bigquery --project $PROJECT_ID SELECT expected_label, _0 AS predicted_0, _1 AS predicted_1 FROM ML.CONFUSION_MATRIX(MODEL bqmlga4.churn_xgb) WHERE trial_id=1; Explanation: ML.EVALUATE generates the precision, recall, accuracy, log_loss, f1_score and roc_auc using the default classification threshold of 0.5, which can be modified by using the optional THRESHOLD parameter. Next, use the ML.CONFUSION_MATRIX function to return a confusion matrix for the input classification model and input data. For more information on confusion matrices, you can read through a detailed explanation here. End of explanation %%bigquery df_roc --project $PROJECT_ID SELECT * FROM ML.ROC_CURVE(MODEL bqmlga4.churn_xgb) WHERE trial_id=1; df_roc.plot(x="false_positive_rate", y="recall", title="AUC-ROC curve") Explanation: You can also plot the AUC-ROC curve by using ML.ROC_CURVE to return the metrics for different threshold values for the model. End of explanation %%bigquery --project $PROJECT_ID SELECT * FROM ML.GLOBAL_EXPLAIN(MODEL bqmlga4.churn_xgb) ORDER BY attribution DESC; Explanation: Inspect global feature attributions To provide further context to your model performance, you can use the ML.GLOBAL_EXPLAIN function which leverages Vertex Explainable AI as a back-end. Vertex Explainable AI helps you understand your model's outputs for classification and regression tasks. Specifically, Vertex AI tells you how much each feature in the data contributed to your model's predicted result. You can then use this information to verify that the model is behaving as expected, identify and mitigate biases in your models, and get ideas for ways to improve your model and your training data. End of explanation %%bigquery --project $PROJECT_ID SELECT * FROM ML.PREDICT(MODEL bqmlga4.churn_xgb, (SELECT * FROM bqmlga4.ml_features WHERE data_split = "TEST")) Explanation: Generate batch predictions You can generate batch predictions for your BQML XGBoost model using ML.PREDICT. End of explanation %%bigquery --project $PROJECT_ID CREATE OR REPLACE TABLE bqmlga4.churn_predictions AS ( SELECT user_pseudo_id, churned, predicted_churned, predicted_churned_probs[OFFSET(0)].prob as probability_churned FROM ML.PREDICT(MODEL bqmlga4.churn_xgb, (SELECT * FROM bqmlga4.ml_features)) ); Explanation: The following query returns the probability that the user will return after 24 hrs. The higher the probability and closer it is to 1, the more likely the user is predicted to churn, and the closer it is to 0, the more likely the user is predicted to return. End of explanation BQ_MODEL = f"{BQ_DATASET}.{MODEL_NAME}" BQ_MODEL_EXPORT_DIR = f"gs://{GCS_BUCKET}/{MODEL_NAME}" !bq --location=$BQ_LOCATION extract \ --destination_format ML_XGBOOST_BOOSTER \ --model $BQ_MODEL \ $BQ_MODEL_EXPORT_DIR Explanation: Export a BQML model to Vertex AI for online predictions See the official BigQuery ML Guide: Exporting a BigQuery ML model for online prediction for additional details. 
Export BQML model to GCS You will use the bq extract command in the bq command-line tool to export your BQML XGBoost model assets to Google Cloud Storage for persistence. See the documentation for additional model export options. End of explanation IMAGE_URI='us-docker.pkg.dev/vertex-ai/prediction/xgboost-cpu.1-4:latest' model = vertexai.Model.upload( display_name=MODEL_NAME, artifact_uri=BQ_MODEL_EXPORT_DIR, serving_container_image_uri=IMAGE_URI, ) Explanation: Navigate to Google Cloud Storage in Google Cloud Console to "gs://{GCS_BUCKET}/{MODEL_NAME}". Validate that you see your exported model assets in the below format: |--/{GCS_BUCKET}/{MODEL_NAME}/ |--/assets/ # Contains preprocessing code. |--0_categorical_label.txt # Contains country vocabulary. |--1_categorical_label.txt # Contains operating_system vocabulary. |--2_categorical_label.txt # Contains language vocabulary. |--model_metadata.json # contains model feature and label mappings. |--main.py # Can be called for local training runs. |--model.bst # XGBoost saved model format. |--xgboost_predictor-0.1.tar.gz # Compress XGBoost model with prediction function. Upload BQML model to Vertex AI from GCS Vertex AI contains optimized pre-built training and prediction containers for popular ML frameworks such as TensorFlow, Pytorch, as well as XGBoost. You will upload your XGBoost from GCS to Vertex AI and provide the latest pre-built Vertex XGBoost prediction container to execute your model code to generate predictions in the cells below. End of explanation endpoint = model.deploy( traffic_split={"0": 100}, machine_type="n1-standard-2", ) Explanation: Deploy a Vertex Endpoint for online predictions Before you use your model to make predictions, you need to deploy it to an Endpoint object. When you deploy a model to an Endpoint, you associate physical (machine) resources with that model to enable it to serve online predictions. Online predictions have low latency requirements; providing resources to the model in advance reduces latency. You can do this by calling the deploy function on the Model resource. This will do two things: Create an Endpoint resource for deploying the Model resource to. Deploy the Model resource to the Endpoint resource. The deploy() function takes the following parameters: deployed_model_display_name: A human readable name for the deployed model. traffic_split: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs. If only one model, then specify as { "0": 100 }, where "0" refers to this model being uploaded and 100 means 100% of the traffic. machine_type: The type of machine to use for training. accelerator_type: The hardware accelerator type. accelerator_count: The number of accelerators to attach to a worker replica. starting_replica_count: The number of compute instances to initially provision. max_replica_count: The maximum number of compute instances to scale to. In this lab, only one instance is provisioned. explanation_parameters: Metadata to configure the Explainable AI learning method. explanation_metadata: Metadata that describes your TensorFlow model for Explainable AI such as features, input and output tensors. Note: this can take about 3-5 minutes to provision prediction resources for your model. 
End of explanation CATEGORICAL_FEATURES = ['country', 'operating_system', 'language'] from sklearn.preprocessing import OrdinalEncoder def _build_cat_feature_encoders(cat_feature_list, gcs_bucket, model_name, na_value='Unknown'): Build categorical feature encoders for mapping text to integers for XGBoost inference. Args: cat_feature_list (list): List of string feature names. gcs_bucket (str): A string path to your Google Cloud Storage bucket. model_name (str): A string model directory in GCS where your BQML model was exported to. na_value (str): default is 'Unknown'. String value to replace any vocab NaN values prior to encoding. Returns: feature_encoders (dict): A dictionary containing OrdinalEncoder objects for integerizing categorical features that has the format [feature] = feature encoder. feature_encoders = {} for idx, feature in enumerate(cat_feature_list): feature_encoder = OrdinalEncoder(handle_unknown="use_encoded_value", unknown_value=-1) feature_vocab_file = f"gs://{gcs_bucket}/{model_name}/assets/{idx}_categorical_label.txt" feature_vocab_df = pd.read_csv(feature_vocab_file, delimiter = "\t", header=None).fillna(na_value) feature_encoder.fit(feature_vocab_df.values) feature_encoders[feature] = feature_encoder return feature_encoders def preprocess_xgboost(instances, cat_feature_list, feature_encoders): Transform instances to numerical values for inference. Args: instances (list[dict]): A list of feature dictionaries with the format feature: value. cat_feature_list (list): A list of string feature names. feature_encoders (dict): A dictionary with the format feature: feature_encoder. Returns: transformed_instances (list[list]): A list of lists containing numerical feature values needed for Vertex XGBoost inference. transformed_instances = [] for instance in instances: for feature in cat_feature_list: feature_int = feature_encoders[feature].transform([[instance[feature]]]).item() instance[feature] = feature_int instance_list = list(instance.values()) transformed_instances.append(instance_list) return transformed_instances # Build a dictionary of ordinal categorical feature encoders. feature_encoders = _build_cat_feature_encoders(CATEGORICAL_FEATURES, GCS_BUCKET, MODEL_NAME) %%bigquery test_df --project $PROJECT_ID SELECT* EXCEPT (user_pseudo_id, churned, data_split) FROM bqmlga4.ml_features WHERE data_split="TEST" LIMIT 3; # Convert dataframe records to feature dictionaries for preprocessing by feature name. test_instances = test_df.astype(str).to_dict(orient='records') # Apply preprocessing to transform categorical features and return numerical instances for prediction. transformed_test_instances = preprocess_xgboost(test_instances, CATEGORICAL_FEATURES, feature_encoders) # Generate predictions from model deployed to Vertex AI Endpoint. predictions = endpoint.predict(instances=transformed_test_instances) for idx, prediction in enumerate(predictions.predictions): # Class labels [1,0] retrieved from model_metadata.json in GCS model dir. # BQML binary classification default is 0.5 with above "Churn" and below "Not Churn". is_churned = "Churn" if prediction[0] >= 0.5 else "Not Churn" print(f"Prediction: Customer {idx} - {is_churned} {prediction}") print(test_df.iloc[idx].astype(str).to_json() + "\n") Explanation: Query model for online predictions XGBoost only takes numerical feature inputs. 
When you trained your BQML model above with CREATE MODEL statement, it automatically handled encoding of categorical features such as user country, operating system, and language into numeric representations. In order for our exported model to generate online predictions, you will use the categorical feature vocabulary files exported under the assets/ folder of your model directory and the Scikit-Learn preprocessing code below to map your test instances to numeric values. End of explanation # Copyright 2021 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. Explanation: Next steps Congratulations! In this lab, you trained, tuned, explained, and deployed a BigQuery ML user churn model to generate high business impact batch and online churn predictions to target customers likely to churn with interventions such as in-game rewards and reminder notifications. In this lab, you used user_psuedo_id as a user identifier. As next steps, you can extend this code further by having your application return a user_id to Google Analytics so you can join your model's predictions with additional first-party data such as purchase history and marketing engagement data. This enables you to integrate batch predictions into Looker dashboards to help product teams prioritize user experience improvements and marketing teams create targeted user interventions such as reminder emails to improve retention. Through having your model in Vertex AI Prediction, you also have a scalable prediction service to call from your application to directly integrate online predictions in order to to tailor personalized user game experiences and allow for targeted habit-building notifications. As you collect more data from your users, you may want to regularly evaluate your model on fresh data and re-train the model if you notice that the model quality is decaying. Vertex Pipelines can help you to automate, monitor, and govern your ML solutions by orchestrating your BQML workflow in a serverless manner, and storing your workflow's artifacts using Vertex ML Metadata. For another alternative for continuous BQML models, checkout the blog post Continuous model evaluation with BigQuery ML, Stored Procedures, and Cloud Scheduler. License End of explanation
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: Lecture 11 Step2: To define a new class, you need a class keyword, followed by the name (in this case, Car). The parentheses are important, but for now we'll leave them empty. Like loops and functions and conditionals, everything that belongs to the class--variables, methods, etc--are indented underneath. We can then instantiate this class using the following Step3: Now my_car holds an instance of the Car class! It doesn't do much, but it's a valid object. Constructors The first step in making an interesting class is by creating a constructor. It's a special kind of function that provides a customized recipe for how an instance of that class is built. It takes a special form, too Step4: Let's look at this method in more detail. Step5: The def is normal Step6: These attributes are accessible from anywhere inside the class, but direct access to them from outside (as did in the print(my_car.year) statement) is heavily frowned upon. Instead, good object-oriented design stipulates that these attributes be treated as private variables to the class. To be modified or otherwise used, the classes should have public methods that expose very specific avenues for interaction with the class attributes. This is the concept of encapsulation Step7: Classes can have as many methods as you want, named whatever you'd like (though usually named so they reflect their purpose). Methods are what are ultimately allowed to edit the class attributes (the self. variables), as per the concept of encapsulation. For example, the self.mileage attribute in the previous example that stores the total mileage driven by that instance. Like the constructor, all the class methods must have self as the first argument in their headers, even though you don't explicitly supply it when you call the methods. Inheritance Inheritance is easily the most complicated aspect of object-oriented programming, but is most certainly where OOP derives its power for modular design. When considering cars, certainly most are very similar and can be modeled effectively with one class, but eventually there are enough differences to necessitate the creation of a separate class. For example, a class for gas-powered cars and one for EVs. But considering how much overlap they still share, it'd be highly redundant to make wholly separate classes for both. Step8: Enter inheritance Step9: Hopefully you noticed--we could call tesla.drive() and it worked as it was defined in the parent Car class, without us having to write it again! This is the power of inheritance Step10: Using inheritance, you can build an entire hierarchy of classes and subclasses, inheriting functionality where needed and overriding it where necessary. This illustrates the concept of polymorphism (meaning "many forms")
Python Code: class Car(): A simple representation of a car. pass Explanation: Lecture 11: Objects and Classes CSCI 1360: Foundations for Informatics and Analytics Overview and Objectives In this lecture, we'll delve into the realm of "object-oriented programming," or OOP. This is a programming paradigm in which concepts and actions are "packaged" using the abstraction of objects: modeling the system after real-world phenomena, both to aid our own understanding of the program and to enforce a good design paradigm. By the end of this lecture, you should be able to: Understand the core concepts of encapsulation and abstraction that make object-oriented programming so powerful Implement your own class hierarchy, using inheritance to avoid redundant code Explain how Python's OOP mechanism differs from that in other languages, such as Java Part 1: Object-oriented Programming Poll: how many people have programmed in Java or C++? How many have heard of object-oriented programming? Up until now (and after this lecture), we've stuck mainly with procedural programming: focusing on the actions. Object-oriented programming, by contrast, focuses on the objects. Sort of a "verbs" (procedural programming) versus "nouns" (object-oriented programming) thing. Objects versus Classes Main idea: you design objects, usually modeled after real-world constructs, that interact with each other. These designs are called classes. Think of them as a blueprint that detail out the various properties and capabilities of your object. From the designs (classes), you can create an object by instantiating the class, or creating an instance of the class. If the class is the blueprint, the object is the physical manifestation from the blueprint. A car can have several properties--steering column, fuel injector, brake pads, airbags--but an instantiation of car would be a 2015 Honda Accord. Another instantiation would be a 2016 Tesla Model S. These are instances of a car. In some [abstract] sense, these both derive from a common blueprint of a car, but their specific details differ. This is precisely how to think of the difference between classes and objects. Part 2: Objects in Python Every object in Python has certain things in common. Methods: Remember when we covered the difference between functions and methods? This is where that difference comes into play. Methods are the way the object interacts with the outside world. They're functions, but they're attached directly to object instances. Constructors: These are specialized methods that deal specifically with how an object instance is created. Every single object has a constructor, whether you explicitly write one or not. Attributes: These are the physical properties of the object; maybe they change, maybe they don't. For a car, this could be color, make, model, or name. These are the things that distinguish one instance of the class from another. Inheritance: This is where the power of object-oriented programming really comes into play. Quite often, our understanding of physical objects in the world is hierarchical: there are cars; then there are race cars, sedans, and SUVs; then there are gas-powered sedans, hybrid sedans, and electric sedans; then there are 2015 Honda Accords and 2016 Honda Accords. Wouldn't it be great if our class design reflected this hierarchy? Defining Classes Let's start with the first step of designing a class: its actual definition. We'll stick with the car example. 
End of explanation my_car = Car() print(my_car) Explanation: To define a new class, you need a class keyword, followed by the name (in this case, Car). The parentheses are important, but for now we'll leave them empty. Like loops and functions and conditionals, everything that belongs to the class--variables, methods, etc--are indented underneath. We can then instantiate this class using the following: End of explanation class Car(): def __init__(self): print("This is the constructor!") my_car = Car() Explanation: Now my_car holds an instance of the Car class! It doesn't do much, but it's a valid object. Constructors The first step in making an interesting class is by creating a constructor. It's a special kind of function that provides a customized recipe for how an instance of that class is built. It takes a special form, too: End of explanation def __init__(self): pass Explanation: Let's look at this method in more detail. End of explanation class Car(): def __init__(self, year, make, model): # All three of these are class attributes. self.year = year self.make = make self.model = model my_car = Car(2015, "Honda", "Accord") # Again, note that we don't specify something for "self" here. print(my_car.year) Explanation: The def is normal: the Python keyword we use to identify a function definition. __init__ is the name of our method. It's an interesting name for sure, and turns out this is a very specific name Python is looking for: whenever you instantiate an object, this is the method that's run. If you don't explicitly write a constructor, Python implicitly supplies a "default" one (where basically nothing really happens). The method argument is strange; what is this mysterious self, and why--if an argument is required--didn't we supply one when we executed my_car = Car()? A note on self This is how the object refers to itself from inside the object. We'll see this in greater detail once we get to attributes. Every method in a class must have self as the first argument. Even though you don't actually supply this argument yourself when you call the method, it still has to be in the function definition. Otherwise, you'll get some weird error messages: Attributes Attributes are variables contained inside a class, and which take certain values when the class is instantiated. The most common practice is to define these attributes within the constructor of the class. End of explanation class Car(): def __init__(self, year, make, model): self.year = year self.make = make self.model = model self.mileage = 0 def drive(self, mileage = 0): if mileage == 0: print("Driving!") else: self.mileage += mileage print("Driven {} miles total.".format(self.mileage)) my_car = Car(2016, "Tesla", "Model S") my_car.drive(100) my_car.drive() my_car.drive(50) Explanation: These attributes are accessible from anywhere inside the class, but direct access to them from outside (as did in the print(my_car.year) statement) is heavily frowned upon. Instead, good object-oriented design stipulates that these attributes be treated as private variables to the class. To be modified or otherwise used, the classes should have public methods that expose very specific avenues for interaction with the class attributes. This is the concept of encapsulation: restricting direct access to attributes, and instead encouraging the use of class methods to interact with the attributes in very specific ways. 
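As a small aside on the encapsulation idea above, here is a hedged sketch of one way a class can keep an attribute private by convention and expose it only through controlled public access; the property decorator and the add_miles name are illustrative additions, not part of the lecture's own code.

class Car:
    def __init__(self, year, make, model):
        self.year = year
        self.make = make
        self.model = model
        self._mileage = 0              # leading underscore: treated as private by convention

    @property
    def mileage(self):
        # Public, read-only view of the private attribute.
        return self._mileage

    def add_miles(self, miles):
        # The only sanctioned way to change mileage; it can validate its input.
        if miles < 0:
            raise ValueError("miles must be non-negative")
        self._mileage += miles

c = Car(2015, "Honda", "Accord")
c.add_miles(120)
print(c.mileage)   # 120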
Methods Methods are functions attached to the class, but which are accessible from outside the class, and define the ways in which the instances of the class can interact with the outside world. Whereas classes are usually nouns, the methods are typically the verbs. For example, what would a Car class do? End of explanation class GasCar(): def __init__(self, make, model, year, tank_size): # Set up attributes. pass def drive(self, mileage = 0): # Driving functionality. pass class ElectricCar(): def __init__(self, make, model, year, battery_cycles): # Set up attributes. pass def drive(self, mileage = 0): # Driving functionality, probably identical to GasCar. pass Explanation: Classes can have as many methods as you want, named whatever you'd like (though usually named so they reflect their purpose). Methods are what are ultimately allowed to edit the class attributes (the self. variables), as per the concept of encapsulation. For example, the self.mileage attribute in the previous example that stores the total mileage driven by that instance. Like the constructor, all the class methods must have self as the first argument in their headers, even though you don't explicitly supply it when you call the methods. Inheritance Inheritance is easily the most complicated aspect of object-oriented programming, but is most certainly where OOP derives its power for modular design. When considering cars, certainly most are very similar and can be modeled effectively with one class, but eventually there are enough differences to necessitate the creation of a separate class. For example, a class for gas-powered cars and one for EVs. But considering how much overlap they still share, it'd be highly redundant to make wholly separate classes for both. End of explanation class Car(): # Parent class. def __init__(self, make, model, year): self.make = make self.model = model self.year = year self.mileage = 0 def drive(self, mileage = 0): self.mileage += mileage print("Driven {} miles.".format(self.mileage)) class EV(Car): # Child class--explicitly mentions "Car" as the parent! def __init__(self, make, model, year, charge_range): Car.__init__(self, make, model, year) self.charge_range = charge_range def charge_remaining(self): if self.mileage < self.charge_range: print("Still {} miles left.".format(self.charge_range - self.mileage)) else: print("Battery depleted! Find a SuperCharger station.") tesla = EV(2016, "Tesla", "Model S", 250) tesla.drive(100) tesla.charge_remaining() tesla.drive(150) tesla.charge_remaining() Explanation: Enter inheritance: the ability to create subclasses of existing classes that retain all the functionality of the parent, while requiring the implementation only of the things that differentiate the child from the parent. End of explanation class Hybrid(Car): def drive(self, mileage, mpg): self.mileage += mileage print("Driven {} miles at {:.1f} MPG.".format(self.mileage, mpg)) hybrid = Hybrid(2015, "Toyota", "Prius") hybrid.drive(100, 35.5) Explanation: Hopefully you noticed--we could call tesla.drive() and it worked as it was defined in the parent Car class, without us having to write it again! This is the power of inheritance: every child class inherits all the functionality of the parent class. With ONE exception: if you override a parent attribute or method in the child class, then that takes precedence. 
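As a hedged side note on the is-a relationship behind this inheritance discussion, the sketch below shows how Python itself can confirm that every EV is a Car but not the reverse; the stub classes are minimal stand-ins, not the lecture's full definitions.

# Minimal stand-ins for the Car/EV hierarchy, just to show the built-in checks.
class Car:
    pass

class EV(Car):
    pass

tesla = EV()
print(isinstance(tesla, EV))    # True
print(isinstance(tesla, Car))   # True: an EV is-a Car
print(issubclass(EV, Car))      # True
print(isinstance(Car(), EV))    # False: not every Car is an EV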
End of explanation class DerivedClassName(EV, Hybrid): pass Explanation: Using inheritance, you can build an entire hierarchy of classes and subclasses, inheriting functionality where needed and overriding it where necessary. This illustrates the concept of polymorphism (meaning "many forms"): all cars are vehicles; therefore, any functions a vehicle has, a car will also have. All transporters are vehicles--and also cars--and have all the associated functionality defined in those classes. However, it does NOT work in reverse: not all vehicles are motorcycles! Thus, as you move down the hierarchy, the objects become more specialized. Multiple Inheritance Just a quick note on this, for all the Java converts-- Python does support multiple inheritance, meaning a child class can directly inherit from multiple parent classes. End of explanation
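A related pattern worth sketching here, offered as an assumption-labeled extension rather than part of the lecture, is overriding a parent method while still reusing it through super(); the class bodies below are simplified stand-ins for the Car and Hybrid examples.

class Car:
    def __init__(self):
        self.mileage = 0

    def drive(self, mileage=0):
        self.mileage += mileage
        print("Driven {} miles total.".format(self.mileage))

class Hybrid(Car):
    def drive(self, mileage=0, mpg=50.0):
        # Reuse the parent's bookkeeping, then add hybrid-specific output.
        super().drive(mileage)
        print("Averaging {:.1f} MPG.".format(mpg))

h = Hybrid()
h.drive(100)           # parent logic runs first, then the extra message
print(Hybrid.__mro__)  # method resolution order: Hybrid -> Car -> object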
6,457
Given the following text description, write Python code to implement the functionality described below step by step Description: define PID weights PID controller minimizes error by adjusting a control variable (eg power supplied) to a new value determined by a weighted sum of present (P), past (I), and future (D) error values. Step1: how quickly does it converge? green is desired value; blue is actual
Python Code: P = 1.2 # weight current errors more I = 1 D = 0.0 # ignore future potential errors L = 50 # number of iterations pid = PID.PID(P, I, D) pid.SetPoint=0.0 pid.setSampleTime(0.01) END = L feedback = 0 feedback_list = [] time_list = [] setpoint_list = [] for i in range(1, END): pid.update(feedback) output = pid.output if pid.SetPoint > 0: feedback += (output - (1/i)) if i>9: pid.SetPoint = 1 time.sleep(0.02) feedback_list.append(feedback) setpoint_list.append(pid.SetPoint) time_list.append(i) time_sm = np.array(time_list) time_smooth = np.linspace(time_sm.min(), time_sm.max(), 300) feedback_smooth = spline(time_list, feedback_list, time_smooth) Explanation: define PID weights PID controller minimizes error by adjusting a control variable (eg power supplied) to a new value determined by a weighted sum of present (P), past (I), and future (D) error values. End of explanation plt.plot(time_smooth, feedback_smooth) plt.plot(time_list, setpoint_list) plt.xlim((0, L)) plt.ylim((min(feedback_list)-0.5, max(feedback_list)+0.5)) plt.xlabel('time (s)') plt.ylabel('PID (PV)') plt.title('TEST PID') plt.ylim((1-0.5, 1+0.5)) Explanation: how quickly does it converge? green is desired value; blue is actual End of explanation
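The snippet above leans on an external PID module (and on imports such as time, numpy, matplotlib, and scipy's spline that are not shown), so as a hedged sketch of what one controller update typically computes, here is a minimal stand-alone version; the class name and the toy feedback loop are illustrative assumptions, not the module used above.

# Illustrative PID update: weighted sum of present (P), accumulated past (I),
# and rate-of-change (D) error. This is not the PID module imported above.
class SimplePID:
    def __init__(self, kp, ki, kd, setpoint=0.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.last_error = 0.0

    def update(self, measurement, dt=1.0):
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.last_error) / dt
        self.last_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = SimplePID(kp=1.2, ki=1.0, kd=0.0, setpoint=1.0)
value = 0.0
for _ in range(20):
    value += pid.update(value)   # toy plant: the controller output feeds straight back into the state
print(round(value, 3))           # oscillates toward the setpoint in this toy loop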
6,458
Given the following text description, write Python code to implement the functionality described below step by step Description: Step5: Table of Contents <p><div class="lev1 toc-item"><a href="#Benchmark-of-the-SHA256-hash-function,-with-Python,-Cython-and-Numba" data-toc-modified-id="Benchmark-of-the-SHA256-hash-function,-with-Python,-Cython-and-Numba-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Benchmark of the SHA256 hash function, with Python, Cython and Numba</a></div><div class="lev2 toc-item"><a href="#What-is-a-hash-function?" data-toc-modified-id="What-is-a-hash-function?-11"><span class="toc-item-num">1.1&nbsp;&nbsp;</span>What is a hash function?</a></div><div class="lev2 toc-item"><a href="#Common-API-for-the-different-classes" data-toc-modified-id="Common-API-for-the-different-classes-12"><span class="toc-item-num">1.2&nbsp;&nbsp;</span>Common API for the different classes</a></div><div class="lev2 toc-item"><a href="#Checking-the-the-hashlib-module-in-Python-standard-library" data-toc-modified-id="Checking-the-the-hashlib-module-in-Python-standard-library-13"><span class="toc-item-num">1.3&nbsp;&nbsp;</span>Checking the <a href="https Step6: Checking the the hashlib module in Python standard library Step7: We can check the available algorithms, some of them being guaranteed to be on any platform, some are not. Step8: I will need at least this one Step9: Lets check that they have the block size and digest size announced Step12: Pure Python code for the SHA-2 hashing function Let now study and implement a last hashing function, again slightly harder to write but more secure Step15: As SHA-2 plays with big-endian and little-endian integers, and at the end it requires a leftshift to combine the 5 hash pieces into one. Step17: The SHA2 class I will use a simple class, very similar to the class used for the SHA-1 algorithm (see above). It is a direct implementation of the pseudo-code, as given for instance on the Wikipedia page. I will only implement the simpler one, SHA-256, of digest size of 256 bits. Other variants are SHA-224, SHA-384, SHA-512 (and others include SHA-512/224, SHA-512/256). Step19: We can also write a function to directly compute the hex digest from some bytes data. Step20: Check on SHA-2 Let try the example from SHA-2 Wikipedia page Step21: Even a small change in the message will (with overwhelming probability) result in a mostly different hash, due to the avalanche effect. For example, adding a period at the end of the sentence Step22: The hash of the zero-length string is Step23: $\implies$ We obtained the same result, OK our function works! Trying 1000 random examples On a small sentence Step24: It starts to look good. Step25: On some random data Step26: Numba-powered code for the SHA-2 hashing function Requirements You need numba to be installed. Step31: Useful functions the SHA-2 algorithm Let just add the numba.jit decorator to every function we defined before Step33: The SHA2_Numba class And similarly for the SHA2 class, with the numba.jit decorator to the update function. Step35: We can also write a function to directly compute the hex digest from some bytes data. Step36: Check on SHA-2 Let try the example from SHA-2 Wikipedia page Step37: I failed to make numba.jit work on that function Step40: Useful functions the SHA-2 algorithm For the functions defined before, we rewrite them with type annotations in %%cython cells. All variables are int, i.e., 32-bits integer (64-bits are long). 
Step43: On basic functions like this, of course we don't get any speedup with Cython Step44: On basic functions like this, of course we don't get any speedup with Cython Step49: The SHA2_Cython class And similarly for the SHA2 class, we write it in a %%cython cell, and we type everything. Step51: We can also write a function to directly compute the hex digest from some bytes data. Step52: Check on SHA-2 Let try the example from SHA-2 Wikipedia page
Python Code: class Hash(object): Common class for all hash methods. It copies the one of the hashlib module (https://docs.python.org/3.5/library/hashlib.html). def __init__(self, *args, **kwargs): Create the Hash object. self.name = self.__class__.__name__ # https://docs.python.org/3.5/library/hashlib.html#hashlib.hash.name self.byteorder = 'little' self.digest_size = 0 # https://docs.python.org/3.5/library/hashlib.html#hashlib.hash.digest_size self.block_size = 0 # https://docs.python.org/3.5/library/hashlib.html#hashlib.hash.block_size def __str__(self): return self.name def update(self, arg): Update the hash object with the object arg, which must be interpretable as a buffer of bytes. pass def digest(self): Return the digest of the data passed to the update() method so far. This is a bytes object of size digest_size which may contain bytes in the whole range from 0 to 255. return b"" def hexdigest(self): Like digest() except the digest is returned as a string object of double length, containing only hexadecimal digits. This may be used to exchange the value safely in email or other non-binary environments. digest = self.digest() raw = digest.to_bytes(self.digest_size, byteorder=self.byteorder) format_str = '{:0' + str(2 * self.digest_size) + 'x}' return format_str.format(int.from_bytes(raw, byteorder='big')) Explanation: Table of Contents <p><div class="lev1 toc-item"><a href="#Benchmark-of-the-SHA256-hash-function,-with-Python,-Cython-and-Numba" data-toc-modified-id="Benchmark-of-the-SHA256-hash-function,-with-Python,-Cython-and-Numba-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Benchmark of the SHA256 hash function, with Python, Cython and Numba</a></div><div class="lev2 toc-item"><a href="#What-is-a-hash-function?" data-toc-modified-id="What-is-a-hash-function?-11"><span class="toc-item-num">1.1&nbsp;&nbsp;</span>What is a hash function?</a></div><div class="lev2 toc-item"><a href="#Common-API-for-the-different-classes" data-toc-modified-id="Common-API-for-the-different-classes-12"><span class="toc-item-num">1.2&nbsp;&nbsp;</span>Common API for the different classes</a></div><div class="lev2 toc-item"><a href="#Checking-the-the-hashlib-module-in-Python-standard-library" data-toc-modified-id="Checking-the-the-hashlib-module-in-Python-standard-library-13"><span class="toc-item-num">1.3&nbsp;&nbsp;</span>Checking the <a href="https://docs.python.org/3/library/hashlib.html" target="_blank">the <code>hashlib</code> module in Python standard library</a></a></div><div class="lev2 toc-item"><a href="#Pure-Python-code-for-the-SHA-2-hashing-function" data-toc-modified-id="Pure-Python-code-for-the-SHA-2-hashing-function-14"><span class="toc-item-num">1.4&nbsp;&nbsp;</span>Pure Python code for the SHA-2 hashing function</a></div><div class="lev3 toc-item"><a href="#Useful-functions-the-SHA-2-algorithm" data-toc-modified-id="Useful-functions-the-SHA-2-algorithm-141"><span class="toc-item-num">1.4.1&nbsp;&nbsp;</span>Useful functions the SHA-2 algorithm</a></div><div class="lev3 toc-item"><a href="#The-SHA2-class" data-toc-modified-id="The-SHA2-class-142"><span class="toc-item-num">1.4.2&nbsp;&nbsp;</span>The <code>SHA2</code> class</a></div><div class="lev3 toc-item"><a href="#Check-on-SHA-2" data-toc-modified-id="Check-on-SHA-2-143"><span class="toc-item-num">1.4.3&nbsp;&nbsp;</span>Check on SHA-2</a></div><div class="lev3 toc-item"><a href="#Trying-1000-random-examples" data-toc-modified-id="Trying-1000-random-examples-144"><span class="toc-item-num">1.4.4&nbsp;&nbsp;</span>Trying 1000 
random examples</a></div><div class="lev2 toc-item"><a href="#Numba-powered-code-for-the-SHA-2-hashing-function" data-toc-modified-id="Numba-powered-code-for-the-SHA-2-hashing-function-15"><span class="toc-item-num">1.5&nbsp;&nbsp;</span>Numba-powered code for the SHA-2 hashing function</a></div><div class="lev3 toc-item"><a href="#Requirements" data-toc-modified-id="Requirements-151"><span class="toc-item-num">1.5.1&nbsp;&nbsp;</span>Requirements</a></div><div class="lev3 toc-item"><a href="#Useful-functions-the-SHA-2-algorithm" data-toc-modified-id="Useful-functions-the-SHA-2-algorithm-152"><span class="toc-item-num">1.5.2&nbsp;&nbsp;</span>Useful functions the SHA-2 algorithm</a></div><div class="lev3 toc-item"><a href="#The-SHA2_Numba-class" data-toc-modified-id="The-SHA2_Numba-class-153"><span class="toc-item-num">1.5.3&nbsp;&nbsp;</span>The <code>SHA2_Numba</code> class</a></div><div class="lev3 toc-item"><a href="#Check-on-SHA-2" data-toc-modified-id="Check-on-SHA-2-154"><span class="toc-item-num">1.5.4&nbsp;&nbsp;</span>Check on SHA-2</a></div><div class="lev2 toc-item"><a href="#Cython-power-code-for-the-SHA-2-hashing-function" data-toc-modified-id="Cython-power-code-for-the-SHA-2-hashing-function-16"><span class="toc-item-num">1.6&nbsp;&nbsp;</span>Cython-power code for the <code>SHA-2</code> hashing function</a></div><div class="lev3 toc-item"><a href="#Requirements" data-toc-modified-id="Requirements-161"><span class="toc-item-num">1.6.1&nbsp;&nbsp;</span>Requirements</a></div><div class="lev3 toc-item"><a href="#Useful-functions-the-SHA-2-algorithm" data-toc-modified-id="Useful-functions-the-SHA-2-algorithm-162"><span class="toc-item-num">1.6.2&nbsp;&nbsp;</span>Useful functions the SHA-2 algorithm</a></div><div class="lev3 toc-item"><a href="#The-SHA2_Cython-class" data-toc-modified-id="The-SHA2_Cython-class-163"><span class="toc-item-num">1.6.3&nbsp;&nbsp;</span>The <code>SHA2_Cython</code> class</a></div><div class="lev3 toc-item"><a href="#Check-on-SHA-2" data-toc-modified-id="Check-on-SHA-2-164"><span class="toc-item-num">1.6.4&nbsp;&nbsp;</span>Check on SHA-2</a></div><div class="lev2 toc-item"><a href="#Conclusion" data-toc-modified-id="Conclusion-17"><span class="toc-item-num">1.7&nbsp;&nbsp;</span>Conclusion</a></div><div class="lev3 toc-item"><a href="#Bonus" data-toc-modified-id="Bonus-171"><span class="toc-item-num">1.7.1&nbsp;&nbsp;</span>Bonus</a></div> # Benchmark of the SHA256 hash function, with Python, Cython and Numba This small [Jupyter notebook](https://www.Jupyter.org/) is a short experiment, to compare the time complexity of three different implementations of the [SHA-256 hash function](https://en.wikipedia.org/wiki/SHA-2), in pure [Python](https://www.Python.org/), with [Cython](http://Cython.org/), and with [Numba](http://Numba.PyData.org/). - *Reference*: Wikipedia pages on [Hash functions](https://en.wikipedia.org/wiki/Hash_function) and [SHA-2](https://en.wikipedia.org/wiki/SHA-2). - *Date*: 21 June 2017. - *Author*: [Lilian Besson](https://GitHub.com/Naereen/notebooks). - *License*: [MIT Licensed](https://LBesson.MIT-License.org/). ---- ## What is a hash function? > TL;DR : [Hash functions](https://en.wikipedia.org/wiki/Hash_function) and [cryptographic hashing functions](https://en.wikipedia.org/wiki/Cryptographic_hash_function) on Wikipedia. 
---- ## Common API for the different classes I will copy the API proposed by [the `hashlib` module in Python standard library](https://docs.python.org/3/library/hashlib.html), so it will be very easy to compare my implementations with the one provided with your default [Python](https://www.Python.org/) installation. End of explanation import hashlib Explanation: Checking the the hashlib module in Python standard library End of explanation list(hashlib.algorithms_available) Explanation: We can check the available algorithms, some of them being guaranteed to be on any platform, some are not. End of explanation assert 'SHA256' in hashlib.algorithms_available Explanation: I will need at least this one: End of explanation name = 'SHA256' s = hashlib.sha256() print("For {:<8} : the block size is {:<3} and the digest size is {:<2}.".format(name, s.block_size, s.digest_size)) Explanation: Lets check that they have the block size and digest size announced: End of explanation def leftrotate(x, c): Left rotate the number x by c bytes. x &= 0xFFFFFFFF return ((x << c) | (x >> (32 - c))) & 0xFFFFFFFF def rightrotate(x, c): Right rotate the number x by c bytes. x &= 0xFFFFFFFF return ((x >> c) | (x << (32 - c))) & 0xFFFFFFFF Explanation: Pure Python code for the SHA-2 hashing function Let now study and implement a last hashing function, again slightly harder to write but more secure: SHA-2, "Secure Hash Algorithm, version 2". See the SHA-2 hashing function on Wikipedia, if needed. <center><span style="font-size: large; color: green;"><i>Remark</i>: it is not (yet) considered broken, and it is the military standard for security and cryptographic hashing. SHA-3 is preferred for security purposes.</span></center> Useful functions the SHA-2 algorithm This is exactly like for MD5. But SHA-2 requires right-rotate as well. End of explanation def leftshift(x, c): Left shift the number x by c bytes. return x << c def rightshift(x, c): Right shift the number x by c bytes. return x >> c Explanation: As SHA-2 plays with big-endian and little-endian integers, and at the end it requires a leftshift to combine the 5 hash pieces into one. End of explanation class SHA2(Hash): SHA256 hashing, see https://en.wikipedia.org/wiki/SHA-2#Pseudocode. 
def __init__(self): self.name = "SHA256" self.byteorder = 'big' self.block_size = 64 self.digest_size = 32 # Note 2: For each round, there is one round constant k[i] and one entry in the message schedule array w[i], 0 ≤ i ≤ 63 # Note 3: The compression function uses 8 working variables, a through h # Note 4: Big-endian convention is used when expressing the constants in this pseudocode, # and when parsing message block data from bytes to words, for example, # the first word of the input message "abc" after padding is 0x61626380 # Initialize hash values: # (first 32 bits of the fractional parts of the square roots of the first 8 primes 2..19): h0 = 0x6a09e667 h1 = 0xbb67ae85 h2 = 0x3c6ef372 h3 = 0xa54ff53a h4 = 0x510e527f h5 = 0x9b05688c h6 = 0x1f83d9ab h7 = 0x5be0cd19 # Initialize array of round constants: # (first 32 bits of the fractional parts of the cube roots of the first 64 primes 2..311): self.k = [ 0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5, 0x3956c25b, 0x59f111f1, 0x923f82a4, 0xab1c5ed5, 0xd807aa98, 0x12835b01, 0x243185be, 0x550c7dc3, 0x72be5d74, 0x80deb1fe, 0x9bdc06a7, 0xc19bf174, 0xe49b69c1, 0xefbe4786, 0x0fc19dc6, 0x240ca1cc, 0x2de92c6f, 0x4a7484aa, 0x5cb0a9dc, 0x76f988da, 0x983e5152, 0xa831c66d, 0xb00327c8, 0xbf597fc7, 0xc6e00bf3, 0xd5a79147, 0x06ca6351, 0x14292967, 0x27b70a85, 0x2e1b2138, 0x4d2c6dfc, 0x53380d13, 0x650a7354, 0x766a0abb, 0x81c2c92e, 0x92722c85, 0xa2bfe8a1, 0xa81a664b, 0xc24b8b70, 0xc76c51a3, 0xd192e819, 0xd6990624, 0xf40e3585, 0x106aa070, 0x19a4c116, 0x1e376c08, 0x2748774c, 0x34b0bcb5, 0x391c0cb3, 0x4ed8aa4a, 0x5b9cca4f, 0x682e6ff3, 0x748f82ee, 0x78a5636f, 0x84c87814, 0x8cc70208, 0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2 ] # Store them self.hash_pieces = [h0, h1, h2, h3, h4, h5, h6, h7] def update(self, arg): h0, h1, h2, h3, h4, h5, h6, h7 = self.hash_pieces # 1. Pre-processing, exactly like MD5 data = bytearray(arg) orig_len_in_bits = (8 * len(data)) & 0xFFFFFFFFFFFFFFFF # 1.a. Add a single '1' bit at the end of the input bits data.append(0x80) # 1.b. Padding with zeros as long as the input bits length ≡ 448 (mod 512) while len(data) % 64 != 56: data.append(0) # 1.c. append original length in bits mod (2 pow 64) to message data += orig_len_in_bits.to_bytes(8, byteorder='big') assert len(data) % 64 == 0, "Error in padding" # 2. Computations # Process the message in successive 512-bit = 64-bytes chunks: for offset in range(0, len(data), 64): # 2.a. 512-bits = 64-bytes chunks chunks = data[offset : offset + 64] w = [0 for i in range(64)] # 2.b. Break chunk into sixteen 32-bit = 4-bytes words w[i], 0 ≤ i ≤ 15 for i in range(16): w[i] = int.from_bytes(chunks[4*i : 4*i + 4], byteorder='big') # 2.c. Extend the first 16 words into the remaining 48 # words w[16..63] of the message schedule array: for i in range(16, 64): s0 = (rightrotate(w[i-15], 7) ^ rightrotate(w[i-15], 18) ^ rightshift(w[i-15], 3)) & 0xFFFFFFFF s1 = (rightrotate(w[i-2], 17) ^ rightrotate(w[i-2], 19) ^ rightshift(w[i-2], 10)) & 0xFFFFFFFF w[i] = (w[i-16] + s0 + w[i-7] + s1) & 0xFFFFFFFF # 2.d. Initialize hash value for this chunk a, b, c, d, e, f, g, h = h0, h1, h2, h3, h4, h5, h6, h7 # 2.e. Main loop, cf. 
https://tools.ietf.org/html/rfc6234 for i in range(64): S1 = (rightrotate(e, 6) ^ rightrotate(e, 11) ^ rightrotate(e, 25)) & 0xFFFFFFFF ch = ((e & f) ^ ((~e) & g)) & 0xFFFFFFFF temp1 = (h + S1 + ch + self.k[i] + w[i]) & 0xFFFFFFFF S0 = (rightrotate(a, 2) ^ rightrotate(a, 13) ^ rightrotate(a, 22)) & 0xFFFFFFFF maj = ((a & b) ^ (a & c) ^ (b & c)) & 0xFFFFFFFF temp2 = (S0 + maj) & 0xFFFFFFFF new_a = (temp1 + temp2) & 0xFFFFFFFF new_e = (d + temp1) & 0xFFFFFFFF # Rotate the 8 variables a, b, c, d, e, f, g, h = new_a, a, b, c, new_e, e, f, g # Add this chunk's hash to result so far: h0 = (h0 + a) & 0xFFFFFFFF h1 = (h1 + b) & 0xFFFFFFFF h2 = (h2 + c) & 0xFFFFFFFF h3 = (h3 + d) & 0xFFFFFFFF h4 = (h4 + e) & 0xFFFFFFFF h5 = (h5 + f) & 0xFFFFFFFF h6 = (h6 + g) & 0xFFFFFFFF h7 = (h7 + h) & 0xFFFFFFFF # 3. Conclusion self.hash_pieces = [h0, h1, h2, h3, h4, h5, h6, h7] def digest(self): # h0 append h1 append h2 append h3 append h4 append h5 append h6 append h7 return sum(leftshift(x, 32 * i) for i, x in enumerate(self.hash_pieces[::-1])) Explanation: The SHA2 class I will use a simple class, very similar to the class used for the SHA-1 algorithm (see above). It is a direct implementation of the pseudo-code, as given for instance on the Wikipedia page. I will only implement the simpler one, SHA-256, of digest size of 256 bits. Other variants are SHA-224, SHA-384, SHA-512 (and others include SHA-512/224, SHA-512/256). End of explanation def hash_SHA2(data): Shortcut function to directly receive the hex digest from SHA2(data). h = SHA2() if isinstance(data, str): data = bytes(data, encoding='utf8') h.update(data) return h.hexdigest() Explanation: We can also write a function to directly compute the hex digest from some bytes data. End of explanation hash_SHA2("The quick brown fox jumps over the lazy dog") assert hash_SHA2("The quick brown fox jumps over the lazy dog") == 'd7a8fbb307d7809469ca9abcb0082e4f8d5651e46d3cdb762d02d0bf37c9e592' Explanation: Check on SHA-2 Let try the example from SHA-2 Wikipedia page : End of explanation hash_SHA2("The quick brown fox jumps over the lazy dog.") assert hash_SHA2("The quick brown fox jumps over the lazy dog.") == 'ef537f25c895bfa782526529a9b63d97aa631564d5d789c2b765448c8635fb6c' Explanation: Even a small change in the message will (with overwhelming probability) result in a mostly different hash, due to the avalanche effect. For example, adding a period at the end of the sentence: End of explanation hash_SHA2("") assert hash_SHA2("") == 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855' Explanation: The hash of the zero-length string is: End of explanation hash_SHA2("My name is Zorro !") h = hashlib.sha256() h.update(b"My name is Zorro !") h.hexdigest() Explanation: $\implies$ We obtained the same result, OK our function works! Trying 1000 random examples On a small sentence: End of explanation def true_hash_SHA2(data): h = hashlib.sha256() if isinstance(data, str): data = bytes(data, encoding='utf8') h.update(data) return h.hexdigest() Explanation: It starts to look good. 
End of explanation import numpy.random as nr alphabets = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz" def random_string(size=10000): return ''.join(alphabets[nr.randint(len(alphabets))] for _ in range(size)) random_string(10) from tqdm import tqdm_notebook as tqdm %%time for _ in tqdm(range(1000)): x = random_string() assert hash_SHA2(x) == true_hash_SHA2(x), "Error: x = {} gave two different SHA2 hashes: my implementation = {} != hashlib implementation = {}...".format(x, hash_SHA2(x), true_hash_SHA2(x)) Explanation: On some random data: End of explanation from numba import jit, jitclass Explanation: Numba-powered code for the SHA-2 hashing function Requirements You need numba to be installed. End of explanation @jit def leftrotate_numba(x, c): Left rotate the number x by c bytes. x &= 0xFFFFFFFF return ((x << c) | (x >> (32 - c))) & 0xFFFFFFFF @jit def rightrotate_numba(x, c): Right rotate the number x by c bytes. x &= 0xFFFFFFFF return ((x >> c) | (x << (32 - c))) & 0xFFFFFFFF @jit def leftshift_numba(x, c): Left shift the number x by c bytes. return x << c @jit def rightshift_numba(x, c): Right shift the number x by c bytes. return x >> c Explanation: Useful functions the SHA-2 algorithm Let just add the numba.jit decorator to every function we defined before: End of explanation class SHA2_Numba(Hash): SHA256 hashing, speed-up with Numba.jit, see https://en.wikipedia.org/wiki/SHA-2#Pseudocode. def __init__(self): self.name = "SHA256" self.byteorder = 'big' self.block_size = 64 self.digest_size = 32 # Note 2: For each round, there is one round constant k[i] and one entry in the message schedule array w[i], 0 ≤ i ≤ 63 # Note 3: The compression function uses 8 working variables, a through h # Note 4: Big-endian convention is used when expressing the constants in this pseudocode, # and when parsing message block data from bytes to words, for example, # the first word of the input message "abc" after padding is 0x61626380 # Initialize hash values: # (first 32 bits of the fractional parts of the square roots of the first 8 primes 2..19): h0 = 0x6a09e667 h1 = 0xbb67ae85 h2 = 0x3c6ef372 h3 = 0xa54ff53a h4 = 0x510e527f h5 = 0x9b05688c h6 = 0x1f83d9ab h7 = 0x5be0cd19 # Initialize array of round constants: # (first 32 bits of the fractional parts of the cube roots of the first 64 primes 2..311): self.k = [ 0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5, 0x3956c25b, 0x59f111f1, 0x923f82a4, 0xab1c5ed5, 0xd807aa98, 0x12835b01, 0x243185be, 0x550c7dc3, 0x72be5d74, 0x80deb1fe, 0x9bdc06a7, 0xc19bf174, 0xe49b69c1, 0xefbe4786, 0x0fc19dc6, 0x240ca1cc, 0x2de92c6f, 0x4a7484aa, 0x5cb0a9dc, 0x76f988da, 0x983e5152, 0xa831c66d, 0xb00327c8, 0xbf597fc7, 0xc6e00bf3, 0xd5a79147, 0x06ca6351, 0x14292967, 0x27b70a85, 0x2e1b2138, 0x4d2c6dfc, 0x53380d13, 0x650a7354, 0x766a0abb, 0x81c2c92e, 0x92722c85, 0xa2bfe8a1, 0xa81a664b, 0xc24b8b70, 0xc76c51a3, 0xd192e819, 0xd6990624, 0xf40e3585, 0x106aa070, 0x19a4c116, 0x1e376c08, 0x2748774c, 0x34b0bcb5, 0x391c0cb3, 0x4ed8aa4a, 0x5b9cca4f, 0x682e6ff3, 0x748f82ee, 0x78a5636f, 0x84c87814, 0x8cc70208, 0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2 ] # Store them self.hash_pieces = [h0, h1, h2, h3, h4, h5, h6, h7] @jit def update(self, arg): h0, h1, h2, h3, h4, h5, h6, h7 = self.hash_pieces # 1. Pre-processing, exactly like MD5 data = bytearray(arg) orig_len_in_bits = (8 * len(data)) & 0xFFFFFFFFFFFFFFFF # 1.a. Add a single '1' bit at the end of the input bits data.append(0x80) # 1.b. 
Padding with zeros as long as the input bits length ≡ 448 (mod 512) while len(data) % 64 != 56: data.append(0) # 1.c. append original length in bits mod (2 pow 64) to message data += orig_len_in_bits.to_bytes(8, byteorder='big') assert len(data) % 64 == 0, "Error in padding" # 2. Computations # Process the message in successive 512-bit = 64-bytes chunks: for offset in range(0, len(data), 64): # 2.a. 512-bits = 64-bytes chunks chunks = data[offset : offset + 64] w = [0 for i in range(64)] # 2.b. Break chunk into sixteen 32-bit = 4-bytes words w[i], 0 ≤ i ≤ 15 for i in range(16): w[i] = int.from_bytes(chunks[4*i : 4*i + 4], byteorder='big') # 2.c. Extend the first 16 words into the remaining 48 # words w[16..63] of the message schedule array: for i in range(16, 64): s0 = (rightrotate(w[i-15], 7) ^ rightrotate(w[i-15], 18) ^ rightshift(w[i-15], 3)) & 0xFFFFFFFF s1 = (rightrotate(w[i-2], 17) ^ rightrotate(w[i-2], 19) ^ rightshift(w[i-2], 10)) & 0xFFFFFFFF w[i] = (w[i-16] + s0 + w[i-7] + s1) & 0xFFFFFFFF # 2.d. Initialize hash value for this chunk a, b, c, d, e, f, g, h = h0, h1, h2, h3, h4, h5, h6, h7 # 2.e. Main loop, cf. https://tools.ietf.org/html/rfc6234 for i in range(64): S1 = (rightrotate(e, 6) ^ rightrotate(e, 11) ^ rightrotate(e, 25)) & 0xFFFFFFFF ch = ((e & f) ^ ((~e) & g)) & 0xFFFFFFFF temp1 = (h + S1 + ch + self.k[i] + w[i]) & 0xFFFFFFFF S0 = (rightrotate(a, 2) ^ rightrotate(a, 13) ^ rightrotate(a, 22)) & 0xFFFFFFFF maj = ((a & b) ^ (a & c) ^ (b & c)) & 0xFFFFFFFF temp2 = (S0 + maj) & 0xFFFFFFFF new_a = (temp1 + temp2) & 0xFFFFFFFF new_e = (d + temp1) & 0xFFFFFFFF # Rotate the 8 variables a, b, c, d, e, f, g, h = new_a, a, b, c, new_e, e, f, g # Add this chunk's hash to result so far: h0 = (h0 + a) & 0xFFFFFFFF h1 = (h1 + b) & 0xFFFFFFFF h2 = (h2 + c) & 0xFFFFFFFF h3 = (h3 + d) & 0xFFFFFFFF h4 = (h4 + e) & 0xFFFFFFFF h5 = (h5 + f) & 0xFFFFFFFF h6 = (h6 + g) & 0xFFFFFFFF h7 = (h7 + h) & 0xFFFFFFFF # 3. Conclusion self.hash_pieces = [h0, h1, h2, h3, h4, h5, h6, h7] def digest(self): # h0 append h1 append h2 append h3 append h4 append h5 append h6 append h7 return sum(leftshift(x, 32 * i) for i, x in enumerate(self.hash_pieces[::-1])) Explanation: The SHA2_Numba class And similarly for the SHA2 class, with the numba.jit decorator to the update function. End of explanation def hash_SHA2_Numba(data): Shortcut function to directly receive the hex digest from SHA2_Numba(data). h = SHA2_Numba() if isinstance(data, str): data = bytes(data, encoding='utf8') h.update(data) return h.hexdigest() Explanation: We can also write a function to directly compute the hex digest from some bytes data. End of explanation hash_SHA2_Numba("The quick brown fox jumps over the lazy dog") assert hash_SHA2_Numba("The quick brown fox jumps over the lazy dog") == 'd7a8fbb307d7809469ca9abcb0082e4f8d5651e46d3cdb762d02d0bf37c9e592' Explanation: Check on SHA-2 Let try the example from SHA-2 Wikipedia page : End of explanation %load_ext cython Explanation: I failed to make numba.jit work on that function :-( Cython-power code for the SHA-2 hashing function Requirements You need cython and the cython Jupyter extension to be installed. End of explanation %%cython cpdef int leftrotate_cython(int x, int c): Left rotate the number x by c bytes. return (x << c) | (x >> (32 - c)) cpdef int rightrotate_cython(int x, int c): Right rotate the number x by c bytes. return (x >> c) | (x << (32 - c)) leftrotate_cython? rightrotate_cython? 
Explanation: Useful functions the SHA-2 algorithm For the functions defined before, we rewrite them with type annotations in %%cython cells. All variables are int, i.e., 32-bits integer (64-bits are long). End of explanation from numpy.random import randint %timeit leftrotate(randint(0, 100000), 5) %timeit leftrotate_cython(randint(0, 100000), 5) %timeit rightrotate(randint(0, 100000), 5) %timeit rightrotate_cython(randint(0, 100000), 5) %%cython cpdef int leftshift_cython(int x, int c): Left shift the number x by c bytes. return x << c cpdef int rightshift_cython(int x, int c): Right shift the number x by c bytes. return x >> c leftshift_cython? rightshift_cython? Explanation: On basic functions like this, of course we don't get any speedup with Cython: End of explanation %timeit leftshift(randint(0, 100000), 5) %timeit leftshift_cython(randint(0, 100000), 5) %timeit rightshift(randint(0, 100000), 5) %timeit rightshift_cython(randint(0, 100000), 5) Explanation: On basic functions like this, of course we don't get any speedup with Cython: End of explanation %%cython # cython: c_string_type=unicode, c_string_encoding=utf8 cdef int rightrotate_cython(int x, int c): Right rotate the number x by c bytes. return (x >> c) | (x << (32 - c)) cdef int rightshift_cython(int x, int c): Right shift the number x by c bytes. return x >> c # See http://cython.readthedocs.io/en/latest/src/tutorial/array.html from cpython cimport array import array cdef array.array empty_64 = array.array('i', [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) cdef int[:] view_empty_64 = empty_64 cpdef void update_cython(int[:] hash_pieces, int[:] k, bytearray arg): One pass of the SHA-256 algorithm, update hash_pieces on place. # Extract the 8 variables cdef int h0 = hash_pieces[0], h1 = hash_pieces[1], h2 = hash_pieces[2], h3 = hash_pieces[3], h4 = hash_pieces[4], h5 = hash_pieces[5], h6 = hash_pieces[6], h7 = hash_pieces[7] # 1. Pre-processing, exactly like MD5 cdef bytearray data = arg cdef long orig_len_in_bits = 8 * len(data) # 1.a. Add a single '1' bit at the end of the input bits data.append(0x80) # 1.b. Padding with zeros as long as the input bits length ≡ 448 (mod 512) while len(data) % 64 != 56: data.append(0x0) # 1.c. append original length in bits mod (2 pow 64) to message data += orig_len_in_bits.to_bytes(8, byteorder='big') assert len(data) % 64 == 0, "Error in padding" # Declare loop indexes and variables cdef int offset, i cdef int a, b, c, d, e, f, g, h cdef int temp1, temp2 # 2. Computations # Process the message in successive 512-bit = 64-bytes chunks: cdef int[:] w = view_empty_64 for offset in range(0, len(data), 64): # 2.a. 512-bits = 64-bytes chunks # 2.b. Break chunk into sixteen 32-bit = 4-bytes words w[i], 0 ≤ i ≤ 15 for i in range(16): w[i] = int.from_bytes(data[offset : offset + 64][4*i : 4*i + 4], byteorder='big') # 2.c. Extend the first 16 words into the remaining 48 # words w[16..63] of the message schedule array: for i in range(16, 64): w[i] = w[i-16] + (rightrotate_cython(w[i-15], 7) ^ rightrotate_cython(w[i-15], 18) ^ rightshift_cython(w[i-15], 3)) + w[i-7] + (rightrotate_cython(w[i-2], 17) ^ rightrotate_cython(w[i-2], 19) ^ rightshift_cython(w[i-2], 10)) # 2.d. Initialize hash value for this chunk a = h0 b = h1 c = h2 d = h3 e = h4 f = h5 g = h6 h = h7 # 2.e. Main loop, cf. 
https://tools.ietf.org/html/rfc6234 for i in range(64): temp1 = h + (rightrotate_cython(e, 6) ^ rightrotate_cython(e, 11) ^ rightrotate_cython(e, 25)) + ((e & f) ^ ((~e) & g)) + k[i] + w[i] temp2 = (rightrotate_cython(a, 2) ^ rightrotate_cython(a, 13) ^ rightrotate_cython(a, 22)) + ((a & b) ^ (a & c) ^ (b & c)) # Rotate the 8 variables a, b, c, d, e, f, g, h = temp1 + temp2, a, b, c, d + temp1, e, f, g # Add this chunk's hash to result so far: h0 += a h1 += b h2 += c h3 += d h4 += e h5 += f h6 += g h7 += h # 3. Conclusion hash_pieces[0] = h0 hash_pieces[1] = h1 hash_pieces[2] = h2 hash_pieces[3] = h3 hash_pieces[4] = h4 hash_pieces[5] = h5 hash_pieces[6] = h6 hash_pieces[7] = h7 class SHA2_Cython(Hash): SHA256 hashing, speed-up with Numba.jit, see https://en.wikipedia.org/wiki/SHA-2#Pseudocode. def __init__(self): self.name = "SHA256" self.byteorder = 'big' self.block_size = 64 self.digest_size = 32 # Note 2: For each round, there is one round constant k[i] and one entry in the message schedule array w[i], 0 ≤ i ≤ 63 # Note 3: The compression function uses 8 working variables, a through h # Note 4: Big-endian convention is used when expressing the constants in this pseudocode, # and when parsing message block data from bytes to words, for example, # the first word of the input message "abc" after padding is 0x61626380 # Initialize hash values: # (first 32 bits of the fractional parts of the square roots of the first 8 primes 2..19): h0 = 0x6a09e667 h1 = 0xbb67ae85 h2 = 0x3c6ef372 h3 = 0xa54ff53a h4 = 0x510e527f h5 = 0x9b05688c h6 = 0x1f83d9ab h7 = 0x5be0cd19 # Initialize array of round constants: # (first 32 bits of the fractional parts of the cube roots of the first 64 primes 2..311): self.k = [ 0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5, 0x3956c25b, 0x59f111f1, 0x923f82a4, 0xab1c5ed5, 0xd807aa98, 0x12835b01, 0x243185be, 0x550c7dc3, 0x72be5d74, 0x80deb1fe, 0x9bdc06a7, 0xc19bf174, 0xe49b69c1, 0xefbe4786, 0x0fc19dc6, 0x240ca1cc, 0x2de92c6f, 0x4a7484aa, 0x5cb0a9dc, 0x76f988da, 0x983e5152, 0xa831c66d, 0xb00327c8, 0xbf597fc7, 0xc6e00bf3, 0xd5a79147, 0x06ca6351, 0x14292967, 0x27b70a85, 0x2e1b2138, 0x4d2c6dfc, 0x53380d13, 0x650a7354, 0x766a0abb, 0x81c2c92e, 0x92722c85, 0xa2bfe8a1, 0xa81a664b, 0xc24b8b70, 0xc76c51a3, 0xd192e819, 0xd6990624, 0xf40e3585, 0x106aa070, 0x19a4c116, 0x1e376c08, 0x2748774c, 0x34b0bcb5, 0x391c0cb3, 0x4ed8aa4a, 0x5b9cca4f, 0x682e6ff3, 0x748f82ee, 0x78a5636f, 0x84c87814, 0x8cc70208, 0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2 ] # Store them self.hash_pieces = [h0, h1, h2, h3, h4, h5, h6, h7] def update(self, data): update_cython(self.hash_pieces, self.k, data) def digest(self): # h0 append h1 append h2 append h3 append h4 append h5 append h6 append h7 return sum(leftshift(x, 32 * i) for i, x in enumerate(self.hash_pieces[::-1])) Explanation: The SHA2_Cython class And similarly for the SHA2 class, we write it in a %%cython cell, and we type everything. End of explanation def hash_SHA2_Cython(data): Shortcut function to directly receive the hex digest from SHA2_Cython(data). h = SHA2_Cython() if isinstance(data, str): data = bytes(data, encoding='utf8') print("type(data) =", type(data)) h.update(data) return h.hexdigest() data = bytes("", encoding='utf8') h = SHA2_Cython() h.hash_pieces[:1] type(h.hash_pieces) h.k[:1] type(h.k) data type(data) update_cython(h.hash_pieces, h.k, bytearray(data)) Explanation: We can also write a function to directly compute the hex digest from some bytes data. 
End of explanation hash_SHA2_Cython("The quick brown fox jumps over the lazy dog") assert hash_SHA2_Cython("The quick brown fox jumps over the lazy dog") == 'd7a8fbb307d7809469ca9abcb0082e4f8d5651e46d3cdb762d02d0bf37c9e592' Explanation: Check on SHA-2 Let try the example from SHA-2 Wikipedia page : End of explanation
6,459
Given the following text description, write Python code to implement the functionality described below step by step Description: Optimal Control Problem Minimize $$\int_0^Tf(t,x,u)~dt$$ subject to $$ \begin{cases} x'(t) = b(t,x,u)\ x(0) = x_0 \end{cases} $$ Step1: Hamilton-Jacobi-Bellman Equation Step2: Successive Approximation
Python Code: t, x, u= symbols('t x u') Vt, Vx = symbols('V_t V_x') f = x + 0.5 * u**2 b = x + u Explanation: Optimal Control Problem Minimize $$\int_0^Tf(t,x,u)~dt$$ subject to $$ \begin{cases} x'(t) = b(t,x,u)\ x(0) = x_0 \end{cases} $$ End of explanation hjbeq = r'\frac{\partial V}{\partial t} + \min_u \left[' + latex(f) + r'+\frac{\partial V}{\partial x}\left(' + latex(b)+ r'\right)\right] = 0' display(Math(hjbeq)) hjbbd = r'V(T,x)=0' display(Math(hjbbd)) Explanation: Hamilton-Jacobi-Bellman Equation End of explanation sahjbeq = r'\frac{\partial V_k}{\partial t} + ' + latex(f) + r'+\frac{\partial V_k}{\partial x} \left(' + latex(b) + r'\right)= 0' display(Math(sahjbeq.replace('u', 'u_k'))) sahjbbd = r'V_k(T,x)=0' display(Math(sahjbbd)) update = r'u_{k+1} = \arg\min_{u_k}\left[' + latex(f) + r'+\frac{\partial V_k}{\partial x} \left(' + latex(b) + r'\right)\right]' display(Math(update)) Explanation: Successive Approximation End of explanation
6,460
Given the following text description, write Python code to implement the functionality described below step by step Description: Python Basics Syntax Python is an object oriented scripting language and does not require a specific first or last line (such as <code>public static void main</code> in Java or <code>return</code> in C). There are no curly braces {} to define code blocks or semi-colons ; to end a line. Instead of braces, indentation is rigidly enforced to create a block of code. Step1: Arbitrary indentation can be used within a code block, as long as the indentation is consistent. Step2: Variables and Types Variables can be given alphanumeric names beginning with an underscore or letter. Variable types do not have to be declared and are inferred at run time. Step3: Strings can be declared with either single or double quotes. Step4: The scope of variables is local to the function, class, and file in that increasing order of scope. Global variables can also be declared. Step5: Modules and Import Files with a .py extension are known as Modules in Python. Modules are used to store functions, variables, and class definitions. Modules that are not part of the standard Python library are included in your program using the <code>import</code> statement. Step6: Whoops. Importing the <code>math</code> module allows us access to all of its functions, but we must call them in this way Step7: Alternatively, you can use the <code>from</code> keyword Step8: Using the <code>from</code> statement we can import everything from the math module. Disclaimer Step9: Strings As you may expect, Python has a powerful, full featured string module. Substrings Python strings can be substringed using bracket syntax Step10: Python is a 0-index based language. Generally whenever forming a range of values in Python, the first argument is inclusive whereas the second is not, i.e. <code>mystring[11 Step11: Using negative values, you can count positions backwards Step12: String Functions Here are some more useful string functions find Step13: Looks like nothing was found. -1 is returned by default. Step14: lower and upper Step15: replace Step16: Notice that replace returned a new string. Nothing was modified in place Step17: split Step18: join The <code>join</code> is useful for building strings from lists or other iterables. Call <code>join</code> on the desired separator Step19: For more information on string functions Step20: For more information on Lists Step21: Sets Python includes the set data structure which is an unordered collection with no duplicates Step22: Dictionaries Python supports dictionaries which can be thought of as an unordered list of key, value pairs. Keys can be any immutable type and are typically integers or strings. Values can be any object, even dictionaries. Dictionaries are created with curly braces {} Step23: Conditionals Python supports the standard if-else-if conditional expression Step24: Loops Python supports for, foreach, and while loops For (counting) Traditional counting loops are accomplished in Python with a combination of the <code>for</code> key word and the <code>range</code> function Step25: Foreach As it turns out, counting loops are just foreach loops in Python. The <code>range</code> function returns a list of integers over which <code>for in</code> iterates. 
This can be extended to any other iterable type Step26: While Python supports standard <code>while</code> loops Step27: Python does not have a construct for a do-while loop, though it can be accomplished using the <code>break</code> statement Step28: Functions Functions in Python do not have a distinction between those that do and do not return a value. If a value is returned, the type is not declared. Functions can be declared in any module without any distinction between static and non-static. Functions can even be declared within other functions The syntax is the following Step29: Functions can have optional arguments if a default value is provided in the function signature Step30: Python functions can be called using named arguments, instead of positional Step31: *args and **kwargs In Python, there is a special deferencing scheme that allows for defining and calling functions with argument lists or dictionaries. *args Step32: Argument lists can also be used in defining a function as such Step33: **kwargs Similarly, we can define a dictionary of named parameters Step34: Just as before, we can define a function taking an arbitrary dictionary Step35: return In Python functions, an arbitrary number of values can be returned Step36: Data Science Tutorial Now that we've covered some Python basics, we will begin a tutorial going through many tasks a data scientist may perform. We will obtain real world data and go through the process of auditing, analyzing, visualing, and building classifiers from the data. We will use a database of breast cancer data obtained from the University of Wisconsin Hospitals, Madison from Dr. William H. Wolberg. The data is a collection of samples from Dr. Wolberg's clinical cases with attributes pertaining to tumors and a class labeling the sample as benign or malignant. | Attribute | Domain | |--------------------------------|---------------------------------| | 1. Sample code number | id number | | 2. Clump Thickness | 1 - 10 | | 3. Uniformity of Cell Size | 1 - 10 | | 4. Uniformity of Cell Shape | 1 - 10 | | 5. Marginal Adhesion | 1 - 10 | | 6. Single Epithelial Cell Size | 1 - 10 | | 7. Bare Nuclei | 1 - 10 | | 8. Bland Chromatin | 1 - 10 | | 9. Normal Nucleoli | 1 - 10 | | 10. Mitoses | 1 - 10 | | 11. Class | (2 for benign, 4 for malignant) | For more information on this data set Step37: Now we'll specify the url of the file and the file name we will save to Step38: And make a call to <code>download_file</code> Step39: Now this might seem like overkill for downloading a single, small csv file, but we can use this same function to access countless APIs available on the World Wide Web by building an API request in the url. Wrangling the Data Now that we have some data, lets get it into a useful form. For this task we will use a package called pandas. pandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools for Python. The most fundamental data structure in pandas is the dataframe, which is similar to the data.frame data structure found in the R statistical programming language. For more information Step40: Whoops, looks like our csv file did not contain a header row. <code>read_csv</code> assumes the first row of the csv is the header by default. Lets check out the file located here Step41: Lets try the import again, this time specifying the names. 
When specifying names, the <code>read_csv</code> function requires us to set the <code>header</code> row number to <code>None</code> Step42: Lets take a look at some simple statistics for the clump_thickness column Step43: Referring to the documentation link above about the data, the count, range of values (min = 1, max = 10), and data type (dtype = float64) look correct. Lets take a look at another column, this time bare_nuclei Step44: Well at least the count is correct. We were expecting no more than 10 unique values and now the data type is an object. Whats up with our data? We have arrived at arguably the most important part of performing data science Step45: Using <code>unique</code> we can see that '?' is one of the distinct values that appears in this series. Looking again at the documentation for this data set, we find the following Step46: Here we have attempted to convert the bare_nuclei series to a numeric type. Lets see what the unique values are now. Step47: The decimal point after each number means that it is an integer value being represented by a floating point number. Now instead of our pesky '?' we have <code>nan</code> (not a number). <code>nan</code> is a construct used by pandas to represent the absence of value. It is a data type that comes from the package numpy, used internally by pandas, and is not part of the standard Python library. Now that we have <code>nan</code> values in place of '?', we can use some nice features in pandas to deal with these missing values. What we are about to do is what is called "imputing" or providing a replacement for missing values so the data set becomes easier to work with. There are a number of strategies for imputing missing values, all with their own pitfalls. In general, imputation introduces some degree of bias to the data, so the imputation strategy taken should be in an attempt to minimize that bias. Here, we will simply use the mean of all of the non-nan values in the series as a replacement. Since we already know that the data is integer in possible values, we will round the mean to the nearest whole number. Step48: <code>fillna</code> is a dataframe function that replaces all nan values with either a scalar value, a series of values with the same indices as found in the dataframe, or a dataframe that is indexed by the columns of the target dataframe. <code>cancer_data.mean().round()</code> will take the mean of each column (this computation ignores the currently present nan values), then round, and return a dataframe indexed by the columns of the original dataframe Step49: <code>inplace=True</code> allows us to make this modification directly on the dataframe, without having to do any assignment. Now that we have figured out how to impute these missing values in a single column, lets start over and quickly apply this technique to the entire dataframe. Step50: Structurally, Pandas dataframes are a collection of Series objects sharing a common index. In general, the Series object and Dataframe object share a large number of functions with some behavioral differences. In other words, whatever computation you can do on a single column can generally be applied to the entire dataframe. Now we can use the dataframe version of <code>describe</code> to get an overview of all of our data Step51: Visualizing the Data Another important tool in the data scientist's toolbox is the ability to create visualizations from data. Visualizing data is often the most logical place to start getting a deeper intuition of the data. 
This intuition will shape and drive your analysis. Even more important than visualizing data for your own personal benefit, it is often the job of the data scientist to use the data to tell a story. Creating illustrative visuals that succinctly convey an idea are the best way to tell that story, especially to stakeholders with less technical skillsets. Here we will be using a Python package called ggplot (https Step52: So we enabled plotting in IPython and imported everything from the ggplot package. Now we'll create a plot and then break down the components Step53: A plot begins with the <code>ggplot</code> function. Here, we pass in the cancer_data pandas dataframe and a special function called <code>aes</code> (short for aesthetic). The values provided to <code>aes</code> change depending on which type of plot is being used. Here we are going to make a histogram from the clump_thickness column in cancer_data, so that column name needs to be passed as the x parameter to <code>aes</code>. The grammar of graphics is based off of a concept of "geoms" (short for geometric objects). These geoms provide granular control of the plot and are progressively added to the base call to <code>ggplot</code> with + syntax. Lets say we wanted to show the mean clump_thickness on this plot. We could do something like the following Step54: As you can see, each geom has its own set of parameters specific to the appearance of that geom (also called aesthetics). Lets try a scatter plot to get some multi-variable action Step55: Sometimes when working with integer data, or data that takes on a limited range of values, it is easier to visualize the plot with added jitter to the points. We can do that by adding an aesthetic to <code>geom_point</code>. Step56: With a simple aesthetic addition, we can see how these two variables play into our cancer classification Step57: By adding <code>color = 'class'</code> as a parameter to the aes function, we now give a color to each unique value found in that column and automatically get a legend. Remember, 2 is benign and 4 is malignant. We can also do things such as add a title or change the axis labeling with geoms Step58: There is definitely some patterning going on in that plot. A slightly different way to convey this idea is to use faceting. Faceting is the creation of multiple related plots arranged by the values of a given faceted variable Step59: Rather than set the color equal to the class, we have created two plots based off of the class. With a facet, we can get very detailed. Lets through some more variables into the mix Step60: Unfortunately, legends for faceting are not yet implemented in the Python ggplot package. In this example we faceted on the x-axis with clump_thickness and along the y-axis with marginal_adhesion, then created 100 plots of uniformity_cell_shape vs. bare_nuclei effect on class. I highly encourage you to check out https Step61: Here we call <code>values</code> on the dataframe to extract the values stored in the dataframe as an array of numpy arrays with the same dimensions as our subsetted dataframe. Numpy is a powerful, high performance scientific computing package that implements arrays. It is used internally by pandas. We will use <code>labels</code> and <code>features</code> later on in our machine learning classifier Step62: An important concept in machine learning is to split the data set into training data and testing data. The machine learning algorithm will use the subset of training data to build a classifier to predict labels. 
We then test the accuracy of this classifier on the subset of testing data. This is done in order to prevent overfitting the classifier to one given set of data. Overfitting is a major concern in the design of machine learning algorithms. Conceptually, overfitting is when a classifier is really good at predicting the data used to build it, but isn't robust or general enough to predict new, yet unseen data all that well. To perform machine learning, we will use a package called scikit-learn (sklearn for short). The sklearn cross_validation module contains a function called <code>train_test_split</code> that will take in features and labels, and randomly select values into the training and testing subsets Step63: For this example, we will build a Decision Tree Classifier. The goal of a decision tree is to create a prediction by outlining a simple tree of decision rules. These rules are built from the training data by slicing the data on simple boundaries and trying to minimize the prediction error of that boundary. More details on decision trees can be found here Step64: Next, we create a variable to store the classifier Step65: Then we have to fit the classifier to the training data. Both the training features (uniformity_cell_shape and bare_nuclei) and the labels (benign vs. malignant) are passed to the fit function Step66: The classifier is now ready to make some predictions. We can use the score function to see how accurate the classifier is on the test data. The score function will take the data in <code>features_test</code>, make a prediction of benign or malignant based on the decision tree that was fit to the training data, and compare that prediction to the true values in <code>labels_test</code> Step67: Nearly all classifiers, decision trees included, will have parameters that can be tuned to build a more accurate model. Without any parameter tuning and using just two features we have made a pretty accurate prediction. Good job! To get a better idea of what is going on, I have included a helper function to plot our test data along with the decision boundary
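Since the class_vis helper mentioned here is not reproduced in this document, the following is a minimal, hypothetical sketch of what such a decision-boundary plot could look like. It assumes matplotlib and numpy plus the names used in the code below (clf, features_test, labels_test); the actual prettyPicture implementation may differ.

def plot_decision_boundary(clf, features_test, labels_test):
    import numpy as np
    import matplotlib.pyplot as plt
    # evaluate the fitted classifier on a dense grid covering the 1-10 feature range
    xx, yy = np.meshgrid(np.arange(0.5, 10.5, 0.1), np.arange(0.5, 10.5, 0.1))
    grid_predictions = clf.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
    plt.pcolormesh(xx, yy, grid_predictions, cmap=plt.cm.Pastel2)
    # overlay the held-out test points, colored by their true class (2 = benign, 4 = malignant)
    plt.scatter(features_test[:, 0], features_test[:, 1], c=labels_test, edgecolors='k')
    plt.xlabel('uniformity_cell_shape')
    plt.ylabel('bare_nuclei')
    plt.show()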
Python Code: # This is a comment if (3 < 2): print "True" # Another Comment. This print syntax only works in Python 2, not 3 else: print "False" Explanation: Python Basics Syntax Python is an object oriented scripting language and does not require a specific first or last line (such as <code>public static void main</code> in Java or <code>return</code> in C). There are no curly braces {} to define code blocks or semi-colons ; to end a line. Instead of braces, indentation is rigidly enforced to create a block of code. End of explanation if (1 == 1): print "We're in " print "Deep Trouble:" if (0 > -1): print "This works " print "just fine." Explanation: Arbitrary indentation can be used within a code block, as long as the indentation is consistent. End of explanation a = 1 print type(a) # Built in function b = 2.5 print type(b) Explanation: Variables and Types Variables can be given alphanumeric names beginning with an underscore or letter. Variable types do not have to be declared and are inferred at run time. End of explanation c1 = "Go " c2 = 'Gators' c3 = c1 + c2 print c3 print type(c3) Explanation: Strings can be declared with either single or double quotes. End of explanation print "b used to be", b # Prints arguments with a space separator # Our first function declaration def sum(): global b b = a + b sum() # calling sum # using this syntax, the arguments can be of any type that supports a string representation. No casting needed. print "Now b is", b Explanation: The scope of variables is local to the function, class, and file in that increasing order of scope. Global variables can also be declared. End of explanation # To use Math, we must import it import math print cos(0) Explanation: Modules and Import Files with a .py extension are known as Modules in Python. Modules are used to store functions, variables, and class definitions. Modules that are not part of the standard Python library are included in your program using the <code>import</code> statement. End of explanation print math.cos(0) Explanation: Whoops. Importing the <code>math</code> module allows us access to all of its functions, but we must call them in this way End of explanation from math import cos print cos(math.pi) # we only imported cos, not the pi constant Explanation: Alternatively, you can use the <code>from</code> keyword End of explanation from math import * print sin(pi/2) # now we don't have to make a call to math Explanation: Using the <code>from</code> statement we can import everything from the math module. Disclaimer: many Pythonistas discourage doing this for performance reasons. Just import what you need End of explanation mystring = "Go Gators, Come on Gators, Get up and go!" print mystring[11:25] Explanation: Strings As you may expect, Python has a powerful, full featured string module. Substrings Python strings can be substringed using bracket syntax End of explanation print mystring[:9] # all characters before the 9th index print mystring[27:] # all characters at or after the 27th print mystring[:] # you can even omit both arguments Explanation: Python is a 0-index based language. Generally whenever forming a range of values in Python, the first argument is inclusive whereas the second is not, i.e. <code>mystring[11:25]</code> returns characters 11 through 24. 
You can omit the first or second argument End of explanation print mystring[-3:-1] Explanation: Using negative values, you can count positions backwards End of explanation print mystring.find("Gators") # returns the index of the first occurence of Gators print mystring.find("Gators", 4) # specify an index on which to begin searching print mystring.find("Gators", 4, 19) # specify both begin and end indexes to search Explanation: String Functions Here are some more useful string functions find End of explanation print mystring.find("Seminoles") # no Seminoles here Explanation: Looks like nothing was found. -1 is returned by default. End of explanation print mystring.lower() print mystring.upper() Explanation: lower and upper End of explanation print mystring.replace("Gators", "Seminoles") # replaces all occurences of Gators with Seminoles print mystring Explanation: replace End of explanation print mystring.replace("Gators", "Seminoles", 1) # limit the number of replacements Explanation: Notice that replace returned a new string. Nothing was modified in place End of explanation print mystring.split() # returns a list of strings broken by a space by default print mystring.split(',') # you can also define the separator Explanation: split End of explanation print ' '.join(["Go", "Gators"]) Explanation: join The <code>join</code> is useful for building strings from lists or other iterables. Call <code>join</code> on the desired separator End of explanation mylist = [1, 2, 3, 4, 'five'] print mylist mylist.append(6.0) # add an item to the end of the list print mylist mylist.extend([8, 'nine']) # extend the list with the contents of another list print mylist mylist.insert(6, 7) # insert the number 7 at index 6 print mylist mylist.remove('five') # removes the first matching occurence print mylist popped = mylist.pop() # by default, the last item in the list is removed and returned print popped print mylist popped2 = mylist.pop(4) # pops at at index print popped2 print mylist print len(mylist) # returns the length of any iterable such as lists and strings mylist.extend(range(-3, 0)) # the range function returns a list from -3 inclusive to 0 non inclusive print mylist # default list sorting. When more complex objects are in the list, arguments can be used to customize how to sort mylist.sort() print mylist mylist.reverse() # reverse the list print mylist Explanation: For more information on string functions: https://docs.python.org/2/library/stdtypes.html#string-methods Data Structures Lists The Python standard library does not have traditional C-style fixed-memory fixed-type arrays. Instead, lists are used and can contain a mix of any type. Lists are created with square brackets [] End of explanation mytuple = 'Tim', 'Tebow', 15 # Created with commas print mytuple print type(mytuple) print mytuple[1] # access an item mytuple[1] = "Winston" # results in error Explanation: For more information on Lists: https://docs.python.org/2/tutorial/datastructures.html#more-on-lists Tuples Python supports n-tuple sequences. 
These are non-mutable End of explanation schools = ['Florida', 'Florida State', 'Miami', 'Florida'] myset = set(schools) # the set is built from the schools list print myset print 'Georgia' in myset # membership test print 'Florida' in myset badschools = set(['Florida State', 'Miami']) print myset - badschools # set arithmetic print myset & badschools # AND print myset | set(['Miami', 'Stetson']) # OR print myset ^ set(['Miami', 'Stetson']) # XOR Explanation: Sets Python includes the set data structure which is an unordered collection with no duplicates End of explanation mydict = {'Florida' : 1, 'Georgia' : 2, 'Tennessee' : 3} print mydict print mydict['Florida'] # access the value with key = 'Florida' del mydict['Tennessee'] # funky syntax to delete a key, value pair print mydict mydict['Georgia'] = 7 # assignment print mydict mydict['Kentucky'] = 6 # you can append a new key print mydict print mydict.keys() # get a list of keys Explanation: Dictionaries Python supports dictionaries which can be thought of as an unordered list of key, value pairs. Keys can be any immutable type and are typically integers or strings. Values can be any object, even dictionaries. Dictionaries are created with curly braces {} End of explanation a = 2; b = 1; if a > b: print "a is greater than b" if b > a: print "b is greater than a" else: print "b is less than or equal to a" b = 2 if a > b: print "a is greater than b" elif a < b: print "a is less than b" else: print "a is equal to b" Explanation: Conditionals Python supports the standard if-else-if conditional expression End of explanation for x in range(10): # with one argument, range produces integers from 0 to 9 print x for y in range(5, 12): # with two argumentts, range produces integers from 5 to 11 print y for z in range(1, 12, 3): # with three arguments, range starts at 1 and goes in steps of 3 until greater than 12 print z for a in range(10, 1, -5): # can use a negative step size as well print a for b in range(2, 1, 1): # with a positive step, all values are less than 1. No integers are produced print b for c in range(1, 2, -1): # same goes for a negative step as all values are less than 2 print c Explanation: Loops Python supports for, foreach, and while loops For (counting) Traditional counting loops are accomplished in Python with a combination of the <code>for</code> key word and the <code>range</code> function End of explanation for i in ['foo', 'bar']: # iterate over a list of strings print i anotherdict = {'one' : 1, 'two' : 2, 'three' : 3} for key in anotherdict.keys(): # iterate over a dictionary. Order is not guaranteed print key, anotherdict[key] Explanation: Foreach As it turns out, counting loops are just foreach loops in Python. The <code>range</code> function returns a list of integers over which <code>for in</code> iterates. This can be extended to any other iterable type End of explanation a = 1; b = 4; c = 7; d = 5; while (a < b) and (c > d): # example of and condition print c - a a += 1 # example of incrementing c -= 1 # decrementing Explanation: While Python supports standard <code>while</code> loops End of explanation a = 1; b = 10 while True: # short circuit the while condition a *= 2 print a if a > b: break Explanation: Python does not have a construct for a do-while loop, though it can be accomplished using the <code>break</code> statement End of explanation def hello(): print "Hello there!" 
hello() def player(name, number): # use some arguments print "#" + str(number), name # cast number to a string when concatenating player("Kasey Hill", 0) Explanation: Functions Functions in Python do not have a distinction between those that do and do not return a value. If a value is returned, the type is not declared. Functions can be declared in any module without any distinction between static and non-static. Functions can even be declared within other functions The syntax is the following End of explanation def player(name, number, team = 'Florida'): # optional team argument print "#" + str(number), name, team player("Kasey Hill", 0) # no team argument supplied player("Aaron Harrison", 2, "Kentucky") # supplying all three arguments Explanation: Functions can have optional arguments if a default value is provided in the function signature End of explanation player(number = 23, name = 'Chris Walker') Explanation: Python functions can be called using named arguments, instead of positional End of explanation args = ['Michael Frazier II', 20, 'Florida'] player(*args) # calling player with the dereferenced argument list Explanation: *args and **kwargs In Python, there is a special deferencing scheme that allows for defining and calling functions with argument lists or dictionaries. *args End of explanation def foo(*args): for someFoo in args: print someFoo foo('la', 'dee', 'da') # supports an arbitrary number of arguments Explanation: Argument lists can also be used in defining a function as such End of explanation kwargs = {'name' : 'Michael Frazier II', 'number' : 20} player(**kwargs) # calling player with the dereferenced kwargs dictionary. The team argument will be defaulted Explanation: **kwargs Similarly, we can define a dictionary of named parameters End of explanation def foo(**kwargs): for key in kwargs.keys(): print key, kwargs[key] foo(**kwargs) Explanation: Just as before, we can define a function taking an arbitrary dictionary End of explanation def sum(x,y): return x + y # return a single value print sum(1,2) def sum_and_product(x,y): return x + y, x * y # return two values mysum, myproduct = sum_and_product(1,2) print mysum, myproduct Explanation: return In Python functions, an arbitrary number of values can be returned End of explanation def download_file(url, local_filename): import requests # stream = True allows downloading of large files; prevents loading entire file into memory r = requests.get(url, stream = True) with open(local_filename, 'wb') as f: for chunk in r.iter_content(chunk_size=1024): if chunk: # filter out keep-alive new chunks f.write(chunk) f.flush() Explanation: Data Science Tutorial Now that we've covered some Python basics, we will begin a tutorial going through many tasks a data scientist may perform. We will obtain real world data and go through the process of auditing, analyzing, visualing, and building classifiers from the data. We will use a database of breast cancer data obtained from the University of Wisconsin Hospitals, Madison from Dr. William H. Wolberg. The data is a collection of samples from Dr. Wolberg's clinical cases with attributes pertaining to tumors and a class labeling the sample as benign or malignant. | Attribute | Domain | |--------------------------------|---------------------------------| | 1. Sample code number | id number | | 2. Clump Thickness | 1 - 10 | | 3. Uniformity of Cell Size | 1 - 10 | | 4. Uniformity of Cell Shape | 1 - 10 | | 5. Marginal Adhesion | 1 - 10 | | 6. Single Epithelial Cell Size | 1 - 10 | | 7. 
Bare Nuclei | 1 - 10 | | 8. Bland Chromatin | 1 - 10 | | 9. Normal Nucleoli | 1 - 10 | | 10. Mitoses | 1 - 10 | | 11. Class | (2 for benign, 4 for malignant) | For more information on this data set: https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29 Obtaining the Data Lets begin by programmatically obtaining the data. Here I'll define a function we can use to make HTTP requests and download the data End of explanation url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/breast-cancer-wisconsin.data' filename = 'breast-cancer-wisconsin.csv' Explanation: Now we'll specify the url of the file and the file name we will save to End of explanation download_file(url, filename) Explanation: And make a call to <code>download_file</code> End of explanation import pandas as pd # import the module and alias it as pd cancer_data = pd.read_csv('breast-cancer-wisconsin.csv') cancer_data.head() # show the first few rows of the data Explanation: Now this might seem like overkill for downloading a single, small csv file, but we can use this same function to access countless APIs available on the World Wide Web by building an API request in the url. Wrangling the Data Now that we have some data, lets get it into a useful form. For this task we will use a package called pandas. pandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools for Python. The most fundamental data structure in pandas is the dataframe, which is similar to the data.frame data structure found in the R statistical programming language. For more information: http://pandas.pydata.org pandas dataframes are a 2-dimensional labeled data structures with columns of potentially different types. Dataframes can be thought of as similar to a spreadsheet or SQL table. There are numerous ways to build a dataframe with pandas. Since we have already attained a csv file, we can use a parser built into pandas called <code>read_csv</code> which will read the contents of a csv file directly into a data frame. For more information: http://pandas.pydata.org/pandas-docs/dev/generated/pandas.io.parsers.read_csv.html End of explanation # \ allows multi line wrapping cancer_header = [ \ 'sample_code_number', \ 'clump_thickness', \ 'uniformity_cell_size', \ 'uniformity_cell_shape', \ 'marginal_adhesion', \ 'single_epithelial_cell_size', \ 'bare_nuclei', \ 'bland_chromatin', \ 'normal_nucleoli', \ 'mitoses', \ 'class'] Explanation: Whoops, looks like our csv file did not contain a header row. <code>read_csv</code> assumes the first row of the csv is the header by default. Lets check out the file located here: https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/breast-cancer-wisconsin.names This contains information about the data set including the names of the attributes. Lets create a list of these attribute names to use when reading the csv file End of explanation cancer_data = pd.read_csv('breast-cancer-wisconsin.csv', header=None, names=cancer_header) cancer_data.head() Explanation: Lets try the import again, this time specifying the names. 
When specifying names, the <code>read_csv</code> function requires us to set the <code>header</code> row number to <code>None</code> End of explanation cancer_data["clump_thickness"].describe() Explanation: Lets take a look at some simple statistics for the clump_thickness column End of explanation cancer_data["bare_nuclei"].describe() Explanation: Referring to the documentation link above about the data, the count, range of values (min = 1, max = 10), and data type (dtype = float64) look correct. Lets take a look at another column, this time bare_nuclei End of explanation cancer_data["bare_nuclei"].unique() Explanation: Well at least the count is correct. We were expecting no more than 10 unique values and now the data type is an object. Whats up with our data? We have arrived at arguably the most important part of performing data science: dealing with messy data. One of most important tools in a data scientist's toolbox is the ability to audit, clean, and reshape data. The real world is full of messy data and your sources may not always have data in the exact format you desire. In this case we are working with csv data, which is a relatively straightforward format, but this will not always be the case when performing real world data science. Data comes in all varieties from csv all the way to something as unstructured as a collection of emails or documents. A data scientist must be versed in a wide variety of technologies and methodologies in order to be successful. Now, lets do a little bit of digging into why were are not getting a numeric pandas column End of explanation cancer_data["bare_nuclei"] = cancer_data["bare_nuclei"].convert_objects(convert_numeric=True) Explanation: Using <code>unique</code> we can see that '?' is one of the distinct values that appears in this series. Looking again at the documentation for this data set, we find the following: Missing attribute values: 16 There are 16 instances in Groups 1 to 6 that contain a single missing (i.e., unavailable) attribute value, now denoted by "?". It was so nice of them to tell us to expect these missing values, but as a data scientist that will almost never be the case. Lets see what we can do with these missing values. End of explanation cancer_data["bare_nuclei"].unique() Explanation: Here we have attempted to convert the bare_nuclei series to a numeric type. Lets see what the unique values are now. End of explanation cancer_data.fillna(cancer_data.mean().round(), inplace=True) cancer_data["bare_nuclei"].unique() Explanation: The decimal point after each number means that it is an integer value being represented by a floating point number. Now instead of our pesky '?' we have <code>nan</code> (not a number). <code>nan</code> is a construct used by pandas to represent the absence of value. It is a data type that comes from the package numpy, used internally by pandas, and is not part of the standard Python library. Now that we have <code>nan</code> values in place of '?', we can use some nice features in pandas to deal with these missing values. What we are about to do is what is called "imputing" or providing a replacement for missing values so the data set becomes easier to work with. There are a number of strategies for imputing missing values, all with their own pitfalls. In general, imputation introduces some degree of bias to the data, so the imputation strategy taken should be in an attempt to minimize that bias. Here, we will simply use the mean of all of the non-nan values in the series as a replacement. 
Since we already know that the data is integer in possible values, we will round the mean to the nearest whole number. End of explanation cancer_data.mean().round() Explanation: <code>fillna</code> is a dataframe function that replaces all nan values with either a scalar value, a series of values with the same indices as found in the dataframe, or a dataframe that is indexed by the columns of the target dataframe. <code>cancer_data.mean().round()</code> will take the mean of each column (this computation ignores the currently present nan values), then round, and return a dataframe indexed by the columns of the original dataframe: End of explanation cancer_data = pd.read_csv('breast-cancer-wisconsin.csv', header=None, names=cancer_header) cancer_data = cancer_data.convert_objects(convert_numeric=True) cancer_data.fillna(cancer_data.mean().round(), inplace=True) cancer_data["bare_nuclei"].describe() cancer_data["bare_nuclei"].unique() Explanation: <code>inplace=True</code> allows us to make this modification directly on the dataframe, without having to do any assignment. Now that we have figured out how to impute these missing values in a single column, lets start over and quickly apply this technique to the entire dataframe. End of explanation cancer_data.describe() Explanation: Structurally, Pandas dataframes are a collection of Series objects sharing a common index. In general, the Series object and Dataframe object share a large number of functions with some behavioral differences. In other words, whatever computation you can do on a single column can generally be applied to the entire dataframe. Now we can use the dataframe version of <code>describe</code> to get an overview of all of our data End of explanation # The following line is NOT Python code, but a special syntax for enabling inline plotting in IPython %matplotlib inline from ggplot import * import warnings # ggplot usage of pandas throws a future warning warnings.filterwarnings('ignore') Explanation: Visualizing the Data Another important tool in the data scientist's toolbox is the ability to create visualizations from data. Visualizing data is often the most logical place to start getting a deeper intuition of the data. This intuition will shape and drive your analysis. Even more important than visualizing data for your own personal benefit, it is often the job of the data scientist to use the data to tell a story. Creating illustrative visuals that succinctly convey an idea are the best way to tell that story, especially to stakeholders with less technical skillsets. Here we will be using a Python package called ggplot (https://ggplot.yhathq.com). The ggplot package is an attempt to bring visuals following the guidelines outlayed in the grammar of graphics (http://vita.had.co.nz/papers/layered-grammar.html) to Python. It is based off of and intended to mimic the features of the ggplot2 library found in R. Additionally, ggplot is designed to work with Pandas dataframes, making things nice and simple. We'll start by doing a bit of setup End of explanation plt = ggplot(aes(x = 'clump_thickness'), data = cancer_data) + \ geom_histogram(binwidth = 1, fill = 'steelblue') # using print gets the plot to show up here within the notebook. # In normal Python environment without using print, the plot appears in a window print plt Explanation: So we enabled plotting in IPython and imported everything from the ggplot package. 
Now we'll create a plot and then break down the components End of explanation plt = ggplot(aes(x = 'clump_thickness'), data = cancer_data) + \ geom_histogram(binwidth = 1, fill = 'steelblue') + \ geom_vline(xintercept = [cancer_data['clump_thickness'].mean()], linetype='dashed') print plt Explanation: A plot begins with the <code>ggplot</code> function. Here, we pass in the cancer_data pandas dataframe and a special function called <code>aes</code> (short for aesthetic). The values provided to <code>aes</code> change depending on which type of plot is being used. Here we are going to make a histogram from the clump_thickness column in cancer_data, so that column name needs to be passed as the x parameter to <code>aes</code>. The grammar of graphics is based off of a concept of "geoms" (short for geometric objects). These geoms provide granular control of the plot and are progressively added to the base call to <code>ggplot</code> with + syntax. Lets say we wanted to show the mean clump_thickness on this plot. We could do something like the following End of explanation plt = ggplot(aes(x = 'uniformity_cell_shape', y = 'bare_nuclei'), data = cancer_data) + \ geom_point() print plt Explanation: As you can see, each geom has its own set of parameters specific to the appearance of that geom (also called aesthetics). Lets try a scatter plot to get some multi-variable action End of explanation plt = ggplot(aes(x = 'uniformity_cell_shape', y = 'bare_nuclei'), data = cancer_data) + \ geom_point(position = 'jitter') print plt Explanation: Sometimes when working with integer data, or data that takes on a limited range of values, it is easier to visualize the plot with added jitter to the points. We can do that by adding an aesthetic to <code>geom_point</code>. End of explanation plt = ggplot(aes(x = 'uniformity_cell_shape', y = 'bare_nuclei', color = 'class'), data = cancer_data) + \ geom_point(position = 'jitter') print plt Explanation: With a simple aesthetic addition, we can see how these two variables play into our cancer classification End of explanation plt = ggplot(aes(x = 'uniformity_cell_shape', y = 'bare_nuclei', color = 'class'), data = cancer_data) + \ geom_point(position = 'jitter') + \ ggtitle("The Effect of the Bare Nuclei and Cell Shape Uniformity on Classification") + \ ylab("Amount of Bare Nuclei") + \ xlab("Uniformity in Cell shape") print plt Explanation: By adding <code>color = 'class'</code> as a parameter to the aes function, we now give a color to each unique value found in that column and automatically get a legend. Remember, 2 is benign and 4 is malignant. We can also do things such as add a title or change the axis labeling with geoms End of explanation plt = ggplot(aes(x = 'uniformity_cell_shape', y = 'bare_nuclei'), data = cancer_data) + \ geom_point(position = 'jitter') + \ ggtitle("The Effect of the Bare Nuclei and Cell Shape Uniformity on Classification") + \ facet_grid('class') print plt Explanation: There is definitely some patterning going on in that plot. A slightly different way to convey this idea is to use faceting. 
Faceting is the creation of multiple related plots arranged by the values of a given faceted variable End of explanation plt = ggplot(aes(x = 'uniformity_cell_shape', y = 'bare_nuclei', color = 'class'), data = cancer_data) + \ geom_point(position = 'jitter') + \ ggtitle("The Effect of the Bare Nuclei and Cell Shape Uniformity on Classification") + \ facet_grid('clump_thickness', 'marginal_adhesion') print plt Explanation: Rather than set the color equal to the class, we have created two plots based off of the class. With a facet, we can get very detailed. Lets throw some more variables into the mix End of explanation cancer_features = ['uniformity_cell_shape', 'bare_nuclei'] Explanation: Unfortunately, legends for faceting are not yet implemented in the Python ggplot package. In this example we faceted on the x-axis with clump_thickness and along the y-axis with marginal_adhesion, then created 100 plots of uniformity_cell_shape vs. bare_nuclei effect on class. I highly encourage you to check out https://ggplot.yhathq.com/docs/index.html to see all of the available geoms. The best way to learn is to play with and visualize the data with many different plots and aesthetics. Machine Learning So now that we've acquired, audited, cleaned, and visualized our data, we have arrived at machine learning. By formal definition from Tom Mitchell: A computer program is said to learn from experience E with respect to some task T and some performance measure P if its performance on T as measured by P improves with experience E. Okay, that's a bit ridiculous. Essentially machine learning is the science of building algorithms that learn from data in order to make predictions about the data. There are two main classes of machine learning: supervised and unsupervised. In supervised learning, an algorithm will use the features of the data given to make a prediction about a known label. For example, we will use supervised learning here to take features such as bare_nuclei and uniformity_cell_shape and predict a tumor class (benign or malignant). This type of machine learning is called supervised because the class labels (benign or malignant) are a known quantity during learning, so we are supervising the algorithm with the "correct" answer. In unsupervised learning, an algorithm will use the features of the data to discover what types of labels there could be. The "correct" answer is not known. In this session we will be mostly focused on supervised learning as we attempt to predict whether a tumor is benign or malignant. We will also be focused on doing some practical machine learning, and will gloss over the algorithmic details. The first thing we have to do is to extract the class labels and features from <code>cancer_data</code> and store them as separate arrays. In our first classifier we will only choose two features from <code>cancer_data</code> to keep things simple End of explanation labels = cancer_data['class'].values features = cancer_data[cancer_features].values Explanation: Here we call <code>values</code> on the dataframe to extract the values stored in the dataframe as an array of numpy arrays with the same dimensions as our subsetted dataframe. Numpy is a powerful, high performance scientific computing package that implements arrays. It is used internally by pandas.
We will use <code>labels</code> and <code>features</code> later on in our machine learning classifier End of explanation from sklearn.cross_validation import train_test_split features_train, features_test, labels_train, labels_test = train_test_split(features, labels, test_size = 0.3, random_state = 42) Explanation: An important concept in machine learning is to split the data set into training data and testing data. The machine learning algorithm will use the subset of training data to build a classifier to predict labels. We then test the accuracy of this classifier on the subset of testing data. This is done in order to prevent overfitting the classifier to one given set of data. Overfitting is a major concern in the design of machine learning algorithms. Conceptually, overfitting is when a classifier is really good at predicting the data used to build it, but isn't robust or general enough to predict new, yet unseen data all that well. To perform machine learning, we will use a package called scikit-learn (sklearn for short). The sklearn cross_validation module contains a function called <code>train_test_split</code> that will take in features and labels, and randomly select values into the training and testing subsets End of explanation from sklearn.tree import DecisionTreeClassifier Explanation: For this example, we will build a Decision Tree Classifier. The goal of a decision tree is to create a prediction by outlining a simple tree of decision rules. These rules are built from the training data by slicing the data on simple boundaries and trying to minimize the prediction error of that boundary. More details on decision trees can be found here: http://scikit-learn.org/stable/modules/tree.html The first step is to import the classifier from the <code>sklearn.tree</code> module. End of explanation clf = DecisionTreeClassifier() Explanation: Next, we create a variable to store the classifier End of explanation clf.fit(features_train, labels_train) Explanation: Then we have to fit the classifier to the training data. Both the training features (uniformity_cell_shape and bare_nuclei) and the labels (benign vs. malignant) are passed to the fit function End of explanation print "Accuracy score:", clf.score(features_test,labels_test) Explanation: The classifier is now ready to make some predictions. We can use the score function to see how accurate the classifier is on the test data. The score function will take the data in <code>features_test</code>, make a prediction of benign or malignant based on the decision tree that was fit to the training data, and compare that prediction to the true values in <code>labels_test</code> End of explanation from class_vis import prettyPicture # helper class prettyPicture(clf, features_test, labels_test) Explanation: Nearly all classifiers, decision trees included, will have parameters that can be tuned to build a more accurate model. Without any parameter tuning and using just two features we have made a pretty accurate prediction. Good job! To get a better idea of what is going on, I have included a helper function to plot our test data along with the decision boundary End of explanation
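As a hedged follow-up to that last point about tunable parameters, the sketch below shows one way the tree could be constrained to reduce overfitting. The values of min_samples_split and max_depth are arbitrary illustrations, not recommendations; the best settings depend on the data and are usually found by cross-validation.

# hypothetical tuning example: a shallower tree with larger leaves generalizes more smoothly
tuned_clf = DecisionTreeClassifier(min_samples_split=50, max_depth=4)
tuned_clf.fit(features_train, labels_train)
print "Tuned accuracy score:", tuned_clf.score(features_test, labels_test)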
Given the following text description, write Python code to implement the functionality described below step by step Description: Copyright 2018 The TensorFlow Authors. Step1: はじめてのニューラルネットワーク:分類問題の初歩 <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https Step2: ファッションMNISTデータセットのロード このガイドでは、Fashion MNISTを使用します。Fashion MNISTには10カテゴリーの白黒画像70,000枚が含まれています。それぞれは下図のような1枚に付き1種類の衣料品が写っている低解像度(28×28ピクセル)の画像です。 <table> <tr><td> <img src="https Step3: ロードしたデータセットは、NumPy配列になります。 train_images と train_labels の2つの配列は、モデルの訓練に使用される訓練用データセットです。 訓練されたモデルは、 test_images と test_labels 配列からなるテスト用データセットを使ってテストします。 画像は28×28のNumPy配列から構成されています。それぞれのピクセルの値は0から255の間の整数です。ラベル(label)は、0から9までの整数の配列です。それぞれの数字が下表のように、衣料品のクラス(class)に対応しています。 <table> <tr> <th>Label</th> <th>Class</th> </tr> <tr> <td>0</td> <td>T-shirt/top</td> </tr> <tr> <td>1</td> <td>Trouser</td> </tr> <tr> <td>2</td> <td>Pullover</td> </tr> <tr> <td>3</td> <td>Dress</td> </tr> <tr> <td>4</td> <td>Coat</td> </tr> <tr> <td>5</td> <td>Sandal</td> </tr> <tr> <td>6</td> <td>Shirt</td> </tr> <tr> <td>7</td> <td>Sneaker</td> </tr> <tr> <td>8</td> <td>Bag</td> </tr> <tr> <td>9</td> <td>Ankle boot</td> </tr> </table> 画像はそれぞれ単一のラベルに分類されます。データセットには上記のクラス名が含まれていないため、後ほど画像を出力するときのために、クラス名を保存しておきます。 Step4: データの観察 モデルの訓練を行う前に、データセットのフォーマットを見てみましょう。下記のように、訓練用データセットには28×28ピクセルの画像が60,000枚含まれています。 Step5: 同様に、訓練用データセットには60,000個のラベルが含まれます。 Step6: ラベルはそれぞれ、0から9までの間の整数です。 Step7: テスト用データセットには、10,000枚の画像が含まれます。画像は28×28ピクセルで構成されています。 Step8: テスト用データセットには10,000個のラベルが含まれます。 Step9: データの前処理 ネットワークを訓練する前に、データを前処理する必要があります。最初の画像を調べてみればわかるように、ピクセルの値は0から255の間の数値です。 Step10: ニューラルネットワークにデータを投入する前に、これらの値を0から1までの範囲にスケールします。そのためには、画素の値を255で割ります。 訓練用データセットとテスト用データセットは、同じように前処理することが重要です。 Step11: 訓練用データセットの最初の25枚の画像を、クラス名付きで表示してみましょう。ネットワークを構築・訓練する前に、データが正しいフォーマットになっていることを確認します。 Step12: モデルの構築 ニューラルネットワークを構築するには、まずモデルの階層を定義し、その後モデルをコンパイルします。 層の設定 ニューラルネットワークを形作る基本的な構成要素は層(layer)です。層は、入力されたデータから「表現」を抽出します。それらの「表現」は、今取り組もうとしている問題に対して、より「意味のある」ものであることが期待されます。 ディープラーニングモデルのほとんどは、単純な層の積み重ねで構成されています。tf.keras.layers.Dense のような層のほとんどには、訓練中に学習されるパラメータが存在します。 Step13: このネットワークの最初の層は、tf.keras.layers.Flatten です。この層は、画像を(28×28ピクセルの)2次元配列から、28×28=784ピクセルの、1次元配列に変換します。この層が、画像の中に積まれているピクセルの行を取り崩し、横に並べると考えてください。この層には学習すべきパラメータはなく、ただデータのフォーマット変換を行うだけです。 ピクセルが1次元化されたあと、ネットワークは2つの tf.keras.layers.Dense 層となります。これらの層は、密結合あるいは全結合されたニューロンの層となります。最初の Dense 層には、128個のノード(あるはニューロン)があります。最後の層でもある2番めの層は、10ノードのsoftmax層です。この層は、合計が1になる10個の確率の配列を返します。それぞれのノードは、今見ている画像が10個のクラスのひとつひとつに属する確率を出力します。 モデルのコンパイル モデルが訓練できるようになるには、いくつかの設定を追加する必要があります。それらの設定は、モデルのコンパイル(compile)時に追加されます。 損失関数(loss function) —訓練中にモデルがどれくらい正確かを測定します。この関数の値を最小化することにより、訓練中のモデルを正しい方向に向かわせようというわけです。 オプティマイザ(optimizer)—モデルが見ているデータと、損失関数の値から、どのようにモデルを更新するかを決定します。 メトリクス(metrics) —訓練とテストのステップを監視するのに使用します。下記の例ではaccuracy (正解率)、つまり、画像が正しく分類された比率を使用しています。 Step14: モデルの訓練 ニューラルネットワークの訓練には次のようなステップが必要です。 モデルに訓練用データを投入します—この例では train_images と train_labels の2つの配列です。 モデルは、画像とラベルの対応関係を学習します。 モデルにテスト用データセットの予測(分類)を行わせます—この例では test_images 配列です。その後、予測結果と test_labels 配列を照合します。 訓練を開始するには、model.fit メソッドを呼び出します。モデルを訓練用データに "fit"(適合)させるという意味です。 Step15: モデルの訓練の進行とともに、損失値と正解率が表示されます。このモデルの場合、訓練用データでは0.88(すなわち88%)の正解率に達します。 正解率の評価 次に、テスト用データセットに対するモデルの性能を比較します。 Step16: ご覧の通り、テスト用データセットでの正解率は、訓練用データセットでの正解率よりも少し低くなります。この訓練時の正解率とテスト時の正解率の差は、過学習(over fitting)の一例です。過学習とは、新しいデータに対する機械学習モデルの性能が、訓練時と比較して低下する現象です。 予測する モデルの訓練が終わったら、そのモデルを使って画像の分類予測を行うことが出来ます。 Step17: これは、モデルがテスト用データセットの画像のひとつひとつを分類予測した結果です。最初の予測を見てみましょう。 Step18: 
予測結果は、10個の数字の配列です。これは、その画像が10の衣料品の種類のそれぞれに該当するかの「確信度」を表しています。どのラベルが一番確信度が高いかを見てみましょう。 Step19: というわけで、このモデルは、この画像が、アンクルブーツ、class_names[9] である可能性が最も高いと判断したことになります。これが正しいかどうか、テスト用ラベルを見てみましょう。 Step20: 10チャンネルすべてをグラフ化してみることができます。 Step21: 0番目の画像と、予測、予測配列を見てみましょう。 Step22: 予測の中のいくつかの画像を、予測値とともに表示してみましょう。正しい予測は青で、誤っている予測は赤でラベルを表示します。数字は予測したラベルのパーセント(100分率)を示します。自信があるように見えても間違っていることがあることに注意してください。 Step23: 最後に、訓練済みモデルを使って1枚の画像に対する予測を行います。 Step24: tf.keras モデルは、サンプルの中のバッチ(batch)あるいは「集まり」について予測を行うように作られています。そのため、1枚の画像を使う場合でも、リスト化する必要があります。 Step25: そして、予測を行います。 Step26: model.predict メソッドの戻り値は、リストのリストです。リストの要素のそれぞれが、バッチの中の画像に対応します。バッチの中から、(といってもバッチの中身は1つだけですが)予測を取り出します。
Python Code: #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. #@title MIT License # # Copyright (c) 2017 François Chollet # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the "Software"), # to deal in the Software without restriction, including without limitation # the rights to use, copy, modify, merge, publish, distribute, sublicense, # and/or sell copies of the Software, and to permit persons to whom the # Software is furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL # THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER # DEALINGS IN THE SOFTWARE. Explanation: Copyright 2018 The TensorFlow Authors. End of explanation # TensorFlow と tf.keras のインポート import tensorflow.compat.v1 as tf from tensorflow import keras # ヘルパーライブラリのインポート import numpy as np import matplotlib.pyplot as plt print(tf.__version__) Explanation: はじめてのニューラルネットワーク:分類問題の初歩 <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/r1/tutorials/keras/basic_classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/r1/tutorials/keras/basic_classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> </table> Note: これらのドキュメントは私たちTensorFlowコミュニティが翻訳したものです。コミュニティによる 翻訳はベストエフォートであるため、この翻訳が正確であることや英語の公式ドキュメントの 最新の状態を反映したものであることを保証することはできません。 この翻訳の品質を向上させるためのご意見をお持ちの方は、GitHubリポジトリtensorflow/docsにプルリクエストをお送りください。 コミュニティによる翻訳やレビューに参加していただける方は、 [email protected] メーリングリストにご連絡ください。 このガイドでは、スニーカーやシャツなど、身に着けるものの写真を分類するニューラルネットワークのモデルを訓練します。すべての詳細を理解できなくても問題ありません。TensorFlowの全体を早足で掴むためのもので、詳細についてはあとから見ていくことになります。 このガイドでは、TensorFlowのモデルを構築し訓練するためのハイレベルのAPIである tf.kerasを使用します。 End of explanation fashion_mnist = keras.datasets.fashion_mnist (train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data() Explanation: ファッションMNISTデータセットのロード このガイドでは、Fashion MNISTを使用します。Fashion MNISTには10カテゴリーの白黒画像70,000枚が含まれています。それぞれは下図のような1枚に付き1種類の衣料品が写っている低解像度(28×28ピクセル)の画像です。 <table> <tr><td> <img src="https://tensorflow.org/images/fashion-mnist-sprite.png" alt="Fashion MNIST sprite" width="600"> </td></tr> <tr><td align="center"> <b>Figure 1.</b> <a href="https://github.com/zalandoresearch/fashion-mnist">Fashion-MNIST samples</a> (by 
Zalando, MIT License).<br/>&nbsp; </td></tr> </table> Fashion MNISTは、画像処理のための機械学習での"Hello, World"としてしばしば登場するMNIST データセットの代替として開発されたものです。MNISTデータセットは手書きの数字(0, 1, 2 など)から構成されており、そのフォーマットはこれから使うFashion MNISTと全く同じです。 Fashion MNISTを使うのは、目先を変える意味もありますが、普通のMNISTよりも少しだけ手応えがあるからでもあります。どちらのデータセットも比較的小さく、アルゴリズムが期待したとおりに機能するかどうかを確かめるために使われます。プログラムのテストやデバッグのためには、よい出発点になります。 ここでは、60,000枚の画像を訓練に、10,000枚の画像を、ネットワークが学習した画像分類の正確性を評価するのに使います。TensorFlowを使うと、下記のようにFashion MNISTのデータを簡単にインポートし、ロードすることが出来ます。 End of explanation class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot'] Explanation: ロードしたデータセットは、NumPy配列になります。 train_images と train_labels の2つの配列は、モデルの訓練に使用される訓練用データセットです。 訓練されたモデルは、 test_images と test_labels 配列からなるテスト用データセットを使ってテストします。 画像は28×28のNumPy配列から構成されています。それぞれのピクセルの値は0から255の間の整数です。ラベル(label)は、0から9までの整数の配列です。それぞれの数字が下表のように、衣料品のクラス(class)に対応しています。 <table> <tr> <th>Label</th> <th>Class</th> </tr> <tr> <td>0</td> <td>T-shirt/top</td> </tr> <tr> <td>1</td> <td>Trouser</td> </tr> <tr> <td>2</td> <td>Pullover</td> </tr> <tr> <td>3</td> <td>Dress</td> </tr> <tr> <td>4</td> <td>Coat</td> </tr> <tr> <td>5</td> <td>Sandal</td> </tr> <tr> <td>6</td> <td>Shirt</td> </tr> <tr> <td>7</td> <td>Sneaker</td> </tr> <tr> <td>8</td> <td>Bag</td> </tr> <tr> <td>9</td> <td>Ankle boot</td> </tr> </table> 画像はそれぞれ単一のラベルに分類されます。データセットには上記のクラス名が含まれていないため、後ほど画像を出力するときのために、クラス名を保存しておきます。 End of explanation train_images.shape Explanation: データの観察 モデルの訓練を行う前に、データセットのフォーマットを見てみましょう。下記のように、訓練用データセットには28×28ピクセルの画像が60,000枚含まれています。 End of explanation len(train_labels) Explanation: 同様に、訓練用データセットには60,000個のラベルが含まれます。 End of explanation train_labels Explanation: ラベルはそれぞれ、0から9までの間の整数です。 End of explanation test_images.shape Explanation: テスト用データセットには、10,000枚の画像が含まれます。画像は28×28ピクセルで構成されています。 End of explanation len(test_labels) Explanation: テスト用データセットには10,000個のラベルが含まれます。 End of explanation plt.figure() plt.imshow(train_images[0]) plt.colorbar() plt.gca().grid(False) plt.show() Explanation: データの前処理 ネットワークを訓練する前に、データを前処理する必要があります。最初の画像を調べてみればわかるように、ピクセルの値は0から255の間の数値です。 End of explanation train_images = train_images / 255.0 test_images = test_images / 255.0 Explanation: ニューラルネットワークにデータを投入する前に、これらの値を0から1までの範囲にスケールします。そのためには、画素の値を255で割ります。 訓練用データセットとテスト用データセットは、同じように前処理することが重要です。 End of explanation plt.figure(figsize=(10,10)) for i in range(25): plt.subplot(5,5,i+1) plt.xticks([]) plt.yticks([]) plt.grid(False) plt.imshow(train_images[i], cmap=plt.cm.binary) plt.xlabel(class_names[train_labels[i]]) plt.show() Explanation: 訓練用データセットの最初の25枚の画像を、クラス名付きで表示してみましょう。ネットワークを構築・訓練する前に、データが正しいフォーマットになっていることを確認します。 End of explanation model = keras.Sequential([ keras.layers.Flatten(input_shape=(28, 28)), keras.layers.Dense(128, activation=tf.nn.relu), keras.layers.Dense(10, activation=tf.nn.softmax) ]) Explanation: モデルの構築 ニューラルネットワークを構築するには、まずモデルの階層を定義し、その後モデルをコンパイルします。 層の設定 ニューラルネットワークを形作る基本的な構成要素は層(layer)です。層は、入力されたデータから「表現」を抽出します。それらの「表現」は、今取り組もうとしている問題に対して、より「意味のある」ものであることが期待されます。 ディープラーニングモデルのほとんどは、単純な層の積み重ねで構成されています。tf.keras.layers.Dense のような層のほとんどには、訓練中に学習されるパラメータが存在します。 End of explanation model.compile(optimizer=tf.keras.optimizers.Adam(), loss='sparse_categorical_crossentropy', metrics=['accuracy']) Explanation: このネットワークの最初の層は、tf.keras.layers.Flatten です。この層は、画像を(28×28ピクセルの)2次元配列から、28×28=784ピクセルの、1次元配列に変換します。この層が、画像の中に積まれているピクセルの行を取り崩し、横に並べると考えてください。この層には学習すべきパラメータはなく、ただデータのフォーマット変換を行うだけです。 ピクセルが1次元化されたあと、ネットワークは2つの tf.keras.layers.Dense 層となります。これらの層は、密結合あるいは全結合されたニューロンの層となります。最初の Dense 
層には、128個のノード(あるはニューロン)があります。最後の層でもある2番めの層は、10ノードのsoftmax層です。この層は、合計が1になる10個の確率の配列を返します。それぞれのノードは、今見ている画像が10個のクラスのひとつひとつに属する確率を出力します。 モデルのコンパイル モデルが訓練できるようになるには、いくつかの設定を追加する必要があります。それらの設定は、モデルのコンパイル(compile)時に追加されます。 損失関数(loss function) —訓練中にモデルがどれくらい正確かを測定します。この関数の値を最小化することにより、訓練中のモデルを正しい方向に向かわせようというわけです。 オプティマイザ(optimizer)—モデルが見ているデータと、損失関数の値から、どのようにモデルを更新するかを決定します。 メトリクス(metrics) —訓練とテストのステップを監視するのに使用します。下記の例ではaccuracy (正解率)、つまり、画像が正しく分類された比率を使用しています。 End of explanation model.fit(train_images, train_labels, epochs=5) Explanation: モデルの訓練 ニューラルネットワークの訓練には次のようなステップが必要です。 モデルに訓練用データを投入します—この例では train_images と train_labels の2つの配列です。 モデルは、画像とラベルの対応関係を学習します。 モデルにテスト用データセットの予測(分類)を行わせます—この例では test_images 配列です。その後、予測結果と test_labels 配列を照合します。 訓練を開始するには、model.fit メソッドを呼び出します。モデルを訓練用データに "fit"(適合)させるという意味です。 End of explanation test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2) print('Test accuracy:', test_acc) Explanation: モデルの訓練の進行とともに、損失値と正解率が表示されます。このモデルの場合、訓練用データでは0.88(すなわち88%)の正解率に達します。 正解率の評価 次に、テスト用データセットに対するモデルの性能を比較します。 End of explanation predictions = model.predict(test_images) Explanation: ご覧の通り、テスト用データセットでの正解率は、訓練用データセットでの正解率よりも少し低くなります。この訓練時の正解率とテスト時の正解率の差は、過学習(over fitting)の一例です。過学習とは、新しいデータに対する機械学習モデルの性能が、訓練時と比較して低下する現象です。 予測する モデルの訓練が終わったら、そのモデルを使って画像の分類予測を行うことが出来ます。 End of explanation predictions[0] Explanation: これは、モデルがテスト用データセットの画像のひとつひとつを分類予測した結果です。最初の予測を見てみましょう。 End of explanation np.argmax(predictions[0]) Explanation: 予測結果は、10個の数字の配列です。これは、その画像が10の衣料品の種類のそれぞれに該当するかの「確信度」を表しています。どのラベルが一番確信度が高いかを見てみましょう。 End of explanation test_labels[0] Explanation: というわけで、このモデルは、この画像が、アンクルブーツ、class_names[9] である可能性が最も高いと判断したことになります。これが正しいかどうか、テスト用ラベルを見てみましょう。 End of explanation def plot_image(i, predictions_array, true_label, img): predictions_array, true_label, img = predictions_array[i], true_label[i], img[i] plt.grid(False) plt.xticks([]) plt.yticks([]) plt.imshow(img, cmap=plt.cm.binary) predicted_label = np.argmax(predictions_array) if predicted_label == true_label: color = 'blue' else: color = 'red' plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label], 100*np.max(predictions_array), class_names[true_label]), color=color) def plot_value_array(i, predictions_array, true_label): predictions_array, true_label = predictions_array[i], true_label[i] plt.grid(False) plt.xticks([]) plt.yticks([]) thisplot = plt.bar(range(10), predictions_array, color="#777777") plt.ylim([0, 1]) predicted_label = np.argmax(predictions_array) thisplot[predicted_label].set_color('red') thisplot[true_label].set_color('blue') Explanation: 10チャンネルすべてをグラフ化してみることができます。 End of explanation i = 0 plt.figure(figsize=(6,3)) plt.subplot(1,2,1) plot_image(i, predictions, test_labels, test_images) plt.subplot(1,2,2) plot_value_array(i, predictions, test_labels) plt.show() i = 12 plt.figure(figsize=(6,3)) plt.subplot(1,2,1) plot_image(i, predictions, test_labels, test_images) plt.subplot(1,2,2) plot_value_array(i, predictions, test_labels) plt.show() Explanation: 0番目の画像と、予測、予測配列を見てみましょう。 End of explanation # X個のテスト画像、予測されたラベル、正解ラベルを表示します。 # 正しい予測は青で、間違った予測は赤で表示しています。 num_rows = 5 num_cols = 3 num_images = num_rows*num_cols plt.figure(figsize=(2*2*num_cols, 2*num_rows)) for i in range(num_images): plt.subplot(num_rows, 2*num_cols, 2*i+1) plot_image(i, predictions, test_labels, test_images) plt.subplot(num_rows, 2*num_cols, 2*i+2) plot_value_array(i, predictions, test_labels) plt.show() Explanation: 
予測の中のいくつかの画像を、予測値とともに表示してみましょう。正しい予測は青で、誤っている予測は赤でラベルを表示します。数字は予測したラベルのパーセント(100分率)を示します。自信があるように見えても間違っていることがあることに注意してください。 End of explanation # テスト用データセットから画像を1枚取り出す img = test_images[0] print(img.shape) Explanation: 最後に、訓練済みモデルを使って1枚の画像に対する予測を行います。 End of explanation # 画像を1枚だけのバッチのメンバーにする img = (np.expand_dims(img,0)) print(img.shape) Explanation: tf.keras モデルは、サンプルの中のバッチ(batch)あるいは「集まり」について予測を行うように作られています。そのため、1枚の画像を使う場合でも、リスト化する必要があります。 End of explanation predictions_single = model.predict(img) print(predictions_single) plot_value_array(0, predictions_single, test_labels) _ = plt.xticks(range(10), class_names, rotation=45) plt.show() Explanation: そして、予測を行います。 End of explanation prediction = predictions[0] np.argmax(prediction) Explanation: model.predict メソッドの戻り値は、リストのリストです。リストの要素のそれぞれが、バッチの中の画像に対応します。バッチの中から、(といってもバッチの中身は1つだけですが)予測を取り出します。 End of explanation
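As an optional addition that is not part of the original tutorial: the sketch below maps the single-image prediction back to a readable class name and saves the trained model for later reuse. The file name is an arbitrary example, and saving to HDF5 assumes the h5py package is installed.

# look up the human-readable class name for the single-image prediction predictions_single
predicted_class = class_names[np.argmax(predictions_single[0])]
print(predicted_class)

# save the trained model to disk and load it back later
model.save('fashion_mnist.h5')
reloaded_model = keras.models.load_model('fashion_mnist.h5')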
Given the following text description, write Python code to implement the functionality described below step by step Description: Some code playing with the Echonest API python wrapper Pythondocs Github API Overview Things you can do with the API Remix part of the API More examples with Remix Code for examples Resources for the Spotify web API Step1: Query a single song, get its audio features and make a dataframe Step2: Grab and compare the hottest tracks, available in Spotify, for 2 artists on a number of audio features
Python Code: from pyechonest import config, artist, song import pandas as pd config.ECHO_NEST_API_KEY = 'XXXXXXXX' #retrieved from https://developer.echonest.com/account/profile import numpy as np import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline Explanation: Some code playing with the Echonest API python wrapper Pythondocs Github API Overview Things you can do with the API Remix part of the API More examples with Remix Code for examples Resources for the Spotify web API: Python wrapper Meteor.js wrapper with links to Node.js and general client js wrappers End of explanation songs = song.search(title='Elastic Heart',artist='Sia',buckets='id:spotify',limit=True,results=1) elasticHeart = songs[0] elasticHeartFeatures = pd.DataFrame.from_dict(elasticHeart.audio_summary,orient='index') pd.DataFrame.from_dict([elasticHeart.audio_summary]) Explanation: Query a single song, get its audio features and make a dataframe End of explanation floHottest = song.search(artist = 'Flo Rida' ,sort = 'song_hotttnesss-desc', buckets = 'id:spotify', limit = True, results = 20) fsongFeatures = [] for song in floHottest: fsongFeatures.append(song.audio_summary) S= pd.DataFrame.from_dict(songFeatures) S.index = [song.title for song in siaHottest] S['hotness'] = [song.song_hotttnesss for song in siaHottest] F= pd.DataFrame.from_dict(fsongFeatures) F.index = [song.title for song in floHottest] F['hotness'] = [song.song_hotttnesss for song in floHottest] u,idx = np.unique(S.index,return_index=True) S = S.ix[idx,:] u,idx = np.unique(F.index,return_index=True) F = F.ix[idx,:] ax = pd.DataFrame({'Flo Rida':F.mean(), 'Sia': S.mean()}).plot(kind='bar',figsize=(18,6),rot=0, color = ['lightblue','salmon']); ax.set_title("Average Song Features for Artist's Hottest 20 tracks",fontsize=14); ax.tick_params(axis='x', labelsize=12) Elastic_Heart = siaHottest[5].get_tracks('spotify') Elastic_Heart[1] %%html <iframe src="https://embed.spotify.com/?uri=spotify:track:3yFdQkEQNzDwpPB1iIFtaM" width="300" height="380" frameborder="0" allowtransparency="true"></iframe> Explanation: Grab and compare the hottest tracks, available in Spotify, for 2 artists on a number of audio features End of explanation
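One optional refinement not in the original notebook: the audio_summary fields mix very different scales (tempo and loudness dwarf the 0-1 features such as danceability), so min-max rescaling each column before averaging can make the artist comparison chart easier to read. This sketch assumes the S and F dataframes built above; constant columns are left at zero to avoid division by zero.

def rescale(df):
    # min-max scale each column of audio features to the 0-1 range
    spread = (df.max() - df.min()).replace(0, 1)
    return (df - df.min()) / spread

ax = pd.DataFrame({'Flo Rida': rescale(F).mean(), 'Sia': rescale(S).mean()}).plot(kind='bar', figsize=(18, 6), rot=0, color=['lightblue', 'salmon'])
ax.set_title("Min-max scaled average features for each artist's hottest tracks", fontsize=14)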
6,463
Given the following text description, write Python code to implement the functionality described below step by step Description: Multi order stencil for the 2D/3D acoustic isotropic wave equation Step1: Forward and adjoint stencil 2D-3D is automatic from the setup
Python Code: # Choose dimension (2 or 3) dim = 2 # Choose order time_order = 6 space_order = 12 # half width for indexes, goes from -half to half width_t = int(time_order/2) width_h = int(space_order/2) # Define functions and symbols p=Function('p') s,h = symbols('s h') if dim==2: m=M(x,z) q=Q(x,z,t) d=D(x,z,t) solvep = p(x,z,t+width_t*s) solvepa = p(x,z,t-width_t*s) else : m=M(x,y,z) q=Q(x,y,z,t) d=D(x,y,z,t) solvep = p(x,y,z,t+width_t*s) solvepa = p(x,y,z,t-width_t*s) # Finite differences coefficients, not necessary here but good to have somewhere def fd_coeff_1(order): if order==16: coeffs = [0.000010, -0.000178, 0.001554, -0.008702, 0.035354, -0.113131, 0.311111, -0.888889, 0.000000, 0.888889, -0.311111, 0.113131, -0.035354, 0.008702, -0.001554, 0.000178, -0.000010] if order==14: coeffs = [-0.000042, 0.000680, -0.005303, 0.026515, -0.097222, 0.291667, -0.875000, 0.000000, 0.875000, -0.291667, 0.097222, -0.026515, 0.005303, -0.000680, 0.000042] if order==12: coeffs = [ 0.0002, -0.0026, 0.0179, -0.0794, 0.2679, -0.8571, -0.0000, 0.8571, -0.2679, 0.0794, -0.0179, 0.0026, -0.0002] if order==10: coeffs =[-0.0008 , 0.0099, -0.0595, 0.2381, -0.8333, 0.0000, 0.8333, -0.2381, 0.0595, -0.0099, 0.0008] if order==8: coeffs = [1.0/280, -4.0/105, 1.0/5, -4.0/5, 0, 4.0/5, -1.0/5,4.0/105,-1.0/280] if space_order==6: coeffs = [-1.0/60, 3.0/20, -3.0/4, 0, 3.0/4, -3.0/20, 1.0/60] if space_order==4: coeffs = [1.0/12, -2.0/3, 0, 2.0/3, -1.0/12] if space_order==2: coeffs = [-0.5, 0, 0.5] def fd_coeff_2(ordrer): if order==16: coeffs = [-0.000002, 0.000051, -0.000518, 0.003481, -0.017677, 0.075421, -0.311111, 1.777778, -3.054844, 1.777778, -0.311111, 0.075421, -0.017677, 0.003481, -0.000518, 0.000051, -0.000002] if order==14: coeffs = [0.000012, -0.000227, 0.002121, -0.013258, 0.064815, -0.291667, 1.750000, -3.023594, 1.750000, -0.291667, 0.064815, -0.013258, 0.002121, -0.000227, 0.000012] if order==12: coeffs = [-0.000060, 0.001039, -0.008929, 0.052910, -0.267857, 1.714286, -2.982778, 1.714286, -0.267857, 0.052910, -0.008929, 0.001039, -0.000060] if order==10: coeffs = [0.000317, -0.004960, 0.039683, -0.238095, 1.666667, -2.927222, 1.666667, -0.238095, 0.039683, -0.004960, 0.000317] if order==8: coeffs = [-0.001786, 0.025397, -0.200000, 1.600000, -2.847222, 1.600000, -0.200000, 0.025397, -0.001786] if space_order==6: coeffs = [0.011111, -0.150000, 1.500000, -2.722222, 1.500000, -0.150000, 0.01111] if space_order==4: coeffs = [-0.083333, 1.333333, -2.500000, 1.333333, -0.08333] if space_order==2: coeffs = [1, -2, 1] # Indexes for finite differences indx = [] indy = [] indz = [] indt = [] for i in range(-width_h,width_h+1): indx.append(x + i * h) indy.append(y + i * h) indz.append(z + i* h) for i in range(-width_t,width_t+1): indt.append(t + i * s) # Finite differences if dim==2: dtt=as_finite_diff(p(x,z,t).diff(t,t),indt) dxx=as_finite_diff(p(x,z,t).diff(x,x), indx) dzz=as_finite_diff(p(x,z,t).diff(z,z), indz) dt=as_finite_diff(p(x,z,t).diff(t), indt) lap = dxx + dzz else: dtt=as_finite_diff(p(x,y,z,t).diff(t,t),indt) dxx=as_finite_diff(p(x,y,z,t).diff(x,x), indx) dyy=as_finite_diff(p(x,y,z,t).diff(y,y), indy) dzz=as_finite_diff(p(x,y,z,t).diff(z,z), indz) dt=as_finite_diff(p(x,y,z,t).diff(t), indt) lap = dxx + dyy + dzz # Argument list for lambdify arglamb=[] arglamba=[] if dim==2: for i in range(-width_t,width_t): arglamb.append( p(x,z,indt[i+width_t])) arglamba.append( p(x,z,indt[i+width_t+1])) for i in range(-width_h,width_h+1): for j in range(-width_h,width_h+1): arglamb.append( 
p(indx[i+width_h],indz[j+width_h],t)) arglamba.append( p(indx[i+width_h],indz[j+width_h],t)) else: for i in range(-width_t,width_t): arglamb.append( p(x,y,z,indt[i+width_t])) arglamba.append( p(x,y,z,indt[i+width_t+1])) for i in range(-width_h,width_h+1): for j in range(-width_h,width_h+1): for k in range(-width_h,width_h+1): arglamb.append( p(indx[i+width_h],indy[i+width_h],indz[j+width_h],t)) arglamba.append( p(indx[i+width_h],indy[i+width_h],indz[j+width_h],t)) arglamb.extend((q , m, s, h, e)) arglamb=tuple(arglamb) arglamba.extend((q , m, s, h, e)) arglamba=tuple(arglamba) solvepa Explanation: Multi order stencil for the 2D/3D acoustic isotropic wave equation End of explanation # Forward wave equation wave_equation = m*dtt- lap- q + e*dt stencil = solve(wave_equation,solvep)[0] ts=lambdify(arglamb,stencil,"numpy") stencil # Adjoint wave equation wave_equationA = m*dtt- lap - d - e*dt stencilA = solve(wave_equationA,solvepa)[0] tsA=lambdify(arglamba,stencilA,"numpy") stencilA Explanation: Forward and adjoint stencil 2D-3D is automatic from the setup End of explanation
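The stencil snippet above assumes that the symbols x, y, z, t, e and the functions M, Q, D were defined in an earlier, unshown sympy cell. As a self-contained illustration of the same solve-then-lambdify pattern, here is a minimal 1D sketch of my own (an assumption-laden toy example, not from the original notebook):
from sympy import symbols, Function, solve, lambdify
x, t, s, h, c = symbols('x t s h c')
u = Function('u')
# second-order differences in time and space for u_tt = c**2 * u_xx
dtt = (u(x, t + s) - 2*u(x, t) + u(x, t - s)) / s**2
dxx = (u(x + h, t) - 2*u(x, t) + u(x - h, t)) / h**2
stencil_1d = solve(dtt - c**2 * dxx, u(x, t + s))[0]  # solve for the next time level
step = lambdify((u(x, t - s), u(x, t), u(x - h, t), u(x + h, t), s, h, c), stencil_1d, "numpy")
print(step(0.0, 1.0, 0.5, 0.5, 0.001, 0.01, 1.5))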
6,464
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Aerosol MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Software Properties 3. Key Properties --&gt; Timestep Framework 4. Key Properties --&gt; Meteorological Forcings 5. Key Properties --&gt; Resolution 6. Key Properties --&gt; Tuning Applied 7. Transport 8. Emissions 9. Concentrations 10. Optical Radiative Properties 11. Optical Radiative Properties --&gt; Absorption 12. Optical Radiative Properties --&gt; Mixtures 13. Optical Radiative Properties --&gt; Impact Of H2o 14. Optical Radiative Properties --&gt; Radiative Scheme 15. Optical Radiative Properties --&gt; Cloud Interactions 16. Model 1. Key Properties Key properties of the aerosol model 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Scheme Scope Is Required Step7: 1.4. Basic Approximations Is Required Step8: 1.5. Prognostic Variables Form Is Required Step9: 1.6. Number Of Tracers Is Required Step10: 1.7. Family Approach Is Required Step11: 2. Key Properties --&gt; Software Properties Software properties of aerosol code 2.1. Repository Is Required Step12: 2.2. Code Version Is Required Step13: 2.3. Code Languages Is Required Step14: 3. Key Properties --&gt; Timestep Framework Physical properties of seawater in ocean 3.1. Method Is Required Step15: 3.2. Split Operator Advection Timestep Is Required Step16: 3.3. Split Operator Physical Timestep Is Required Step17: 3.4. Integrated Timestep Is Required Step18: 3.5. Integrated Scheme Type Is Required Step19: 4. Key Properties --&gt; Meteorological Forcings ** 4.1. Variables 3D Is Required Step20: 4.2. Variables 2D Is Required Step21: 4.3. Frequency Is Required Step22: 5. Key Properties --&gt; Resolution Resolution in the aersosol model grid 5.1. Name Is Required Step23: 5.2. Canonical Horizontal Resolution Is Required Step24: 5.3. Number Of Horizontal Gridpoints Is Required Step25: 5.4. Number Of Vertical Levels Is Required Step26: 5.5. Is Adaptive Grid Is Required Step27: 6. Key Properties --&gt; Tuning Applied Tuning methodology for aerosol model 6.1. Description Is Required Step28: 6.2. Global Mean Metrics Used Is Required Step29: 6.3. Regional Metrics Used Is Required Step30: 6.4. Trend Metrics Used Is Required Step31: 7. Transport Aerosol transport 7.1. Overview Is Required Step32: 7.2. Scheme Is Required Step33: 7.3. Mass Conservation Scheme Is Required Step34: 7.4. Convention Is Required Step35: 8. Emissions Atmospheric aerosol emissions 8.1. Overview Is Required Step36: 8.2. Method Is Required Step37: 8.3. Sources Is Required Step38: 8.4. Prescribed Climatology Is Required Step39: 8.5. Prescribed Climatology Emitted Species Is Required Step40: 8.6. Prescribed Spatially Uniform Emitted Species Is Required Step41: 8.7. Interactive Emitted Species Is Required Step42: 8.8. Other Emitted Species Is Required Step43: 8.9. Other Method Characteristics Is Required Step44: 9. Concentrations Atmospheric aerosol concentrations 9.1. Overview Is Required Step45: 9.2. Prescribed Lower Boundary Is Required Step46: 9.3. Prescribed Upper Boundary Is Required Step47: 9.4. Prescribed Fields Mmr Is Required Step48: 9.5. Prescribed Fields Mmr Is Required Step49: 10. 
Optical Radiative Properties Aerosol optical and radiative properties 10.1. Overview Is Required Step50: 11. Optical Radiative Properties --&gt; Absorption Absortion properties in aerosol scheme 11.1. Black Carbon Is Required Step51: 11.2. Dust Is Required Step52: 11.3. Organics Is Required Step53: 12. Optical Radiative Properties --&gt; Mixtures ** 12.1. External Is Required Step54: 12.2. Internal Is Required Step55: 12.3. Mixing Rule Is Required Step56: 13. Optical Radiative Properties --&gt; Impact Of H2o ** 13.1. Size Is Required Step57: 13.2. Internal Mixture Is Required Step58: 14. Optical Radiative Properties --&gt; Radiative Scheme Radiative scheme for aerosol 14.1. Overview Is Required Step59: 14.2. Shortwave Bands Is Required Step60: 14.3. Longwave Bands Is Required Step61: 15. Optical Radiative Properties --&gt; Cloud Interactions Aerosol-cloud interactions 15.1. Overview Is Required Step62: 15.2. Twomey Is Required Step63: 15.3. Twomey Minimum Ccn Is Required Step64: 15.4. Drizzle Is Required Step65: 15.5. Cloud Lifetime Is Required Step66: 15.6. Longwave Bands Is Required Step67: 16. Model Aerosol model 16.1. Overview Is Required Step68: 16.2. Processes Is Required Step69: 16.3. Coupling Is Required Step70: 16.4. Gas Phase Precursors Is Required Step71: 16.5. Scheme Type Is Required Step72: 16.6. Bulk Scheme Species Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'mpi-m', 'sandbox-1', 'aerosol') Explanation: ES-DOC CMIP6 Model Properties - Aerosol MIP Era: CMIP6 Institute: MPI-M Source ID: SANDBOX-1 Topic: Aerosol Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model. Properties: 69 (37 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:17 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Software Properties 3. Key Properties --&gt; Timestep Framework 4. Key Properties --&gt; Meteorological Forcings 5. Key Properties --&gt; Resolution 6. Key Properties --&gt; Tuning Applied 7. Transport 8. Emissions 9. Concentrations 10. Optical Radiative Properties 11. Optical Radiative Properties --&gt; Absorption 12. Optical Radiative Properties --&gt; Mixtures 13. Optical Radiative Properties --&gt; Impact Of H2o 14. Optical Radiative Properties --&gt; Radiative Scheme 15. Optical Radiative Properties --&gt; Cloud Interactions 16. Model 1. Key Properties Key properties of the aerosol model 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of aerosol model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of aerosol model code End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.scheme_scope') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "troposhere" # "stratosphere" # "mesosphere" # "mesosphere" # "whole atmosphere" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.3. Scheme Scope Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Atmospheric domains covered by the aerosol model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.basic_approximations') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.4. Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Basic approximations made in the aerosol model End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "3D mass/volume ratio for aerosols" # "3D number concenttration for aerosols" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.5. Prognostic Variables Form Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Prognostic variables in the aerosol model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 1.6. Number Of Tracers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of tracers in the aerosol model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.family_approach') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 1.7. Family Approach Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are aerosol calculations generalized into families of species? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Software Properties Software properties of aerosol code 2.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Uses atmospheric chemistry time stepping" # "Specific timestepping (operator splitting)" # "Specific timestepping (integrated)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Timestep Framework Physical properties of seawater in ocean 3.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Mathematical method deployed to solve the time evolution of the prognostic variables End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.2. 
Split Operator Advection Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for aerosol advection (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.3. Split Operator Physical Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for aerosol physics (in seconds). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.4. Integrated Timestep Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestep for the aerosol model (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Implicit" # "Semi-implicit" # "Semi-analytic" # "Impact solver" # "Back Euler" # "Newton Raphson" # "Rosenbrock" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 3.5. Integrated Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify the type of timestep scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Meteorological Forcings ** 4.1. Variables 3D Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Three dimensionsal forcing variables, e.g. U, V, W, T, Q, P, conventive mass flux End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.2. Variables 2D Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Two dimensionsal forcing variables, e.g. land-sea mask definition End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.3. Frequency Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Frequency with which meteological forcings are applied (in seconds). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Key Properties --&gt; Resolution Resolution in the aersosol model grid 5.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.2. Canonical Horizontal Resolution Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 5.3. Number Of Horizontal Gridpoints Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 5.4. Number Of Vertical Levels Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Number of vertical levels resolved on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 5.5. Is Adaptive Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Default is False. Set true if grid resolution changes during execution. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Key Properties --&gt; Tuning Applied Tuning methodology for aerosol model 6.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics of the global mean state used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics of mean state used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.transport.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Transport Aerosol transport 7.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of transport in atmosperic aerosol model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.transport.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Uses Atmospheric chemistry transport scheme" # "Specific transport scheme (eulerian)" # "Specific transport scheme (semi-lagrangian)" # "Specific transport scheme (eulerian and semi-lagrangian)" # "Specific transport scheme (lagrangian)" # TODO - please enter value(s) Explanation: 7.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method for aerosol transport modeling End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Uses Atmospheric chemistry transport scheme" # "Mass adjustment" # "Concentrations positivity" # "Gradients monotonicity" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 7.3. Mass Conservation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Method used to ensure mass conservation. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.transport.convention') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Uses Atmospheric chemistry transport scheme" # "Convective fluxes connected to tracers" # "Vertical velocities connected to tracers" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 7.4. Convention Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Transport by convention End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Emissions Atmospheric aerosol emissions 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of emissions in atmosperic aerosol model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Prescribed (climatology)" # "Prescribed CMIP6" # "Prescribed above surface" # "Interactive" # "Interactive above surface" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 8.2. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Method used to define aerosol species (several methods allowed because the different species may not use the same method). End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.aerosol.emissions.sources') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Vegetation" # "Volcanos" # "Bare ground" # "Sea surface" # "Lightning" # "Fires" # "Aircraft" # "Anthropogenic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 8.3. Sources Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Sources of the aerosol species are taken into account in the emissions scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Interannual" # "Annual" # "Monthly" # "Daily" # TODO - please enter value(s) Explanation: 8.4. Prescribed Climatology Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify the climatology type for aerosol emissions End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.5. Prescribed Climatology Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of aerosol species emitted and prescribed via a climatology End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.6. Prescribed Spatially Uniform Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of aerosol species emitted and prescribed as spatially uniform End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.7. Interactive Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of aerosol species emitted and specified via an interactive method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.other_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.8. Other Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of aerosol species emitted and specified via an &quot;other method&quot; End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.9. Other Method Characteristics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Characteristics of the &quot;other method&quot; used for aerosol emissions End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.concentrations.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Concentrations Atmospheric aerosol concentrations 9.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of concentrations in atmosperic aerosol model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.2. Prescribed Lower Boundary Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed at the lower boundary. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.3. Prescribed Upper Boundary Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed at the upper boundary. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.4. Prescribed Fields Mmr Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed as mass mixing ratios. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.5. Prescribed Fields Mmr Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed as AOD plus CCNs. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10. Optical Radiative Properties Aerosol optical and radiative properties 10.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of optical and radiative properties End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 11. Optical Radiative Properties --&gt; Absorption Absortion properties in aerosol scheme 11.1. Black Carbon Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 11.2. Dust Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 11.3. Organics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 12. Optical Radiative Properties --&gt; Mixtures ** 12.1. External Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there external mixing with respect to chemical composition? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 12.2. Internal Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there internal mixing with respect to chemical composition? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.3. Mixing Rule Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If there is internal mixing with respect to chemical composition then indicate the mixinrg rule End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 13. Optical Radiative Properties --&gt; Impact Of H2o ** 13.1. Size Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does H2O impact size? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 13.2. Internal Mixture Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does H2O impact internal mixture? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14. Optical Radiative Properties --&gt; Radiative Scheme Radiative scheme for aerosol 14.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of radiative scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 14.2. Shortwave Bands Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of shortwave bands End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 14.3. Longwave Bands Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of longwave bands End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15. Optical Radiative Properties --&gt; Cloud Interactions Aerosol-cloud interactions 15.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of aerosol-cloud interactions End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 15.2. Twomey Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the Twomey effect included? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 15.3. Twomey Minimum Ccn Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If the Twomey effect is included, then what is the minimum CCN number? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 15.4. Drizzle Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the scheme affect drizzle? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 15.5. Cloud Lifetime Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the scheme affect cloud lifetime? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 15.6. Longwave Bands Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of longwave bands End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 16. Model Aerosol model 16.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of atmosperic aerosol model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Dry deposition" # "Sedimentation" # "Wet deposition (impaction scavenging)" # "Wet deposition (nucleation scavenging)" # "Coagulation" # "Oxidation (gas phase)" # "Oxidation (in cloud)" # "Condensation" # "Ageing" # "Advection (horizontal)" # "Advection (vertical)" # "Heterogeneous chemistry" # "Nucleation" # TODO - please enter value(s) Explanation: 16.2. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Processes included in the Aerosol model. 
End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Radiation" # "Land surface" # "Heterogeneous chemistry" # "Clouds" # "Ocean" # "Cryosphere" # "Gas phase chemistry" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16.3. Coupling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Other model components coupled to the Aerosol model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.gas_phase_precursors') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "DMS" # "SO2" # "Ammonia" # "Iodine" # "Terpene" # "Isoprene" # "VOC" # "NOx" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16.4. Gas Phase Precursors Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of gas phase aerosol precursors. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Bulk" # "Modal" # "Bin" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16.5. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.bulk_scheme_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sulphate" # "Nitrate" # "Sea salt" # "Dust" # "Ice" # "Organic" # "Black carbon / soot" # "SOA (secondary organic aerosols)" # "POM (particulate organic matter)" # "Polar stratospheric ice" # "NAT (Nitric acid trihydrate)" # "NAD (Nitric acid dihydrate)" # "STS (supercooled ternary solution aerosol particule)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16.6. Bulk Scheme Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of species covered by the bulk scheme. End of explanation
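Purely as an illustration of how these TODO cells are meant to be completed (a placeholder value, not MPI-M's actual configuration), a filled-in property cell would presumably look like:
# Example only - pick the property, then record one of the valid choices listed for it
DOC.set_id('cmip6.aerosol.model.scheme_type')
DOC.set_value("Modal")  # hypothetical placeholder choice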
6,465
Given the following text description, write Python code to implement the functionality described below step by step Description: <img alt="Colaboratory logo" height="45px" src="https Step1: Hidden cells Some cells contain code that is necessary but not interesting for the exercise at hand. These cells will typically be collapsed to let you focus on more interesting pieces of code. If you want to see their contents, double-click the cell. Whether you peek inside or not, you must run the hidden cells for the code inside to be interpreted. Try it now, the cell is marked RUN ME. Step2: Did it work ? If not, run the collapsed cell marked RUN ME and try again! Accelerators Colaboratory offers free GPU and TPU (Tensor Processing Unit) accelerators. You can choose your accelerator in Runtime > Change runtime type The cell below is the standard boilerplate code that enables distributed training on GPUs or TPUs in Keras.
Python Code: import math import tensorflow as tf from matplotlib import pyplot as plt print("Tensorflow version " + tf.__version__) a=1 b=2 a+b Explanation: <img alt="Colaboratory logo" height="45px" src="https://colab.research.google.com/img/colab_favicon.ico" align="left" hspace="10px" vspace="0px"> <h1>Welcome to Colaboratory!</h1> Colaboratory is a free Jupyter notebook environment that requires no setup and runs entirely in the cloud. With Colaboratory you can write and execute code, save and share your analyses, and access powerful computing resources, all for free from your browser. Running code Code cells can be executed in sequence by pressing Shift-ENTER. Try it now. End of explanation #@title "Hidden cell with boring code [RUN ME]" def display_sinusoid(): X = range(180) Y = [math.sin(x/10.0) for x in X] plt.plot(X, Y) display_sinusoid() Explanation: Hidden cells Some cells contain code that is necessary but not interesting for the exercise at hand. These cells will typically be collapsed to let you focus at more interesting pieces of code. If you want to see their contents, double-click the cell. Wether you peek inside or not, you must run the hidden cells for the code inside to be interpreted. Try it now, the cell is marked RUN ME. End of explanation # Detect hardware try: # detect TPUs tpu = tf.distribute.cluster_resolver.TPUClusterResolver.connect() # TPU detection strategy = tf.distribute.TPUStrategy(tpu) except ValueError: # detect GPUs strategy = tf.distribute.MirroredStrategy() # for GPU or multi-GPU machines (works on CPU too) #strategy = tf.distribute.get_strategy() # default strategy that works on CPU and single GPU #strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy() # for clusters of multi-GPU machines # How many accelerators do we have ? print("Number of accelerators: ", strategy.num_replicas_in_sync) # To use the selected distribution strategy: # with strategy.scope: # # --- define your (Keras) model here --- # # For distributed computing, the batch size and learning rate need to be adjusted: # global_batch_size = BATCH_SIZE * strategy.num_replicas_in_sync # num replcas is 8 on a single TPU or N when runing on N GPUs. # learning_rate = LEARNING_RATE * strategy.num_replicas_in_sync Explanation: Did it work ? If not, run the collapsed cell marked RUN ME and try again! Accelerators Colaboratory offers free GPU and TPU (Tensor Processing Unit) accelerators. You can choose your accelerator in Runtime > Change runtime type The cell below is the standard boilerplate code that enables distributed training on GPUs or TPUs in Keras. End of explanation
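To make the commented usage notes at the end of that cell concrete, here is a minimal sketch of my own (note that scope is a method call, strategy.scope(), unlike the comment above; the model, layer sizes, and hyperparameters are arbitrary placeholders):
BATCH_SIZE = 64            # per-replica batch size (illustrative)
LEARNING_RATE = 0.001
global_batch_size = BATCH_SIZE * strategy.num_replicas_in_sync
learning_rate = LEARNING_RATE * strategy.num_replicas_in_sync
with strategy.scope():     # model and optimizer must be created inside the scope
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation='relu', input_shape=(784,)),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=learning_rate),
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
# model.fit(...) would then be called with batches of size global_batch_size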
6,466
Given the following text description, write Python code to implement the functionality described below step by step Description: Sequence-to-Sequence Learning Many kinds of problems need us to predict a output sequence given an input sequence. This is called a sequence-to-sequence problem. One such sequence-to-sequence problem is machine translation, which is what we'll try here. The general idea of sequence-to-sequence learning with neural networks is that we have one network that is an encoder (an RNN), transforming the input sequence into some encoded representation. This representation is then fed into another network, the decoder (also an RNN), which generates an output sequence for us. That's the basic idea, anyway. There are enhancements, most notably the inclusion of an attention mechanism, which doesn't look at the encoder's single final representation but all of its intermediary representations as well. The attention mechanism involves the decoder weighting different parts of these intermediary representations so it "focuses" on certain parts at certain time steps. Another enhancement is to use a bidirectional RNN - that is, to look at the input sequence from start to finish and from finish to start. This helps because when we represent an input sequence as a single representation vector, it tends to be biased towards later parts of the sequence. We can push back against this a bit by reading the sequence both forwards and backwards. We'll work through a few variations here on the basic sequence-to-sequence architecture Step2: Let's load the corpus. We are going to do some additional processing on it, mainly to filter out sentences that are too long. Sequence-to-sequence learning can get difficult if the sequences are long; the resulting representation is biased towards later elements of the sequence. Attention mechanisms should help with this, but as I said we aren't going to explore them here (sorry). Fortunately bidirectional RNNs help too. We'll also limit our vocabulary size and the number of examples we look at to limit memory usage. Step3: It will help if we explicitly tell our network where sentences begin and end so that it can learn when to start/stop generating words (this is explained a more here). To do so we'll specify special start and end tokens. Make sure they aren't tokens that are already present in your corpus! Step4: Now we can use Keras' tokenizers to tokenize the source sequences and target sequences (note that "input" and "source" are interchangeable, as are "output" and "target"). Step5: Our input sentences are variable in length, but we can't directly input variable length vectors into our network. What we do instead is pad it with a special padding character (Keras takes care of this for us, which I'll explain a bit more below). We need to figure out the longest input and output sequences so that we make our vectors long enough to fit them. Step6: The tokenizers will take text and output a sequence of integers (which are mapped to words). Then we'll pad these sequences so that they are all of the same length (the padding value Keras uses is 0, which is why the tokenizer doesn't assign that value to any words). For example Step8: The 0 values are padding, the 1 is our start token, the 2 is our end token, and the rest are other words. The first sequence-to-sequence model we'll build will take one-hot vectors as input, so we'll write a function that takes these sequences and converts them. (Our RNN guide explains more about one-hot vectors.) 
Step10: Basically what this does is represent each input sequence as a matrix of one-hot vectors. This image is from our RNN guide, which deals with individual characters, but the idea is the same (just imagine words instead of characters) Step12: Defining the model Now we can start defining the sequence-to-sequence model. Since there's a lot of overlap between the one-hot and embedding versions and the bidirectional and unidirectional variations, we'll write a function that can generate a model of either combination. Step13: We're using Keras's functional API because it provides a great deal more flexibility when defining models. Layers and inputs can be linked up in ways that the sequential API doesn't support and is in general easier to develop with (you can view the output of intermediary layers, for instance). In any case, this is what we're doing here Step14: To start, we're preparing a reverse word index which will let us put in a number and get back the associated word. The decode_outputs function then just takes that 3-tensor stack of probability distributions (predictions). The variable probs represents a tier in that stack. With argmax get the indices of the highest-probability words, then we look up each of those in our reverse word index to get the actual word. We join them up with spaces and voilá, we have our translation. Training But first we have to train the model. To reduce memory usage while training, we're going to write a generator to output training data on-the-fly. This way all the data won't sit around in memory. It will generate one-hot vectors or output the raw sequences (which we need for the embedding approach) according to the one_hot parameter and output them in chunks of the batch size we specify. In the interest of neater code, we're writing this batch generator so that it can also generate raw sequences if we set one_hot=False (we'll need this when we try the embedding approach). So first we'll define a convenience function for that Step15: And then define the actual batch generator Step16: Now let's build the model and train it. We'll train it using the categorical cross-entropy loss function because this is essentially a classification problem, where we have target_vocab_size "categories". Training will likely take a very long time. 100 epochs took me a couple hours on an Nvidia GTX 980Ti. As I note later, 100 epochs is not enough to get the network performing very well; that choice is more in the interest of trying multiple models and not wanting to wait for days. Step17: Since we're going to be trying a few different models, let's also write a function to make it easier to generate translations. Step18: Let's give it a shot Step19: That's pretty bad to be honest. As I said before, I don't think you'll have particularly good results unless you train for a significantly longer amount of time. In the meantime, let's try this task with a model that learns embeddings, instead of using one-hot vectors. We can just use what we've got, but specifying one_hot=True. Step20: And we can try the bidirectional variations, e.g.
Python Code: import numpy as np from keras.models import Model from keras.layers.recurrent import LSTM from keras.layers.embeddings import Embedding from keras.layers.wrappers import TimeDistributed from keras.preprocessing.sequence import pad_sequences from keras.preprocessing.text import Tokenizer from keras.layers import Activation, Dense, RepeatVector, Input, merge Explanation: Sequence-to-Sequence Learning Many kinds of problems need us to predict a output sequence given an input sequence. This is called a sequence-to-sequence problem. One such sequence-to-sequence problem is machine translation, which is what we'll try here. The general idea of sequence-to-sequence learning with neural networks is that we have one network that is an encoder (an RNN), transforming the input sequence into some encoded representation. This representation is then fed into another network, the decoder (also an RNN), which generates an output sequence for us. That's the basic idea, anyway. There are enhancements, most notably the inclusion of an attention mechanism, which doesn't look at the encoder's single final representation but all of its intermediary representations as well. The attention mechanism involves the decoder weighting different parts of these intermediary representations so it "focuses" on certain parts at certain time steps. Another enhancement is to use a bidirectional RNN - that is, to look at the input sequence from start to finish and from finish to start. This helps because when we represent an input sequence as a single representation vector, it tends to be biased towards later parts of the sequence. We can push back against this a bit by reading the sequence both forwards and backwards. We'll work through a few variations here on the basic sequence-to-sequence architecture: with one-hot encoded inputs learning embeddings with a bidirectional encoder The attention mechanism is not very straightforward to incorporate with Keras (in my experience at least), but the seq2seq library includes one (I have not tried it myself). Data For sequence-to-sequence tasks we need a parallel corpus. This is just a corpus with input and output sequences that have been matched up (aligned) with one another. Note that "translation" doesn't have to just be between two languages - we could take any aligned parallel corpus and train a sequence-to-sequence model on it. It doesn't even have to be text, although what I'm showing here will be tailored for that. I'm going to be boring - here we'll just do a more conventional translation task. OPUS (Open Parallel Corpus) provides many free parallel corpora. In particular, we'll use their English-German Tatoeba corpus which consists of phrases translated from English to German or vice-versa. Some preprocessing was involved to extract just the aligned sentences from the various XML files OPUS provides; I've provided the processed data for you. Preparing the data First, let's import what we need. 
End of explanation import json data = json.load(open('../data/en_de_corpus.json', 'r')) # to deal with memory issues, # limit the dataset # we could also generate the training samples on-demand # with a generator and use keras models' `fit_generator` method max_len = 6 max_examples = 80000 max_vocab_size = 10000 def get_texts(source_texts, target_texts, max_len, max_examples): extract texts training gets difficult with widely varying lengths since some sequences are mostly padding long sequences get difficult too, so we are going to cheat and just consider short-ish sequences. this assumes whitespace as a token delimiter and that the texts are already aligned. sources, targets = [], [] for i, source in enumerate(source_texts): # assume we split on whitespace if len(source.split(' ')) <= max_len: target = target_texts[i] if len(target.split(' ')) <= max_len: sources.append(source) targets.append(target) return sources[:max_examples], targets[:max_examples] en_texts, de_texts = get_texts(data['en'], data['de'], max_len, max_examples) n_examples = len(en_texts) Explanation: Let's load the corpus. We are going to do some additional processing on it, mainly to filter out sentences that are too long. Sequence-to-sequence learning can get difficult if the sequences are long; the resulting representation is biased towards later elements of the sequence. Attention mechanisms should help with this, but as I said we aren't going to explore them here (sorry). Fortunately bidirectional RNNs help too. We'll also limit our vocabulary size and the number of examples we look at to limit memory usage. End of explanation # add start and stop tokens start_token = '^' end_token = '$' en_texts = [' '.join([start_token, text, end_token]) for text in en_texts] de_texts = [' '.join([start_token, text, end_token]) for text in de_texts] Explanation: It will help if we explicitly tell our network where sentences begin and end so that it can learn when to start/stop generating words (this is explained a more here). To do so we'll specify special start and end tokens. Make sure they aren't tokens that are already present in your corpus! End of explanation # characters for the tokenizers to filter out # preserve start and stop tokens filter_chars = '!"#$%&()*+,-./:;<=>?@[\\]^_{|}~\t\n\'`“”–'.replace(start_token, '').replace(end_token, '') source_tokenizer = Tokenizer(max_vocab_size, filters=filter_chars) source_tokenizer.fit_on_texts(en_texts) target_tokenizer = Tokenizer(max_vocab_size, filters=filter_chars) target_tokenizer.fit_on_texts(de_texts) # vocab sizes # idx 0 is reserved by keras (for padding) # and not part of the word_index, # so add 1 to account for it source_vocab_size = len(source_tokenizer.word_index) + 1 target_vocab_size = len(target_tokenizer.word_index) + 1 Explanation: Now we can use Keras' tokenizers to tokenize the source sequences and target sequences (note that "input" and "source" are interchangeable, as are "output" and "target"). End of explanation # find max length (in tokens) of input and output sentences max_input_length = max(len(seq) for seq in source_tokenizer.texts_to_sequences_generator(en_texts)) max_output_length = max(len(seq) for seq in target_tokenizer.texts_to_sequences_generator(de_texts)) Explanation: Our input sentences are variable in length, but we can't directly input variable length vectors into our network. What we do instead is pad it with a special padding character (Keras takes care of this for us, which I'll explain a bit more below). 
We need to figure out the longest input and output sequences so that we make our vectors long enough to fit them. End of explanation sequences = pad_sequences(source_tokenizer.texts_to_sequences(en_texts[:1]), maxlen=max_input_length) print(en_texts[0]) # >>> ^ I took the bus back. $ print(sequences[0]) # >>> [ 0 0 0 2 4 223 3 461 114 1] Explanation: The tokenizers will take text and output a sequence of integers (which are mapped to words). Then we'll pad these sequences so that they are all of the same length (the padding value Keras uses is 0, which is why the tokenizer doesn't assign that value to any words). For example: End of explanation def build_one_hot_vecs(sequences): generate one-hot vectors from token sequences # boolean to reduce memory footprint X = np.zeros((len(sequences), max_input_length, source_vocab_size), dtype=np.bool) for i, sent in enumerate(sequences): word_idxs = np.arange(max_input_length) X[i][[word_idxs, sent]] = True return X Explanation: The 0 values are padding, the 1 is our start token, the 2 is our end token, and the rest are other words. The first sequence-to-sequence model we'll build will take one-hot vectors as input, so we'll write a function that takes these sequences and converts them. (Our RNN guide explains more about one-hot vectors.) End of explanation def build_target_vecs(): encode words in the target sequences as one-hots y = np.zeros((n_examples, max_output_length, target_vocab_size), dtype=np.bool) for i, sent in enumerate(pad_sequences(target_tokenizer.texts_to_sequences(de_texts), maxlen=max_output_length)): word_idxs = np.arange(max_output_length) y[i][[word_idxs, sent]] = True return y Explanation: Basically what this does is represent each input sequence as a matrix of one-hot vectors. This image is from our RNN guide, which deals with individual characters, but the idea is the same (just imagine words instead of characters): You can think of this as a "stack" of "tiers". Each "tier" is a sequence, i.e. a sentence, each row in a tier is a word, and each element in a row is associated with a particular word. The "stack" is n_examples tall (one tier for each sentence), each tier has max_input_length rows (some of these first rows will just be padding), and each row is source_vocab_size long. We'll also encode our target sequences in this way: End of explanation hidden_dim = 128 embedding_dim = 128 def build_model(one_hot=False, bidirectional=False): build a vanilla sequence-to-sequence model. specify `one_hot=True` to build it for one-hot encoded inputs, otherwise, pass in sequences directly and embeddings will be learned. 
specify `bidirectional=True` to use a bidirectional LSTM if one_hot: input = Input(shape=(max_input_length,source_vocab_size)) input_ = input else: input = Input(shape=(max_input_length,), dtype='int32') input_ = Embedding(source_vocab_size, embedding_dim, input_length=max_input_length)(input) # encoder; don't return sequences, just give us one representation vector if bidirectional: forwards = LSTM(hidden_dim, return_sequences=False)(input_) backwards = LSTM(hidden_dim, return_sequences=False, go_backwards=True)(input_) encoder = merge([forwards, backwards], mode='concat', concat_axis=-1) else: encoder = LSTM(hidden_dim, return_sequences=False)(input_) # repeat encoder output for each desired output from the decoder encoder = RepeatVector(max_output_length)(encoder) # decoder; do return sequences (timesteps) decoder = LSTM(hidden_dim, return_sequences=True)(encoder) # apply the dense layer to each timestep # give output conforming to target vocab size decoder = TimeDistributed(Dense(target_vocab_size))(decoder) # convert to a proper distribution predictions = Activation('softmax')(decoder) return Model(input=input, output=predictions) Explanation: We're using Keras's functional API because it provides a great deal more flexibility when defining models. Layers and inputs can be linked up in ways that the sequential API doesn't support and is in general easier to develop with (you can view the output of intermediary layers, for instance). In any case, this is what we're doing here: we define the input layer (which must be explicitly defined for the functional API) then we assemble the encoder, which is an RNN (a LSTM, but you could use, for instance, a GRU) we set return_sequences to False because we only want the last output of the LSTM, which is its representation of an entire input sequence we then add RepeatVector to repeat this representation so that it's available for each of the decoder's inputs if we have bidirectional=True, we actually create two LSTMs, one of which reads the sequence backwards (the go_backwards parameter), then we concatenate them together then we assemble the decoder, which again is an RNN (also doesn't have to be an LSTM) here we set return_sequences to True because we want all the sequences (timesteps) produced by the LSTM to pass along then we add a TimeDistributed(Dense) layer; the TimeDistributed wrapper applies the Dense layer to each timestep The result of this final time distributed dense layer is a "stack" similar to the one we inputted. It also has n_examples tiers but now each tier has max_output_length rows (which again may consist of some padding rows), and each row is of target_vocab_size length. Another important difference is that these rows are not one-hot vectors.
They are each a probability distribution over the target vocabulary;the softmax layer is responsible for making sure each row sums to 1 like a proper probability distribution should. Here's a illustration depicting this for one input example. Note that in this illustration the raw source sequence of indices are passed into the encoder, which is how the embedding variation of this model works; for the one-hot variation there would be an intermediary step where we create the one-hot vectors. This "stack" (which is technically called a 3-tensor) basically the translated sequence that we want, except we have to do some additional processing to turn it back into text. In the illustration above, the output of the decoder corresponds to one tier in this stack. Let's prepare that preprocessing now. Basically, we will take these probabilities and translate them into words, as illustrated in the last two steps above. End of explanation def build_seq_vecs(sequences): return np.array(sequences) Explanation: To start, we're preparing a reverse word index which will let us put in a number and get back the associated word. The decode_outputs function then just takes that 3-tensor stack of probability distributions (predictions). The variable probs represents a tier in that stack. With argmax get the indices of the highest-probability words, then we look up each of those in our reverse word index to get the actual word. We join them up with spaces and voilá, we have our translation. Training But first we have to train the model. To reduce memory usage while training, we're going to write a generator to output training data on-the-fly. This way all the data won't sit around in memory. It will generate one-hot vectors or output the raw sequences (which we need for the embedding approach) according to the one_hot parameter and output them in chunks of the batch size we specify. In the interest of neater code, we're writing this batch generator so that it can also generate raw sequences if we set one_hot=False (we'll need this when we try the embedding approach). So first we'll define a convenience function for that: End of explanation import math def generate_batches(batch_size, one_hot=False): # each epoch n_batches = math.ceil(n_examples/batch_size) while True: sequences = pad_sequences(source_tokenizer.texts_to_sequences(en_texts), maxlen=max_input_length) if one_hot: X = build_one_hot_vecs(sequences) else: X = build_seq_vecs(sequences) y = build_target_vecs() # shuffle idx = np.random.permutation(len(sequences)) X = X[idx] y = y[idx] for i in range(n_batches): start = batch_size * i end = start+batch_size yield X[start:end], y[start:end] Explanation: And then define the actual batch generator: End of explanation n_epochs = 100 batch_size = 128 model = build_model(one_hot=True, bidirectional=False) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) model.fit_generator(generator=generate_batches(batch_size, one_hot=True), samples_per_epoch=n_examples, nb_epoch=n_epochs, verbose=1) Explanation: Now let's build the model and train it. We'll train it using the categorical cross-entropy loss function because this is essentially a classification problem, where we have target_vocab_size "categories". Training will likely take a very long time. 100 epochs took me a couple hours on an Nvidia GTX 980Ti. As I note later, 100 epochs is not enough to get the network performing very well; that choice is more in the interest of trying multiple models and not wanting to wait for days. 
End of explanation def translate(model, sentences, one_hot=False): seqs = pad_sequences(source_tokenizer.texts_to_sequences(sentences), maxlen=max_input_length) if one_hot: input = build_one_hot_vecs(seqs) else: input = build_seq_vecs(seqs) preds = model.predict(input, verbose=0) return decode_outputs(preds) Explanation: Since we're going to be trying a few different models, let's also write a function to make it easier to generate translations. End of explanation print(en_texts[0]) print(de_texts[0]) print(translate(model, [en_texts[0]], one_hot=True)) # >>> ^ I took the bus back. $ # >>> ^ Ich nahm den Bus zurück. $ # >>> ^ ich ich die die verloren $ Explanation: Let's give it a shot: End of explanation model = build_model(one_hot=False, bidirectional=False) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) model.fit_generator(generator=generate_batches(batch_size, one_hot=False), samples_per_epoch=n_examples, nb_epoch=n_epochs, verbose=1) Explanation: That's pretty bad to be honest. As I said before, I don't think you'll have particularly good results unless you train for a significantly longer amount of time. In the meantime, let's try this task with a model that learns embeddings, instead of using one-hot vectors. We can just use what we've got, but specifying one_hot=False. End of explanation model = build_model(one_hot=False, bidirectional=True) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) model.fit_generator(generator=generate_batches(batch_size, one_hot=False), samples_per_epoch=n_examples, nb_epoch=n_epochs, verbose=1) Explanation: And we can try the bidirectional variations, e.g. End of explanation
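The comparisons above only eyeball a single sentence. A rough way to compare the one-hot, embedding, and bidirectional variants is a token-overlap score computed with the translate helper defined above. This is only a sketch: the names model_onehot, model_embed, and model_bidir are hypothetical (the notebook reuses the single name model), the score is computed on training sentences rather than a held-out set, and token overlap is a much cruder measure than a proper metric such as BLEU.

```python
def token_overlap(predicted, reference):
    # fraction of reference tokens (ignoring the ^/$ markers) that appear in the prediction
    ref_tokens = [t for t in reference.lower().split() if t not in (start_token, end_token)]
    pred_tokens = set(predicted.lower().split())
    if not ref_tokens:
        return 0.0
    return sum(1 for t in ref_tokens if t in pred_tokens) / float(len(ref_tokens))

def rough_score(trained_model, one_hot, n=200):
    preds = translate(trained_model, en_texts[:n], one_hot=one_hot)
    scores = [token_overlap(p, r) for p, r in zip(preds, de_texts[:n])]
    return sum(scores) / len(scores)

# hypothetical names for the three separately trained models above
# print('one-hot:       {:.3f}'.format(rough_score(model_onehot, one_hot=True)))
# print('embedding:     {:.3f}'.format(rough_score(model_embed, one_hot=False)))
# print('bidirectional: {:.3f}'.format(rough_score(model_bidir, one_hot=False)))
```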
6,467
Given the following text description, write Python code to implement the functionality described below step by step Description: Testing the nscore transformation table Step1: Getting the data ready for work If the data is in GSLIB format you can use the function pygslib.gslib.read_gslib_file(filename) to import the data into a Pandas DataFrame. Step2: The nscore transformation table function Step3: Important. You may run ns_ttable in order to optain transin, transout Get the transformation table Step4: Get the normal score transformation Note that the declustering is applied on the transformation tables Step5: Normal score transformation using rank
Python Code: #general imports import matplotlib.pyplot as plt import pygslib from matplotlib.patches import Ellipse import numpy as np import pandas as pd #make the plots inline %matplotlib inline Explanation: Testing the nscore transformation table End of explanation #get the data in gslib format into a pandas Dataframe mydata= pygslib.gslib.read_gslib_file('../datasets/cluster.dat') # This is a 2D file, in this GSLIB version we require 3D data and drillhole name or domain code # so, we are adding constant elevation = 0 and a dummy BHID = 1 mydata['Zlocation']=0 mydata['bhid']=1 # printing to verify results print (' \n **** 5 first rows in my datafile \n\n ', mydata.head(n=5)) #view data in a 2D projection plt.scatter(mydata['Xlocation'],mydata['Ylocation'], c=mydata['Primary']) plt.colorbar() plt.grid(True) plt.show() Explanation: Getting the data ready for work If the data is in GSLIB format you can use the function pygslib.gslib.read_gslib_file(filename) to import the data into a Pandas DataFrame. End of explanation print (pygslib.gslib.__dist_transf.nscore.__doc__) Explanation: The nscore transformation table function End of explanation transin,transout, error = pygslib.gslib.__dist_transf.ns_ttable(mydata['Primary'],mydata['Declustering Weight']) print ('there was any error?: ', error!=0) Explanation: Important. You may run ns_ttable in order to optain transin, transout Get the transformation table End of explanation mydata['NS_Primary'] = pygslib.gslib.__dist_transf.nscore(mydata['Primary'],transin,transout,getrank=False) mydata['NS_Primary'].hist(bins=30) Explanation: Get the normal score transformation Note that the declustering is applied on the transformation tables End of explanation mydata['NS_Primary'] = pygslib.gslib.__dist_transf.nscore(mydata['Primary'],transin,transout,getrank=True) mydata['NS_Primary'].hist(bins=30) Explanation: Normal score transformation using rank End of explanation
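For readers who want to see what the Fortran-backed ns_ttable/nscore calls are doing conceptually, the sketch below re-derives a declustered normal score transform with plain numpy/scipy: sort the data, accumulate the declustering weights into a cumulative probability, and map that probability to standard normal quantiles. It is an illustration of the idea only, not a drop-in replacement for pygslib; tie handling and tail extrapolation are ignored.

```python
import numpy as np
from scipy.stats import norm

def nscore_sketch(values, weights=None):
    values = np.asarray(values, dtype=float)
    weights = np.ones_like(values) if weights is None else np.asarray(weights, dtype=float)
    order = np.argsort(values)                  # rank the data by value
    w_sorted = weights[order]
    cum = np.cumsum(w_sorted) - 0.5 * w_sorted  # midpoint cumulative probability
    cum /= w_sorted.sum()
    ns = np.empty_like(values)
    ns[order] = norm.ppf(cum)                   # map probabilities to normal quantiles
    return ns

# e.g. nscore_sketch(mydata['Primary'].values, mydata['Declustering Weight'].values)
# should give a roughly standard-normal histogram, much like the pygslib result above
```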
6,468
Given the following text description, write Python code to implement the functionality described below step by step Description: Anna KaRNNa In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book. This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN. <img src="assets/charseq.jpeg" width="500"> Step1: First we'll load the text file and convert it into integers for our network to use. Step3: Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text. Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches. The idea here is to make a 2D matrix where the number of rows is equal to the number of batches. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set. Step4: I'll write another function to grab batches out of the arrays made by split data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch. Step5: Hyperparameters Here I'm defining the hyperparameters for the network. The two you probably haven't seen before are lstm_size and num_layers. These set the number of hidden units in the LSTM layers and the number of LSTM layers, respectively. Of course, making these bigger will improve the network's performance but you'll have to watch out for overfitting. If your validation loss is much larger than the training loss, you're probably overfitting. Decrease the size of the network or decrease the dropout keep probability. Step6: Write out the graph for TensorBoard Step7: Training Time for training which is is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint. Step8: Sampling Now that the network is trained, we'll can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one, to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that. The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
Python Code: import time from collections import namedtuple import numpy as np import tensorflow as tf Explanation: Anna KaRNNa In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book. This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN. <img src="assets/charseq.jpeg" width="500"> End of explanation with open('anna.txt', 'r') as f: text=f.read() vocab = set(text) vocab_to_int = {c: i for i, c in enumerate(vocab)} int_to_vocab = dict(enumerate(vocab)) chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32) text[:100] chars[:100] Explanation: First we'll load the text file and convert it into integers for our network to use. End of explanation def split_data(chars, batch_size, num_steps, split_frac=0.9): Split character data into training and validation sets, inputs and targets for each set. Arguments --------- chars: character array batch_size: Size of examples in each of batch num_steps: Number of sequence steps to keep in the input and pass to the network split_frac: Fraction of batches to keep in the training set Returns train_x, train_y, val_x, val_y slice_size = batch_size * num_steps n_batches = int(len(chars) / slice_size) # Drop the last few characters to make only full batches x = chars[: n_batches*slice_size] y = chars[1: n_batches*slice_size + 1] # Split the data into batch_size slices, then stack them into a 2D matrix x = np.stack(np.split(x, batch_size)) y = np.stack(np.split(y, batch_size)) # Now x and y are arrays with dimensions batch_size x n_batches*num_steps # Split into training and validation sets, keep the virst split_frac batches for training split_idx = int(n_batches*split_frac) train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps] val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:] return train_x, train_y, val_x, val_y train_x, train_y, val_x, val_y = split_data(chars, 10, 200) train_x.shape train_x[:,:10] Explanation: Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text. Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches. The idea here is to make a 2D matrix where the number of rows is equal to the number of batches. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set. 
End of explanation def get_batch(arrs, num_steps): batch_size, slice_size = arrs[0].shape n_batches = int(slice_size/num_steps) for b in range(n_batches): yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs] def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2, learning_rate=0.001, grad_clip=5, sampling=False): if sampling == True: batch_size, num_steps = 1, 1 tf.reset_default_graph() # Declare placeholders we'll feed into the graph with tf.name_scope('inputs'): inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs') x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot') with tf.name_scope('targets'): targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets') y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot') y_reshaped = tf.reshape(y_one_hot, [-1, num_classes]) keep_prob = tf.placeholder(tf.float32, name='keep_prob') # Build the RNN layers with tf.name_scope("RNN_layers"): #lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size) #drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) #cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers) lstm = tf.nn.rnn_cell.BasicLSTMCell(lstm_size) drop = tf.nn.rnn_cell.DropoutWrapper(lstm, output_keep_prob=keep_prob) cell = tf.nn.rnn_cell.MultiRNNCell([drop] * num_layers) with tf.name_scope("RNN_init_state"): initial_state = cell.zero_state(batch_size, tf.float32) # Run the data through the RNN layers with tf.name_scope("RNN_forward"): rnn_inputs = [tf.squeeze(i, squeeze_dims=[1]) for i in tf.split(x_one_hot, num_steps, 1)] #outputs, state = tf.contrib.rnn.static_rnn(cell, rnn_inputs, initial_state=initial_state) outputs, state = tf.nn.rnn(cell, rnn_inputs, initial_state=initial_state) final_state = state # Reshape output so it's a bunch of rows, one row for each cell output with tf.name_scope('sequence_reshape'): #seq_output = tf.concat(outputs, axis=1,name='seq_output') seq_output = tf.concat(1,outputs,name='seq_output') output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output') # Now connect the RNN putputs to a softmax layer and calculate the cost with tf.name_scope('logits'): softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1), name='softmax_w') softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b') logits = tf.matmul(output, softmax_w) + softmax_b with tf.name_scope('predictions'): preds = tf.nn.softmax(logits, name='predictions') with tf.name_scope('cost'): loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss') cost = tf.reduce_mean(loss, name='cost') # Optimizer for training, using gradient clipping to control exploding gradients with tf.name_scope('train'): tvars = tf.trainable_variables() grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip) train_op = tf.train.AdamOptimizer(learning_rate) optimizer = train_op.apply_gradients(zip(grads, tvars)) # Export the nodes export_nodes = ['inputs', 'targets', 'initial_state', 'final_state', 'keep_prob', 'cost', 'preds', 'optimizer'] Graph = namedtuple('Graph', export_nodes) local_dict = locals() graph = Graph(*[local_dict[each] for each in export_nodes]) return graph Explanation: I'll write another function to grab batches out of the arrays made by split data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. 
For the next batch, we'll shift this window to the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch. End of explanation batch_size = 100 num_steps = 100 lstm_size = 512 num_layers = 2 learning_rate = 0.001 Explanation: Hyperparameters Here I'm defining the hyperparameters for the network. The two you probably haven't seen before are lstm_size and num_layers. These set the number of hidden units in the LSTM layers and the number of LSTM layers, respectively. Of course, making these bigger will improve the network's performance but you'll have to watch out for overfitting. If your validation loss is much larger than the training loss, you're probably overfitting. Decrease the size of the network or decrease the dropout keep probability. End of explanation model = build_rnn(len(vocab), batch_size=batch_size, num_steps=num_steps, learning_rate=learning_rate, lstm_size=lstm_size, num_layers=num_layers) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) file_writer = tf.summary.FileWriter('./logs/3', sess.graph) Explanation: Write out the graph for TensorBoard End of explanation
End of explanation def pick_top_n(preds, vocab_size, top_n=5): p = np.squeeze(preds) p[np.argsort(p)[:-top_n]] = 0 p = p / np.sum(p) c = np.random.choice(vocab_size, 1, p=p)[0] return c def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "): prime = "Far" samples = [c for c in prime] model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True) saver = tf.train.Saver() with tf.Session() as sess: saver.restore(sess, checkpoint) new_state = sess.run(model.initial_state) for c in prime: x = np.zeros((1, 1)) x[0,0] = vocab_to_int[c] feed = {model.inputs: x, model.keep_prob: 1., model.initial_state: new_state} preds, new_state = sess.run([model.preds, model.final_state], feed_dict=feed) c = pick_top_n(preds, len(vocab)) samples.append(int_to_vocab[c]) for i in range(n_samples): x[0,0] = c feed = {model.inputs: x, model.keep_prob: 1., model.initial_state: new_state} preds, new_state = sess.run([model.preds, model.final_state], feed_dict=feed) c = pick_top_n(preds, len(vocab)) samples.append(int_to_vocab[c]) return ''.join(samples) checkpoint = "checkpoints/anna/i3560_l512_1.122.ckpt" samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far") print(samp) checkpoint = "checkpoints/anna/i200_l512_2.432.ckpt" samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far") print(samp) checkpoint = "checkpoints/anna/i600_l512_1.750.ckpt" samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far") print(samp) checkpoint = "checkpoints/anna/i1000_l512_1.484.ckpt" samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far") print(samp) Explanation: Sampling Now that the network is trained, we'll can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one, to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that. The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters. End of explanation
6,469
Given the following text description, write Python code to implement the functionality described below step by step Description: Programming Bootcamp 2016 Lesson 7 Exercises - ANSWERS 1. Creating your function file (1pt) Up until now, we've created all of our functions inside the Jupyter notebook. However, in order to use your functions across different scripts, it's best to put them into a separate file. Then you can load this file from anywhere with a single line of code and have access to all your custom functions! Do the following Step1: Feel free to use these functions (and any others you've created) to solve the problems below. You can see in the test code above how they can be accessed. 2. Command line arguments (6pts) Note Step2: (B) Write a script that expects 3 numerical arguments ("a", "b", and "c") from the command line. - Check that the correct number of arguments is supplied (based on the length of sys.argv) - If not, print an error message and exit - Otherwise, go on to add the three numbers together and print the result. Copy and paste your code below once you have it working. Note Step3: (C) Here you will create a script that generates a random dataset of sequences. Your script should expect the following command line arguments, in this order. Remember to convert strings to ints when needed Step4: 3. time practice (7pts) For the following problems, use the file you created in the previous problem (fake.fasta) and the time.time() function. (Note Step5: (B) Counting characters. Is it faster to use the built-in function str.count() or to loop through a string and count characters manually? Compare the two by counting all the A's in all the sequences in fake.fasta using each method and comparing how long they take to run. (You do not need to output the counts) Step6: Which was faster? Method 2, str.count() (C) Replacing characters. Is it faster to use the built-in function str.replace() or to loop through a string and replace characters manually? Compare the two by replacing all the T's with U's in all the sequences in fake.fasta using each method, and comparing how long they take to run. (You do not need to output the edited sequences) Step7: Which was faster? Method 2, str.replace() (D) Lookup speed in data structures. Is it faster to get unique IDs using a list or a dictionary? Read in fake.fasta, ignoring everything but the header lines. Count the number of unique IDs (headers) using a list or dictionary, and compare how long each method takes to run. Be patient; this one might take a while to run! Step8: Which was faster? Method 2, dictionary If you're curious, below is a brief explanation of the outcomes you should have observed Step9: (B) Add to the code above so that it also does the following after creating the output folder Step10: (C) Now use glob to get a list of all files in the output folder from part (B) that have a .fasta extension. For each file, print just the file name (not the file path) to the screen.
Python Code: import imp my_utils = imp.load_source('my_utils', '../utilities/my_utils.py') #CHANGE THIS PATH # test that this worked print "Test my_utils.gc():", my_utils.gc("ATGGGCCCAATGG") print "Test my_utils.reverse_compl():", my_utils.reverse_compl("GGGGTCGATGCAAATTCAAA") print "Test my_utils.read_fasta():", my_utils.read_fasta("horrible.fasta") print "Test my_utils.rand_seq():", my_utils.rand_seq(23) print "Test my_utils.shuffle_nt():", my_utils.shuffle_nt("AAAAAAGTTTCCC") print "\nIf the above produced no errors, then you're good!" Explanation: Programming Bootcamp 2016 Lesson 7 Exercises - ANSWERS 1. Creating your function file (1pt) Up until now, we've created all of our functions inside the Jupyter notebook. However, in order to use your functions across different scripts, it's best to put them into a separate file. Then you can load this file from anywhere with a single line of code and have access to all your custom functions! Do the following: - Open up your plain-text editor. - Copy and paste all of the functions you created in lab6, question 3, into a blank file and save it as my_utils.py (don't forget the .py extension). Save it anywhere on your computer. - Edit the code below so that it imports your my_utils.py functions. You will need to change '../utilities/my_utils.py' to where you saved your file. Some notes: - You need to supply the path relative to where this notebook is. See the slides for some examples on how to specify paths. - You can use this method to import your functions from anywhere on your computer! In contrast, the regular import function will only find custom functions that are in the same directory as the current notebook/script. End of explanation import sys print sys.argv[1] print sys.argv[2] print sys.argv[3] print sys.argv[4] Explanation: Feel free to use these functions (and any others you've created) to solve the problems below. You can see in the test code above how they can be accessed. 2. Command line arguments (6pts) Note: Do the following in a SCRIPT, not in the notebook. You can not use command line arguments within Jupyter notebooks. After testing your code as a script, copy and paste it here for grading purposes only. (A) Write a script that expects 4 arguments, and prints those four arguments to the screen. Test this script by running it (on the command line) as shown in the lecture. Copy and paste the code below once you have it working. End of explanation import sys if len(sys.argv) == 4: a = float(sys.argv[1]) b = float(sys.argv[2]) c = float(sys.argv[3]) else: print "Incorrect args. Please enter three numbers." sys.exit() print a + b + c Explanation: (B) Write a script that expects 3 numerical arguments ("a", "b", and "c") from the command line. - Check that the correct number of arguments is supplied (based on the length of sys.argv) - If not, print an error message and exit - Otherwise, go on to add the three numbers together and print the result. Copy and paste your code below once you have it working. Note: All command line arguments are read in as strings (just like with raw_input). To use them as numbers, you must convert them with float(). End of explanation import sys, random, imp my_utils = imp.load_source('my_utils', '../utilities/my_utils.py') if len(sys.argv) == 5: outFile = sys.argv[1] numSeqs = int(sys.argv[2]) minLength = int(sys.argv[3]) maxLength = int(sys.argv[4]) else: print "Incorrect args. Please enter an outfile, numSeqs, minLength, maxLength." 
sys.exit() outs = open(outFile, 'w') for i in range(numSeqs): randLen = random.randint(minLength, maxLength) randSeq = my_utils.rand_seq(randLen) seqName = "seq" + str(i) outs.write(">" + seqName + "\n" + randSeq + "\n") outs.close() Explanation: (C) Here you will create a script that generates a random dataset of sequences. Your script should expect the following command line arguments, in this order. Remember to convert strings to ints when needed: 1. outFile - string; name of the output file the generated sequences will be printed to 1. numSeqs - integer; number of sequences to create 1. minLength - integer; minimum sequence length 1. maxLength - integer; maximum sequence length The script should read in these arguments and check if the correct number of arguments is supplied (exit if not). If all looks good, then print the indicated number of randomly generated sequences as follows: - the length of each individual sequence should be randomly chosen to be between minLength and maxLength (so that not all sequences are the same length) - each sequence should be given a unique ID (e.g. using a counter to make names like seq1, seq2, ...) - the output should be in fasta format (>seqID\nsequence\n) - the output shold be printed to the indicated file Then, run your script to create a file called fake.fasta containing 100,000 random sequences of random length 50-500 nt. Copy and paste your code below once you have it working. End of explanation import time start = time.time() sillyList = [] for i in range(50000): sillyList.append(sum(sillyList)) end = time.time() print end - start Explanation: 3. time practice (7pts) For the following problems, use the file you created in the previous problem (fake.fasta) and the time.time() function. (Note: there is also a copy of fake.fasta on Piazza if you need it.) Note: Do not include the time it takes to read the file in your time calculation! Loading files can take a while. (A) Initial practice with timing. Add code to the following cell to time how long it takes to run. Print the result. End of explanation # Method 1 (Manual counting) seqDict = my_utils.read_fasta("fake.fasta") start = time.time() for seqID in seqDict: seq = seqDict[seqID] count = 0 for char in seq: if char == "A": count += 1 end = time.time() print end - start # Method 2 (.count()) seqDict = my_utils.read_fasta("fake.fasta") start = time.time() for seqID in seqDict: seq = seqDict[seqID] count = seq.count("A") end = time.time() print end - start Explanation: (B) Counting characters. Is it faster to use the built-in function str.count() or to loop through a string and count characters manually? Compare the two by counting all the A's in all the sequences in fake.fasta using each method and comparing how long they take to run. (You do not need to output the counts) End of explanation # Method 1 (Manual replacement) seqDict = my_utils.read_fasta("fake.fasta") start = time.time() for seqID in seqDict: seq = seqDict[seqID] newSeq = "" for char in seq: if char == "T": newSeq += "U" else: newSeq += char end = time.time() print end - start # Method 2 (.replace()) seqDict = my_utils.read_fasta("fake.fasta") start = time.time() for seqID in seqDict: seq = seqDict[seqID] newSeq = seq.replace("T", "U") end = time.time() print end - start Explanation: Which was faster? Method 2, str.count() (C) Replacing characters. Is it faster to use the built-in function str.replace() or to loop through a string and replace characters manually? 
Compare the two by replacing all the T's with U's in all the sequences in fake.fasta using each method, and comparing how long they take to run. (You do not need to output the edited sequences) End of explanation # Method 1 (list) seqDict = my_utils.read_fasta("fake.fasta") uniqueIDs = [] start = time.time() for seqID in seqDict: if seqID not in uniqueIDs: uniqueIDs.append(seqID) end = time.time() print end - start # Method 2 (dictionary) seqDict = my_utils.read_fasta("fake.fasta") uniqueIDs = {} start = time.time() for seqID in seqDict: if seqID not in uniqueIDs: uniqueIDs[seqID] = True end = time.time() print end - start Explanation: Which was faster? Method 2, str.replace() (D) Lookup speed in data structures. Is it faster to get unique IDs using a list or a dictionary? Read in fake.fasta, ignoring everything but the header lines. Count the number of unique IDs (headers) using a list or dictionary, and compare how long each method takes to run. Be patient; this one might take a while to run! End of explanation import sys, os # read in command line args if len(sys.argv) == 3: inputFile = sys.argv[1] outputFolder = sys.argv[2] else: print ">>Error: Incorrect args. Please provide an input file name and an output folder. Exiting." sys.exit() # check if input file / output directory exist if not os.path.exists(inputFile): print ">>Error: input file (%s) does not exist. Exiting." % inputFile sys.exit() if not os.path.exists(outputFolder): print "Creating output folder (%s)" % outputFolder os.mkdir(outputFolder) Explanation: Which was faster? Method 2, dictionary If you're curious, below is a brief explanation of the outcomes you should have observed: (B) The built-in method should be much faster! Most built in functions are pretty well optimized, so they will often (but not always) be faster. (C) Again, the built in function should be quite a bit faster. (D) If you did this right, then the dictionary should be faster by several orders of magnitude. When you use a dictionary, Python jumps directly to where the requested key should be, if it were in the dictionary. This is very fast (it's an O(1) operation, for those who are familiar with the terminology). With lists, on the other hand, Python will scan through the whole list until it finds the requested element (or until it reaches the end). This gets slower and slower on average as you add more elements (it's an O(n) operation). Just something to keep in mind if you start working with very large datasets! 4. os and glob practice (6pts) Use horrible.fasta as a test fasta file for the following. (A) Write code that prompts the user (using raw_input()) for two pieces of information: an input file name (assumed to be a fasta file) and an output folder name (does not need to already exist). Then do the following: - Check if the input file exists - If it doesn't, print an error message - Otherwise, go on to check if the output folder exists - If it doesn't, create it Note: I did this below with command line args instead of raw_input() to give more examples of using args End of explanation import sys, os, my_utils # read in command line args if len(sys.argv) == 3: inputFile = sys.argv[1] outputFolder = sys.argv[2] else: print ">>Error: Incorrect args. Please provide an input file name and an output folder. Exiting." sys.exit() # check if input file / output directory exist if not os.path.exists(inputFile): print ">>Error: input file (%s) does not exist. Exiting." 
% inputFile sys.exit() if not os.path.exists(outputFolder): print "Creating output folder (%s)" % outputFolder os.mkdir(outputFolder) # read in sequences from fasta file & print to separate output files # you'll get an error for one of them because there's a ">" in the sequence id, # which is not allowed in a file name. you can handle this however you want. # here I used a try-except statement and just skipped the problematic file (with a warning message) seqs = my_utils.read_fasta(inputFile) for seqID in seqs: outFile = "%s/%s.fasta" % (outputFolder, seqID) outStr = ">%s\n%s\n" % (seqID, seqs[seqID]) try: outs = open(outFile, 'w') outs.write(outStr) outs.close() except IOError: print ">>Warning: Could not print (%s) file. Skipping." % outFile Explanation: (B) Add to the code above so that it also does the following after creating the output folder: - Read in the fasta file (ONLY if it exists) - Print each individual sequence to a separate file in the specified output folder. - The files should be named &lt;SEQID&gt;.fasta, where &lt;SEQID&gt; is the name of the sequence (from the fasta header) End of explanation import sys, glob, os # read in command line args if len(sys.argv) == 2: folderName = sys.argv[1] else: print ">>Error: Incorrect args. Please provide an folder name. Exiting." sys.exit() fastaList = glob.glob(folderName + "/*.fasta") for filePath in fastaList: print os.path.basename(filePath) Explanation: (C) Now use glob to get a list of all files in the output folder from part (B) that have a .fasta extension. For each file, print just the file name (not the file path) to the screen. End of explanation
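time.time() differences are fine for the coarse comparisons in these exercises, but for short snippets the standard library's timeit module is usually more trustworthy, since it repeats the statement many times with a high-resolution timer. A minimal sketch follows; the seq variable is a stand-in string, not a sequence actually read from fake.fasta.

```python
import timeit

setup = "seq = 'ACGT' * 10000  # stand-in for one sequence from fake.fasta"

manual_time = timeit.timeit("sum(1 for c in seq if c == 'A')", setup=setup, number=100)
builtin_time = timeit.timeit("seq.count('A')", setup=setup, number=100)

print("manual loop: %.4f s" % manual_time)
print("str.count(): %.4f s" % builtin_time)
```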
6,470
Given the following text description, write Python code to implement the functionality described below step by step Description: Neural Network demo (not tested yet) Start a Mosquitto container first. For example Step1: Start client Step2: Utility functions Step3: List connected nodes Step4: Rename nodes Step5: Setup network configuration Clear log files Step6: Setup connections Step7: Setup weights Step8: Setup thresholds Step9: Simulate sensor input,then observe outputs of neurons Step10: Stop the demo
Python Code: import os import sys import time sys.path.append(os.path.abspath(os.path.join(os.path.pardir, '..\\codes', 'client'))) sys.path.append(os.path.abspath(os.path.join(os.path.pardir, '..\\codes', 'node'))) sys.path.append(os.path.abspath(os.path.join(os.path.pardir, '..\\codes', 'shared'))) sys.path.append(os.path.abspath(os.path.join(os.path.pardir, '..\\codes', 'micropython'))) import client from collections import OrderedDict import pandas as pd from pandas import DataFrame from time import sleep REFRACTORY_PERIOD = 0.1 # 0.1 seconds # Each ESP8266 modules represents a neuron. We have 6 of them. neurons = ['neuron_x1', 'neuron_x2', 'neuron_h1', 'neuron_h2', 'neuron_h3', 'neuron_y'] Explanation: Neural Network demo (not tested yet) Start a Mosquitto container first. For example: - Use codes\_demo\1_start_broker.sh to start a Mosquitto container on Raspberry Pi. - Config files are in mqtt_config\mqtt. - set allow_anonymous true in mqtt_config\mqtt\config\mosquitto.conf to allow anonymous client. Getting Started What this notebook does: - Using: - a client on PC - 6 ESP8266 modules (NodeMCU and D1 mini) as remote nodes - List connected nodes - Rename remote nodes - Setup neural network configuration (connections, weights, thresholds) - Fire up neurons and get logs. End of explanation the_client = client.Client() the_client.start() while not the_client.status['Is connected']: time.sleep(1) print('Node not ready yet.') Explanation: Start client End of explanation # Ask Hub for a list of connected nodes def list_nodes(): the_client.node.worker.roll_call() time.sleep(2) remote_nodes = sorted(the_client.node.worker.contacts.keys()) print('\n[____________ Connected nodes ____________]\n') print('\nConnected nodes:\n{}\n'.format(remote_nodes)) return remote_nodes def reset_node(node): message = {'message_type': 'exec', 'to_exec': 'import machine;machine.reset()'} the_client.request(node, message) def rename_node(node, new_name): with open('temp.py', 'w') as f: f.write('WORKER_NAME = ' + '\"' + new_name + '\"\n') with open('temp.py') as f: script = f.read() message = {'message_type': 'file', 'file': script, 'kwargs': {'filename': 'worker_config.py'}} the_client.request(node, message) os.remove('temp.py') time.sleep(1) reset_node(node) def rename_nodes(nodes, neurons): i = 0 for node in nodes: if node != the_client.node.worker.name: # exclude client self rename_node(node, neurons[i]) i += 1 def fire(node): message = {'message_type': 'function', 'function': 'fire'} the_client.request(node, message) def addConnection(node, neuron): message = {'message_type': 'function', 'function': 'addConnection', 'kwargs': {'neuron_id': neuron}} the_client.request(node, message) def set_connections(node, connections): message = {'message_type': 'function', 'function': 'setConnections', 'kwargs': {'connections': connections}} the_client.request(node, message) def get_connections(node): message = {'message_type': 'function', 'function': 'getConnections', 'need_result': True} _, result = the_client.request(node, message) return result.get() def setWeight(node, neuron, weight): message = {'message_type': 'function', 'function': 'setWeight', 'kwargs': {'neuron_id': neuron, 'weight': weight,}} the_client.request(node, message) def setThreshold(node, threshold): message = {'message_type': 'function', 'function': 'setThreshold', 'kwargs': {'threshold': threshold}} the_client.request(node, message) def getConfig(node): message = {'message_type': 'function', 'function': 'getConfig', 'need_result': True} _, result = 
the_client.request(node, message) return result.get() def getLog(node): message = {'message_type': 'function', 'function': 'getLog', 'need_result': True} _, result = the_client.request(node, message) return result.get() def emptyLog(node): message = {'message_type': 'function', 'function': 'emptyLog'} the_client.request(node, message) def emptyLogs(): for neuron in neurons: emptyLog(neuron) def mergeLogs(): logs = [] for neuron in neurons: if neuron != the_client.node.worker.name: # exclude client self currentLog = getLog(neuron) if currentLog: logs += currentLog df = DataFrame(list(logs), columns = ['time', 'neuron', 'message']) df.set_index('time', inplace = True) df.sort_index(inplace = True) return df def printConfig(neuron): print('{0:_^78}\n {1}\n'.format(neuron + " config:", getConfig(neuron))) # fire('NodeMCU_1dsc000') Explanation: Utility functions End of explanation remote_nodes = list_nodes() Explanation: List connected nodes End of explanation rename_nodes(remote_nodes, neurons) time.sleep(2) remote_nodes = list_nodes() remote_nodes = list_nodes() Explanation: Rename nodes End of explanation emptyLogs() Explanation: Setup network configuration Clear log files End of explanation addConnection('neuron_x1', 'neuron_h1') addConnection('neuron_x1', 'neuron_h2') addConnection('neuron_x2', 'neuron_h2') addConnection('neuron_x2', 'neuron_h3') addConnection('neuron_h1', 'neuron_y') addConnection('neuron_h2', 'neuron_y') addConnection('neuron_h3', 'neuron_y') Explanation: Setup connections End of explanation setWeight('neuron_h1', 'neuron_x1', 1) setWeight('neuron_h2', 'neuron_x1', 1) setWeight('neuron_h2', 'neuron_x2', 1) setWeight('neuron_h3', 'neuron_x2', 1) setWeight('neuron_y', 'neuron_h1', 1) setWeight('neuron_y', 'neuron_h2', -2) setWeight('neuron_y', 'neuron_h3', 1) Explanation: Setup weights End of explanation setThreshold('neuron_x1', 0.9) setThreshold('neuron_x2', 0.9) setThreshold('neuron_h1', 0.9) setThreshold('neuron_h2', 1.9) setThreshold('neuron_h3', 0.9) setThreshold('neuron_y', 0.9) Explanation: Setup thresholds End of explanation ### Wait for a while until action potential quiet down. emptyLogs() sleep(REFRACTORY_PERIOD) mergeLogs() ### Simulate sensor input,force neuron_x1 to fire emptyLogs() sleep(REFRACTORY_PERIOD) fire('neuron_x1') mergeLogs() ### Simulate sensor input,force neuron_x2 to fire emptyLogs() sleep(REFRACTORY_PERIOD) fire('neuron_x2') mergeLogs() ### Simulate sensor input,force neuron_x1 and neuron_x2 to fire emptyLogs() sleep(REFRACTORY_PERIOD) fire('neuron_x1') fire('neuron_x2') mergeLogs() for neuron in reversed(neurons): printConfig(neuron) Explanation: Simulate sensor input,then observe outputs of neurons End of explanation # Stopping the_client.stop() the_client = None print ('\n[________________ Demo stopped ________________]\n') Explanation: Stop the demo End of explanation
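The weights and thresholds configured above wire the six ESP8266 neurons into an XOR gate: h2 acts as an AND of the two inputs, and its -2 weight suppresses the output neuron when both inputs fire. A plain-Python sketch of the same threshold logic, with made-up helper names, is a quick way to predict the expected firing pattern before watching the MQTT logs.

```python
def fires(weighted_sum, threshold):
    # a neuron fires when its weighted input exceeds its threshold
    return 1 if weighted_sum > threshold else 0

def xor_net(x1, x2):
    h1 = fires(1 * x1, 0.9)            # follows x1
    h2 = fires(1 * x1 + 1 * x2, 1.9)   # fires only if both inputs fire (AND)
    h3 = fires(1 * x2, 0.9)            # follows x2
    return fires(1 * h1 - 2 * h2 + 1 * h3, 0.9)

for x1 in (0, 1):
    for x2 in (0, 1):
        print('{} {} -> {}'.format(x1, x2, xor_net(x1, x2)))
# expected: the output neuron fires for (0,1) and (1,0) only, i.e. XOR
```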
6,471
Given the following text description, write Python code to implement the functionality described below step by step Description: Table of Contents <p><div class="lev1 toc-item"><a href="#Test-for-Binder-v2" data-toc-modified-id="Test-for-Binder-v2-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Test for Binder v2</a></div><div class="lev2 toc-item"><a href="#Sys-&amp;-OS-modules" data-toc-modified-id="Sys-&amp;-OS-modules-11"><span class="toc-item-num">1.1&nbsp;&nbsp;</span>Sys &amp; OS modules</a></div><div class="lev2 toc-item"><a href="#Importing-a-file" data-toc-modified-id="Importing-a-file-12"><span class="toc-item-num">1.2&nbsp;&nbsp;</span>Importing a file</a></div><div class="lev2 toc-item"><a href="#Conclusion" data-toc-modified-id="Conclusion-13"><span class="toc-item-num">1.3&nbsp;&nbsp;</span>Conclusion</a></div> # Test for Binder v2 ## Sys & OS modules Step1: Importing a file I will import this file from the agreg/ sub-folder.
Python Code: import sys print("Path (sys.path):") for f in sys.path: print(f) import os print("Current directory:") print(os.getcwd()) Explanation: Table of Contents <p><div class="lev1 toc-item"><a href="#Test-for-Binder-v2" data-toc-modified-id="Test-for-Binder-v2-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Test for Binder v2</a></div><div class="lev2 toc-item"><a href="#Sys-&amp;-OS-modules" data-toc-modified-id="Sys-&amp;-OS-modules-11"><span class="toc-item-num">1.1&nbsp;&nbsp;</span>Sys &amp; OS modules</a></div><div class="lev2 toc-item"><a href="#Importing-a-file" data-toc-modified-id="Importing-a-file-12"><span class="toc-item-num">1.2&nbsp;&nbsp;</span>Importing a file</a></div><div class="lev2 toc-item"><a href="#Conclusion" data-toc-modified-id="Conclusion-13"><span class="toc-item-num">1.3&nbsp;&nbsp;</span>Conclusion</a></div> # Test for Binder v2 ## Sys & OS modules End of explanation import agreg.memoisation Explanation: Importing a file I will import this file from the agreg/ sub-folder. End of explanation
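The import agreg.memoisation line works here because the notebook's working directory is on sys.path (and, on Python 3, agreg can act as a namespace package). When the file lives somewhere that is not on sys.path, importlib can load it by explicit path. The sketch below assumes Python 3 and the same agreg/memoisation.py layout as in the notebook.

```python
import importlib.util

spec = importlib.util.spec_from_file_location("memoisation", "agreg/memoisation.py")
memoisation = importlib.util.module_from_spec(spec)
spec.loader.exec_module(memoisation)

# the module is now usable like a normal import, e.g. dir(memoisation)
```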
6,472
Given the following text description, write Python code to implement the functionality described below step by step Description: Check unuploaded files Three possible checks Step2: 1. Load data Step4: 2. Extract emptiness statistic from Import records Step5: 3. Load files Step6: 4. Get results Processed files that weren't matched at all Step7: Files that were matched but have blank forms Step8: Records that don't match harvested CSV files
Python Code: form = None target = None output_dir = None # event = None if target is not None: assert target in ["no_matching_records", "matching_records_blank", "orphaned_records"] import pandas as pd import os import redcap as rc import numpy as np import os import sys sys.path.append('/sibis-software/python-packages/') import sibispy from sibispy import sibislogger as slog from IPython.display import display pd.set_option("display.max_rows", 999) pd.set_option("display.max_columns", 500) Explanation: Check unuploaded files Three possible checks: no_matching_records: Files that should have a matching import record, but don't matching_records_blank: Files that have a matching import record, but no data on the corresponding form (happens when another processed file has been uploaded to the record, but not this file) orphaned_records: Records with an ID that isn't matched by any file. (Sanity check.) End of explanation session = sibispy.Session() if not session.configure(): sys.exit() slog.init_log(None, None, 'QC: Check all harvester-prepared CSVs are uploaded', 'check_unuploaded_files', None) slog.startTimer1() # Setting specific constants for this run of QC api = session.connect_server('import_laptops', True) primary_key = api.def_field meta = api.export_metadata(format='df') form_names = meta.form_name.unique().tolist() if form is not None: # FIXME: This is incorrect - needs to reflect the short_to_long, etc. if not form in form_names: raise KeyError("{} not among Import Project forms".format(form)) form_names_subset = [form] else: form_names_subset = form_names # # Taken from http://pycap.readthedocs.io/en/latest/deep.html#dealing-with-large-exports # # and adapted to scope down to forms # def chunked_export(project, form, chunk_size=100, verbose=True): # def chunks(l, n): # Yield successive n-sized chunks from list l # for i in range(0, len(l), n): # yield l[i:i+n] # record_list = project.export_records(fields=[project.def_field]) # records = [r[project.def_field] for r in record_list] # #print "Total records: %d" % len(records) # try: # response = None # record_count = 0 # for record_chunk in chunks(records, chunk_size): # record_count = record_count + chunk_size # #print record_count # chunked_response = project.export_records(records=record_chunk, # fields=[project.def_field], # forms=[form], # format='df', # df_kwargs={'low_memory': False}) # if response is not None: # response = pd.concat([response, chunked_response], axis=0) # else: # response = chunked_response # except rc.RedcapError: # msg = "Chunked export failed for chunk_size={:d}".format(chunk_size) # raise ValueError(msg) # else: # return response # def load_form(api, form_name, verbose=True): # if verbose: # print(form_name) # # 1. Standard load attempt # # try: # # print "Trying standard export" # # return api.export_records(fields=[api.def_field], # # forms=[form_name], # # format='df', # # df_kwargs={'low_memory': False}) # # except (ValueError, rc.RedcapError, pd.io.common.EmptyDataError): # # pass # try: # print("Trying chunked export, 5000 records at a time") # return chunked_export(api, form_name, 5000) # except (ValueError, rc.RedcapError, pd.io.common.EmptyDataError): # pass # # 2. Chunked load with chunk size of 1000 # try: # print("Trying chunked export, 1000 records at a time") # return chunked_export(api, form_name, 1000) # except (ValueError, rc.RedcapError, pd.io.common.EmptyDataError): # pass # # 2. 
Chunked load with default chunk size # try: # print("Trying chunked export, default chunk size (100)") # return chunked_export(api, form_name, 100) # except (ValueError, rc.RedcapError, pd.io.common.EmptyDataError): # pass # # 3. Chunked load with tiny chunk # try: # print("Trying chunked export with tiny chunks (10)") # return chunked_export(api, form_name, 10) # except (ValueError, rc.RedcapError, pd.io.common.EmptyDataError): # print("Giving up") # return None # def load_form_with_primary_key(api, form_name, verbose=True): # df = load_form(api, form_name, verbose) # if df is not None: # return df.set_index(api.def_field) from load_utils import load_form_with_primary_key all_data = {form_name: load_form_with_primary_key(api, form_name) for form_name in form_names_subset} Explanation: 1. Load data End of explanation def count_non_nan_rowwise(df, form_name=None, drop_column=None): A more efficient method of checking non-NaN values # 1. check complete if form_name: complete_field = form_name + '_complete' if drop_columns: drop_columns.append(complete_field) else: drop_columns = [complete_field] if drop_columns is None: drop_columns = [] # 2. count up NaNs return df.drop(drop_columns, axis=1).notnull().sum(axis=1) # Apply to DF to get all empty records def set_emptiness_flags(row, form_name, drop_columns=None): # 1. check complete complete_field = form_name + '_complete' #is_incomplete = row[complete_field] == 0 # TODO: maybe complete_field not in [1, 2] to catch NaNs? # 2. count up NaNs if drop_columns: drop_columns.append(complete_field) else: drop_columns = [complete_field] # NOTE: This will only work for a Series # NOTE: For a full Data Frame, use df.drop(drop_columns, axis=1).notnull().sum(axis=1) non_nan_count = row.drop(drop_columns).notnull().sum() return pd.Series({'completion_status': row[complete_field], 'non_nan_count': non_nan_count}) emptiness_df = {form_name: all_data[form_name].apply(lambda x: set_emptiness_flags(x, form_name), axis=1) for form_name in all_data.keys() if all_data[form_name] is not None} #all_data['recovery_questionnaire'].apply(lambda x: set_emptiness_flags(x, 'recovery_questionnaire'), axis=1) for form_name in emptiness_df.keys(): emptiness_df[form_name]['form'] = form_name all_forms_emptiness = pd.concat(emptiness_df.values()) all_forms_emptiness.shape Explanation: 2. 
Extract emptiness statistic from Import records End of explanation short_to_long = { # Forms for Arm 1: Standard Protocol 'dd100': 'delayed_discounting_100', 'dd1000': 'delayed_discounting_1000', 'pasat': 'paced_auditory_serial_addition_test_pasat', 'stroop': 'stroop', 'ssaga_youth': 'ssaga_youth', 'ssaga_parent': 'ssaga_parent', 'youthreport1': 'youth_report_1', 'youthreport1b': 'youth_report_1b', 'youthreport2': 'youth_report_2', 'parentreport': 'parent_report', 'mrireport': 'mri_report', 'plus': 'participant_last_use_summary', 'myy': 'midyear_youth_interview', 'lssaga1_youth': 'limesurvey_ssaga_part_1_youth', 'lssaga2_youth': 'limesurvey_ssaga_part_2_youth', 'lssaga3_youth': 'limesurvey_ssaga_part_3_youth', 'lssaga4_youth': 'limesurvey_ssaga_part_4_youth', 'lssaga1_parent': 'limesurvey_ssaga_part_1_parent', 'lssaga2_parent': 'limesurvey_ssaga_part_2_parent', 'lssaga3_parent': 'limesurvey_ssaga_part_3_parent', 'lssaga4_parent': 'limesurvey_ssaga_part_4_parent', # Forms for Arm 3: Sleep Studies 'sleepeve': 'sleep_study_evening_questionnaire', 'sleeppre': 'sleep_study_presleep_questionnaire', 'sleepmor': 'sleep_study_morning_questionnaire', # Forms for Recovery project 'recq': 'recovery_questionnaire', # Forms for UCSD 'parent': 'ssaga_parent', 'youth': 'ssaga_youth', 'deldisc': 'delayed_discounting' } files_df = pd.DataFrame(columns=["file", "path", "form"]) records = [] record_paths = [] for root, subdirs, files in os.walk('/fs/storage/laptops/imported'): csv_files = [f for f in files if (f.endswith(".csv") and not f.endswith("-fields.csv"))] if csv_files: folder_df = pd.DataFrame(columns=["file", "path", "form"]) folder_df['file'] = csv_files folder_df['path'] = [root + "/" + f for f in csv_files] root_parts = root.split('/') current_folder = root_parts[-1] try: form = short_to_long[current_folder] if form not in form_names_subset: continue else: folder_df['form'] = form files_df = pd.concat([files_df, folder_df]) except KeyError as e: continue files_df.set_index("path", inplace=True) def getRecordIDFromFile(row): import re bare_file = re.sub(r"\.csv$", "", row["file"]) bare_file = re.sub(r"^\s+|\s+$", "", bare_file) if row["form"] == "delayed_discounting": bare_file = re.sub("-1000?$", "", bare_file) return bare_file files_df["record_id"] = files_df.apply(getRecordIDFromFile, axis=1) files_df.head() def fixFormName(row): import re if row["form"] == "delayed_discounting": if re.search(r"-100\.csv$", row["file"]): return "delayed_discounting_100" elif re.search(r"-1000\.csv$", row["file"]): return "delayed_discounting_1000" else: return "delayed_discounting" else: return row["form"] files_df["form"] = files_df.apply(fixFormName, axis=1) files_in_redcap = pd.merge(files_df.reset_index(), all_forms_emptiness.reset_index(), on=["record_id", "form"], how="outer") files_in_redcap.head() if output_dir is not None: files_in_redcap.to_csv(os.path.join(output_dir, "all_files_upload_status.csv"), index=False) Explanation: 3. Load files End of explanation if (target is None) or (target == "no_matching_records"): unmatched_files = files_in_redcap.loc[files_in_redcap.completion_status.isnull()] if output_dir is not None: unmatched_files.to_csv(os.path.join(output_dir, "no_matching_records.csv"), index=False) display(unmatched_files) Explanation: 4. 
Get results Processed files that weren't matched at all End of explanation def check_if_file_empty(row): contents = pd.read_csv(row['path']) return contents.dropna(axis="columns").shape[1] if (target is None) or (target == "matching_records_blank"): matched_blank_index = files_in_redcap['path'].notnull() & (files_in_redcap['non_nan_count'] == 0) files_in_redcap.loc[matched_blank_index, 'file_value_count'] = ( files_in_redcap .loc[matched_blank_index] .apply(check_if_file_empty, axis=1)) matched_blank = files_in_redcap.loc[matched_blank_index & (files_in_redcap['file_value_count'] > 0)] if output_dir is not None: matched_blank.to_csv(os.path.join(output_dir, "matched_blank.csv"), index=False) display(matched_blank) Explanation: Files that were matched but have blank forms End of explanation if (target is None) or (target == "orphaned_records"): orphaned_records = files_in_redcap.loc[files_in_redcap['path'].isnull() & (files_in_redcap['non_nan_count'] > 0)] if (output_dir is not None) and (orphaned_records.shape[0] > 0): orphaned_records.to_csv(os.path.join(output_dir, "orphaned_records.csv"), index=False) display(orphaned_records) Explanation: Records that don't match harvested CSV files End of explanation
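For quick triage it can help to summarize all three checks from the merged files_in_redcap frame in one pass. The helper below is a minimal sketch (the summarize_checks name is hypothetical); it assumes files_in_redcap carries the path, completion_status and non_nan_count columns produced above, and it ignores the extra file_value_count refinement used for the blank-form check.

def summarize_checks(files_in_redcap):
    # Files with no matching import record (no completion status was found)
    no_match = files_in_redcap['completion_status'].isnull()
    # Files whose matching record exists but holds no form data
    blank_form = files_in_redcap['path'].notnull() & (files_in_redcap['non_nan_count'] == 0)
    # Records with data but no harvested CSV file behind them
    orphaned = files_in_redcap['path'].isnull() & (files_in_redcap['non_nan_count'] > 0)
    # Return the counts so callers can decide what to display or export
    return {'no_matching_records': int(no_match.sum()),
            'matching_records_blank': int(blank_form.sum()),
            'orphaned_records': int(orphaned.sum())}

# Example usage: print(summarize_checks(files_in_redcap))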
6,473
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Ocnbgchem MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport 3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks 4. Key Properties --&gt; Transport Scheme 5. Key Properties --&gt; Boundary Forcing 6. Key Properties --&gt; Gas Exchange 7. Key Properties --&gt; Carbon Chemistry 8. Tracers 9. Tracers --&gt; Ecosystem 10. Tracers --&gt; Ecosystem --&gt; Phytoplankton 11. Tracers --&gt; Ecosystem --&gt; Zooplankton 12. Tracers --&gt; Disolved Organic Matter 13. Tracers --&gt; Particules 14. Tracers --&gt; Dic Alkalinity 1. Key Properties Ocean Biogeochemistry key properties 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Model Type Is Required Step7: 1.4. Elemental Stoichiometry Is Required Step8: 1.5. Elemental Stoichiometry Details Is Required Step9: 1.6. Prognostic Variables Is Required Step10: 1.7. Diagnostic Variables Is Required Step11: 1.8. Damping Is Required Step12: 2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport Time stepping method for passive tracers transport in ocean biogeochemistry 2.1. Method Is Required Step13: 2.2. Timestep If Not From Ocean Is Required Step14: 3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks Time stepping framework for biology sources and sinks in ocean biogeochemistry 3.1. Method Is Required Step15: 3.2. Timestep If Not From Ocean Is Required Step16: 4. Key Properties --&gt; Transport Scheme Transport scheme in ocean biogeochemistry 4.1. Type Is Required Step17: 4.2. Scheme Is Required Step18: 4.3. Use Different Scheme Is Required Step19: 5. Key Properties --&gt; Boundary Forcing Properties of biogeochemistry boundary forcing 5.1. Atmospheric Deposition Is Required Step20: 5.2. River Input Is Required Step21: 5.3. Sediments From Boundary Conditions Is Required Step22: 5.4. Sediments From Explicit Model Is Required Step23: 6. Key Properties --&gt; Gas Exchange *Properties of gas exchange in ocean biogeochemistry * 6.1. CO2 Exchange Present Is Required Step24: 6.2. CO2 Exchange Type Is Required Step25: 6.3. O2 Exchange Present Is Required Step26: 6.4. O2 Exchange Type Is Required Step27: 6.5. DMS Exchange Present Is Required Step28: 6.6. DMS Exchange Type Is Required Step29: 6.7. N2 Exchange Present Is Required Step30: 6.8. N2 Exchange Type Is Required Step31: 6.9. N2O Exchange Present Is Required Step32: 6.10. N2O Exchange Type Is Required Step33: 6.11. CFC11 Exchange Present Is Required Step34: 6.12. CFC11 Exchange Type Is Required Step35: 6.13. CFC12 Exchange Present Is Required Step36: 6.14. CFC12 Exchange Type Is Required Step37: 6.15. SF6 Exchange Present Is Required Step38: 6.16. SF6 Exchange Type Is Required Step39: 6.17. 13CO2 Exchange Present Is Required Step40: 6.18. 13CO2 Exchange Type Is Required Step41: 6.19. 14CO2 Exchange Present Is Required Step42: 6.20. 14CO2 Exchange Type Is Required Step43: 6.21. Other Gases Is Required Step44: 7. Key Properties --&gt; Carbon Chemistry Properties of carbon chemistry biogeochemistry 7.1. Type Is Required Step45: 7.2. PH Scale Is Required Step46: 7.3. 
Constants If Not OMIP Is Required Step47: 8. Tracers Ocean biogeochemistry tracers 8.1. Overview Is Required Step48: 8.2. Sulfur Cycle Present Is Required Step49: 8.3. Nutrients Present Is Required Step50: 8.4. Nitrous Species If N Is Required Step51: 8.5. Nitrous Processes If N Is Required Step52: 9. Tracers --&gt; Ecosystem Ecosystem properties in ocean biogeochemistry 9.1. Upper Trophic Levels Definition Is Required Step53: 9.2. Upper Trophic Levels Treatment Is Required Step54: 10. Tracers --&gt; Ecosystem --&gt; Phytoplankton Phytoplankton properties in ocean biogeochemistry 10.1. Type Is Required Step55: 10.2. Pft Is Required Step56: 10.3. Size Classes Is Required Step57: 11. Tracers --&gt; Ecosystem --&gt; Zooplankton Zooplankton properties in ocean biogeochemistry 11.1. Type Is Required Step58: 11.2. Size Classes Is Required Step59: 12. Tracers --&gt; Disolved Organic Matter Disolved organic matter properties in ocean biogeochemistry 12.1. Bacteria Present Is Required Step60: 12.2. Lability Is Required Step61: 13. Tracers --&gt; Particules Particulate carbon properties in ocean biogeochemistry 13.1. Method Is Required Step62: 13.2. Types If Prognostic Is Required Step63: 13.3. Size If Prognostic Is Required Step64: 13.4. Size If Discrete Is Required Step65: 13.5. Sinking Speed If Prognostic Is Required Step66: 14. Tracers --&gt; Dic Alkalinity DIC and alkalinity properties in ocean biogeochemistry 14.1. Carbon Isotopes Is Required Step67: 14.2. Abiotic Carbon Is Required Step68: 14.3. Alkalinity Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'nasa-giss', 'sandbox-1', 'ocnbgchem') Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem MIP Era: CMIP6 Institute: NASA-GISS Source ID: SANDBOX-1 Topic: Ocnbgchem Sub-Topics: Tracers. Properties: 65 (37 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:21 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport 3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks 4. Key Properties --&gt; Transport Scheme 5. Key Properties --&gt; Boundary Forcing 6. Key Properties --&gt; Gas Exchange 7. Key Properties --&gt; Carbon Chemistry 8. Tracers 9. Tracers --&gt; Ecosystem 10. Tracers --&gt; Ecosystem --&gt; Phytoplankton 11. Tracers --&gt; Ecosystem --&gt; Zooplankton 12. Tracers --&gt; Disolved Organic Matter 13. Tracers --&gt; Particules 14. Tracers --&gt; Dic Alkalinity 1. Key Properties Ocean Biogeochemistry key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of ocean biogeochemistry model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean biogeochemistry model code (PISCES 2.0,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.model_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Geochemical" # "NPZD" # "PFT" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.3. Model Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of ocean biogeochemistry model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Fixed" # "Variable" # "Mix of both" # TODO - please enter value(s) Explanation: 1.4. Elemental Stoichiometry Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe elemental stoichiometry (fixed, variable, mix of the two) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.5. Elemental Stoichiometry Details Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe which elements have fixed/variable stoichiometry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.6. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of all prognostic tracer variables in the ocean biogeochemistry component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.7. Diagnostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of all diagnotic tracer variables in the ocean biogeochemistry component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.damping') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.8. Damping Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any tracer damping used (such as artificial correction or relaxation to climatology,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "use ocean model transport time step" # "use specific time step" # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport Time stepping method for passive tracers transport in ocean biogeochemistry 2.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time stepping framework for passive tracers End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 2.2. Timestep If Not From Ocean Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Time step for passive tracers (if different from ocean) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "use ocean model transport time step" # "use specific time step" # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks Time stepping framework for biology sources and sinks in ocean biogeochemistry 3.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time stepping framework for biology sources and sinks End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.2. Timestep If Not From Ocean Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Time step for biology sources and sinks (if different from ocean) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Offline" # "Online" # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Transport Scheme Transport scheme in ocean biogeochemistry 4.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of transport scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Use that of ocean model" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 4.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Transport scheme used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.3. Use Different Scheme Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Decribe transport scheme if different than that of ocean model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "from file (climatology)" # "from file (interannual variations)" # "from Atmospheric Chemistry model" # TODO - please enter value(s) Explanation: 5. Key Properties --&gt; Boundary Forcing Properties of biogeochemistry boundary forcing 5.1. Atmospheric Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how atmospheric deposition is modeled End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "from file (climatology)" # "from file (interannual variations)" # "from Land Surface model" # TODO - please enter value(s) Explanation: 5.2. River Input Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how river input is modeled End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.3. Sediments From Boundary Conditions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List which sediments are speficied from boundary condition End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.4. 
Sediments From Explicit Model Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List which sediments are speficied from explicit sediment model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6. Key Properties --&gt; Gas Exchange *Properties of gas exchange in ocean biogeochemistry * 6.1. CO2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is CO2 gas exchange modeled ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OMIP protocol" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 6.2. CO2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe CO2 gas exchange End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.3. O2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is O2 gas exchange modeled ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OMIP protocol" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 6.4. O2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe O2 gas exchange End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.5. DMS Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is DMS gas exchange modeled ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.6. DMS Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify DMS gas exchange scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.7. N2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is N2 gas exchange modeled ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.8. 
N2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify N2 gas exchange scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.9. N2O Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is N2O gas exchange modeled ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.10. N2O Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify N2O gas exchange scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.11. CFC11 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is CFC11 gas exchange modeled ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.12. CFC11 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify CFC11 gas exchange scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.13. CFC12 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is CFC12 gas exchange modeled ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.14. CFC12 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify CFC12 gas exchange scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.15. SF6 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is SF6 gas exchange modeled ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.16. SF6 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify SF6 gas exchange scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.17. 13CO2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is 13CO2 gas exchange modeled ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.18. 13CO2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify 13CO2 gas exchange scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.19. 14CO2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is 14CO2 gas exchange modeled ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.20. 14CO2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify 14CO2 gas exchange scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.21. Other Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any other gas exchange End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OMIP protocol" # "Other protocol" # TODO - please enter value(s) Explanation: 7. Key Properties --&gt; Carbon Chemistry Properties of carbon chemistry biogeochemistry 7.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how carbon chemistry is modeled End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Sea water" # "Free" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 7.2. PH Scale Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If NOT OMIP protocol, describe pH scale. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.3. Constants If Not OMIP Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If NOT OMIP protocol, list carbon chemistry constants. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Tracers Ocean biogeochemistry tracers 8.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of tracers in ocean biogeochemistry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 8.2. Sulfur Cycle Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is sulfur cycle modeled ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Nitrogen (N)" # "Phosphorous (P)" # "Silicium (S)" # "Iron (Fe)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 8.3. Nutrients Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List nutrient species present in ocean biogeochemistry model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Nitrates (NO3)" # "Amonium (NH4)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 8.4. Nitrous Species If N Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If nitrogen present, list nitrous species. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Dentrification" # "N fixation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 8.5. Nitrous Processes If N Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If nitrogen present, list nitrous processes. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Tracers --&gt; Ecosystem Ecosystem properties in ocean biogeochemistry 9.1. Upper Trophic Levels Definition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Definition of upper trophic level (e.g. based on size) ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.2. Upper Trophic Levels Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Define how upper trophic level are treated End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Generic" # "PFT including size based (specify both below)" # "Size based only (specify below)" # "PFT only (specify below)" # TODO - please enter value(s) Explanation: 10. Tracers --&gt; Ecosystem --&gt; Phytoplankton Phytoplankton properties in ocean biogeochemistry 10.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of phytoplankton End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Diatoms" # "Nfixers" # "Calcifiers" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10.2. Pft Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Phytoplankton functional types (PFT) (if applicable) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Microphytoplankton" # "Nanophytoplankton" # "Picophytoplankton" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10.3. Size Classes Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Phytoplankton size classes (if applicable) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Generic" # "Size based (specify below)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11. Tracers --&gt; Ecosystem --&gt; Zooplankton Zooplankton properties in ocean biogeochemistry 11.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of zooplankton End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Microzooplankton" # "Mesozooplankton" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.2. Size Classes Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Zooplankton size classes (if applicable) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 12. Tracers --&gt; Disolved Organic Matter Disolved organic matter properties in ocean biogeochemistry 12.1. Bacteria Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there bacteria representation ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Labile" # "Semi-labile" # "Refractory" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12.2. Lability Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe treatment of lability in dissolved organic matter End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.particules.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Diagnostic" # "Diagnostic (Martin profile)" # "Diagnostic (Balast)" # "Prognostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13. Tracers --&gt; Particules Particulate carbon properties in ocean biogeochemistry 13.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is particulate carbon represented in ocean biogeochemistry? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "POC" # "PIC (calcite)" # "PIC (aragonite" # "BSi" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.2. Types If Prognostic Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If prognostic, type(s) of particulate matter taken into account End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "No size spectrum used" # "Full size spectrum" # "Discrete size classes (specify which below)" # TODO - please enter value(s) Explanation: 13.3. Size If Prognostic Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 13.4. Size If Discrete Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic and discrete size, describe which size classes are used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Function of particule size" # "Function of particule type (balast)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.5. Sinking Speed If Prognostic Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, method for calculation of sinking speed of particules End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "C13" # "C14)" # TODO - please enter value(s) Explanation: 14. Tracers --&gt; Dic Alkalinity DIC and alkalinity properties in ocean biogeochemistry 14.1. Carbon Isotopes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which carbon isotopes are modelled (C13, C14)? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 14.2. Abiotic Carbon Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is abiotic carbon modelled ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Prognostic" # "Diagnostic)" # TODO - please enter value(s) Explanation: 14.3. Alkalinity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is alkalinity modelled ? End of explanation
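As a minimal illustration of how these property cells are meant to be completed (not part of the official template), the snippet below fills in two of the properties defined earlier; the values are placeholders and should be replaced with the real model description.

# Hypothetical example of completing two property cells (placeholder values)
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
DOC.set_value("Example ocean biogeochemistry model")

# Enumerations take one of the valid choices listed in the cell comments
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
DOC.set_value("NPZD")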
6,474
Given the following text description, write Python code to implement the functionality described below step by step Description: DICTIONARIES In Python, a dictionary is an unordered collection of key - value pairs where both the key and the value are Python objects. The elements of a dictionary are accessed through the key. In other languages they are known as hash tables. <br /> CREATING A DICTIONARY As with lists, it is possible to define a dictionary directly with the members it will contain, or to initialize an empty dictionary and then add the values. Dictionaries are created using braces { }. The elements of the dictionary have the form key Step1: The key of a dictionary can be any variable of an immutable type Step2: Another way to create a dictionary is to declare it empty and then add the values; it is declared as a pair of braces with nothing in between, and then values are assigned directly to the indices. Step3: <br /> ACCESSING THE ELEMENTS OF A DICTIONARY To access the value associated with a given key, we use the bracket notation [ ] used with lists, but with the chosen key instead of the index. Step4: Keys are unique within a dictionary, meaning a dictionary cannot contain the same key twice; if a value is assigned to an existing key, the previous value is replaced. Step5: To avoid these errors, we can use the in operator, which checks whether an element is in the dictionary, or the get method, which returns the value None if the key is not in the dictionary. Step6: To remove elements from a dictionary, the pop method is used Step7: <BR/> METHODS Step8: <BR/> THE ZIP FUNCTION There are some differences between Python versions 2.7 and 3.5 Step9: We can also do a reverse zip to separate a list of tuples into several sequences.
Python Code: # Definición de un diccionario vacío, los diccionarios se engloban mediante llaves {} dic = { } # Nos devuelve que el diccionario 'dic' está vacío bool(dic) # En los diccionarios siempre debe definirse una clave y un valor # Sintáxis: dic = {clave: 'valor', clave: 'valor', clave: 'valor'} dic = {1:'Lunes', 2:'Martes', 3:'Miercoles' } dic Explanation: DICCIONARIOS En Python, un diccionario es una colección no ordenada de pares clave - valor donde la clave y el valor son objetos Python. El acceso a los elementos de un diccionario se realiza a través de la clave. En otros lenguajes se les conoce como tablas hash. <br /> CREAR UN DICCIONARIO De la misma forma que con listas, es posible definir un diccionario directamente con los miembros que va a contener, o bien inicializar el diccionario vacío y luego agregar los valores. Los diccionarios se crean utilizando llaves { }. Los elementos del diccionario son de la forma clave : valor y cada uno de los elementos se separan por comas. End of explanation # El diccionario 'd1' tiene como clave cadenas y como valor enteros o tuplas d1 = {'Lunes' : 1, 'Martes' : 2 , 'Finde' : (6,7) } print(d1) # Aunque los elementos de un diccionario se defninan de manera desordenada, al llamar a dicho diccionario estos # siempre aparecerán de manera ordenada mediante su clave. d2 = {7:'Domingo', 1:'Lunes', 3:'Miércoles', 6:'Sábado', 5:'Viernes', 2:'Martes', 4:'Jueves'} d2 Explanation: La clave de un diccionario puede ser cualquier variable de tipo inmutable: cadenas enteros tuplas (con valores inmutables en sus miembros), etc. Los valores de un diccionario pueden ser de cualquier tipo: listas, cadenas, tuplas, otros diccionarios, objetos, etc. End of explanation # Se declara un diccionario vacío dic = {} # Se añaden elementos a dicho diccionario otorgando un valor a la clave dic[1] = 'a' dic[2] = 'b' dic[3] = 'c' dic Explanation: Otra forma de crear un diccionario es declararlo vacío y luego añadir los valores, se le declara como un par de llaves sin nada en medio, y luego se asignan valores directamente a los índices. End of explanation dic = {'x': 1, 'y': 2, 'z': 4} dic['y'] Explanation: <br /> ACCESO A LOS ELEMENTOS DE UN DICCIONARIO Para acceder al valor asociado a una determinada clave, utilizamos la notación de corchetes [ ] usada en listas, pero utilizando la clave elegida en lugar del índice. End of explanation # Definimos un nuevo valor para la clave 'y', que lo que hará es borrar el anterior valor y sustituirlo por el nuevo dic['y'] = 99 # Añade una nueva clave, no existente, con su valor correspondiente dic['zz'] = 'Hola' dic # Si se intenta acceder a una clave no existente, Python nos arroja un error dic['xx'] Explanation: Las claves son únicas dentro de un diccionario, es decir que no puede haber un diccionario que tenga dos veces la misma clave, si se asigna un valor a una clave ya existente, se reemplaza el valor anterior. 
End of explanation # dic = {'x': 1, 'y': 2, 'z': 4} # Preguntamos por la clave y nos devuelve un valor booleano 'x' not in dic # Mediante GET podemos realizar una consulta sobre una clave para conocer su valor # dic.get('x') # Mediante la función TYPE, podemos averiguar si está presente o no el diccionario o de que tipo es el valor # type(dic.get('x')) # Si la clave no está en el diccionario nos muestra una variable de tipo NONE type(dic.get('xx')) Explanation: Para evitar estos errores, podemos usar la función in , que comprueba si un elemento está en el diccionario o utilizar el método get , que devuelve el valor None si la clave no está en el diccionario. End of explanation dic borrar = dic.pop('zz') borrar dic # NOTA DE AUTOR # Método para capturar un error de tipo KeyError en Python try: dic['xx'] except KeyError: # capturamos la excepción print("Cuidado, no existe la clave" + " 'xx' " + "en el diccionario") Explanation: Para eliminar elementos de un diccionario se utiliza el método pop: End of explanation dic # Uso del método KEYS para obtener el valor de las claves de un diccionario dic.keys() # Uso del método VALUES para obtener el los valores de un diccionario dic.values() # Para fusionar diccionarios utilizamos el método update. d1 d2 d1.update(d2) d1 Explanation: <BR/> MÉTODOS: KEYS, VALUES Y UPDATE El método keys devuelve una lista con las claves del diccionario. El método values devuelve una lista con los valores del diccionario. End of explanation # Vamos a ver un ejemplo en Python 3.5 # Definimos una lista llamada 's1' s1 = [1, 2, 3, 4] # Definimos una lista llamada 's2' s2 = ['primavera', 'otoño', 'verano', 'invierno'] # Uso de la función ZIP para unir ambas listas z = zip(s1, s2) # Creamos una lista a partir de la variable 'z' que a su vez es un merge de las listas 's1' y 's2' # Lo que nos mostrará será un solapamiento de ambas listas, alternando entre valores de la primera y la segunda, # como si de una cremallera se tratase s_zip = list(z) # Guardamos la lista en una variable para poder usarla s_zip # Mostramos la lista contenida en la variable s_zip # Preguntamos por el primero de los valores de la lista s_zip[0] Explanation: <BR/> LA FUNCIÓN ZIP Existen algunas diferencias entre las versiones 2.7 y 3.5 de Python: Python 2.7: La función zip permite crear una lista de tuplas a partir de los elementos de otras secuencias. Python 3.5: La función zip permite crear un objeto iterable a partir de los elementos de otras secuencias. End of explanation # Con los datos del ejemplo anterior s1 = [1, 2, 3 , 4] s2 = ['primavera', 'verano', 'otoño', 'invierno' ] z = zip(s1, s2) s_zip = list(z) s_zip # Definimos dos variables 'c1' y 'c2' para desentrelazar la lista contenida dentro de la variable 's_zip' # que a su vez contiene una lista de tuplas c1, c2 = zip(*s_zip) # Mostramos el resultado c1, c2 # A partir de secuencias también es posible crear diccionarios # Definimos la variable 'map' como un diccionario usando la función dict() map = dict(zip(s1,c2)) map Explanation: También podemos hacer zip inverso para separar una lista de tuplas en varias secuencias. End of explanation
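As a short follow-up sketch (not part of the original notebook), the zip-to-dictionary pattern above can be written end to end; the variable names below are illustrative only.

# Build a dictionary from two parallel sequences and read it back safely
numbers = [1, 2, 3, 4]
seasons = ['primavera', 'verano', 'otoño', 'invierno']
season_map = dict(zip(numbers, seasons))
print(season_map.get(2))         # 'verano'
print(season_map.get(9, 'n/a'))  # a default avoids a KeyError
# Reverse the pairing: values become keys
inverse_map = {value: key for key, value in season_map.items()}
print(inverse_map['invierno'])   # 4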
6,475
Given the following text problem statement, write Python code to implement the functionality described below in the problem statement Problem: view a grid of sample images from the dataset (for example, 60 images in a 6x10 grid)
Python Code: # Display 60 images from the current batch in a 6x10 grid
figure = plt.figure()
num_of_images = 60
for index in range(1, num_of_images + 1):
    plt.subplot(6, 10, index)
    plt.axis('off')
    # Convert each image tensor to a NumPy array and squeeze it to 2D for imshow
    plt.imshow(images[index].numpy().squeeze(), cmap='gray_r')
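A slightly more defensive variant is sketched below; it is self-contained with random placeholder data because the images batch normally comes from a data loader elsewhere in the notebook (an assumption here).

import numpy as np
import matplotlib.pyplot as plt

images = np.random.rand(60, 28, 28)   # placeholder batch; replace with the real image tensors
figure = plt.figure(figsize=(10, 6))
num_of_images = min(60, len(images))
for index in range(num_of_images):
    plt.subplot(6, 10, index + 1)     # subplot positions are 1-based
    plt.axis('off')
    plt.imshow(images[index].squeeze(), cmap='gray_r')
plt.show()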
6,476
Given the following text description, write Python code to implement the functionality described below step by step Description: Overview Installation instructions for Anaconda and Python for .NET Examples presented here are based on Python 3.5 Examples are shown in jupyter notebook application. You can use the same code in the spyder.exe (winPython IDE) You can run examples from the notebook itself (this browser window), or use spyder scripts <font color='blue'>Examples shown in early parts of this notebook are simple, barebone scripts in order to explain the communication process with LightTools. A more flexible, easy to use function library project is decribed later in this presentation. Project name Step1: Sending a message to LightTools The message will appear in the Console Window, and the Macro Output tab Step2: Sending commands to LightTools The command below will Step3: Send a command with Coord3() function The coord3() function will create a string in the format "XYZ x,y,z" Step4: Setting and getting data Following example shows how to use DbSet() and DbGet() functions to access data Set the radius of the sphere primitive to 10 Get the radius to test whether the "set" worked correctly Step5: Select, Copy, Boolean, Move Make a cylinder Subtract the cylinder from sphere Move the resulting solid to XYZ 0,10,10 If you need more controls with images Step6: Access data in grids (1D and 2D) Access to data in grids is a slightly different process There are two types of data grids 1D and 2D When accessing grid data, we need to use the two optional arguments in the DbGet() and DbSet() functions. Typically we omit these arguments for general data access Note that LTPython project, described later in this presentation provides more flexible data access methods. This is an illustration of barebone script code Here's an example of getting the spectral distribution from a receiver Step7: Here's an example of getting mesh data from a receiver This example shows how to access individual cell values Typically, you can use the GetMeshData() function described later in this document to get the data for a given mesh in a single call Note that LTPython project, described later in this presentation provides more flexible data access methods. This is an illustration of barebone script code Step9: Writing and calling functions This is a function to retrieve data from a receiver mesh Get the data from the entire mesh in one call, without having to iterate through each cell The function below also returns some other mesh parameters such as the dimensions and bounds Notice also that it includes help strings (known as Doc Strings) Step10: Here's how we call the above function with arguments Get the data Create a 2D grids for x and y, uniformly spaced, for plotting Use 'pcolormesh()' for plotting 'pcolormesh()' is faster than 'pcolor()' Step11: Accessing JumpStart Functions JumpStart library is a set of helper functions available for macro users These functions attempt to simplify the syntax/usage so that you can write macros faster! LTCOM64 includes all JumpStart functions This means you can access both LightTools API (that we looked at so far) and JumpStart functions using a single reference library The example below shows how to create a handle to JumpStart functions Step12: After creating the handle, you can use all the available functions For details on these functions, please refer to Help>Document Library>API Reference Guide Most JumpStart functions support wild card (*) capability i.e. 
you can perform a given operation across multiple objects simultaneously Example below shows how to create a sphere and move it to a specific location, using JumpStart functions Step13: Creating a simple model for a parameter study Make a block, set positon/orientation Change one surface to a "Smooth/specular Mirror" Add a single NSRay Add a dummy plane to capture the reflected ray Step14: Add the mirror, set the optical property and orientation Step15: Add the dummy and NSRay Step16: Now we are ready to change mirror and get the ray data Step17: How to use optimization algorithms from 'scipy' Use of 'minimize' function There are three key parts to setup an optimization problem Initiate the minimize function Use initial variable data call the objective function Apply variable values generated by the minimize to LightTools model Evaluate the metrit function, return the merit function value Import minimize from scipy library We still need the libraries mentioned above in order to connect to LightTools, etc. Import the LTCOM64 library and create a connection to the running LightTools session Step18: Our objective function, called by the minimize, should use the parameters sent from the minimize function Update variables Evaluate the merit function Return the merit function value First, a separate function to evaluate the merit function Step19: Another function to apply variable values Note that we do not skip disabled variables! Step20: Now we can create the objective function 'vardata' is what we get from minimize function for example, if we setup 3 variables, we will get 3 values Step21: Finally, we call the minimize function with arguments We need to pass the initial variable values to the minimize For convenience, we can read the values from LightTools rather than hard coding Make sure to save the original values since we will modify them during optimization Step22: Simple optimization example Open 'Simple2VarOpt.1.lts' X and Y coordinates of the NSRay are variables Merit function is defined for X=0, Y=0 (local intersection coordinates on dummy plane) When optimized, the ray should be placed at the origin of the dummy plane Run the above code blocks in the sequential order to see the optimization process Results will be printed below the last code block, where we invke the minimize function Repeat the optimization for the following models BezierSweptOpt.1.lts Collimate a fan of rays using a collimator built with Swept geometry The second profile of the Swept is 'Bezier', and we try to optimize Bezier parameters Simple2VarOpt_Lens.1.lts Focus a ray fan using a conic lens The curvature and the conic constant are the variables RayGrid_SplinePatch.1.lts Start with a flat mirror, created with a splinepatch lens surface Collimate the ray grid (i.e. perpendicular to the dummy plane) This is a 9-variable problem and Nelder-Mead will require many iterations Try 'powell' (or optengines[2]) <font color='red'>res=minimize(ApplyVarsReturnMF,v0, method=optengines[2] ,options={'disp' Step23: Sample Library Project ("LTPython") This is a library of supporting functions that enable you to write macros more efficiently Shown below are few examples. Refer to the following section on using Spyder, in order to see how to utilize the function library in your scripts In order to run the following examples, you must have the two modules (LTData.py, LTUtilities.py) in your work directory. 
Work directory is shown in the notebook kernel, as shown below Note that the *.ipynb file is the jupyter notebook file we are using here Several data get/set examples Note that the full data access string, via Copy Data Access Name, can be passed to these functions Step24: Several examples of getting and plotting receiver mesh and spectral data Step25: Examples of capturing screenshots Step26: Access to ray path data Consider the following system, where three sources are used to illuminate a dummy plane Assume we want to see ray paths going through the cylinder object Step27: Now we can get the ray path strings, and turn on only the paths that involve the cylinder object Step28: Get receiver ray data that match the selected ray paths Step29: Receiver rays based on Ray Ordinal Number Every ray starts with an ordinal number, based on the ray sequence (1, 2, 3, etc.) Diring ray trace, ordinal number does not change Ordinal number can be used as a unique identifier when filtering ray data on receivers Consider the following ray paths through a lens One can isolate the ray paths using ray path analyzer or a macro approach discussed above However, in this particular case, we want to obtan the ray intersection points on the lens surface A receiver on the lens surface can give the ray intersection points for all rays, not just the ray path shown If the ray ordinal numbers on the receiver attached to the dummy plane are known, then we can match those ray ordinal numbers to the subset of rays on the receiver attached to the lens surface The simplest way to visualize ray intersection points as a point cloud is to generate a ray data source using the subset of rays, and importing that ray source using the local coordinate system on the lens surface Step30: Import the resulting ray source using the local coordinate system on the lens surface RaySource "C Step31: Running Spyder Spyder provides a more versatile code environment with debug capabilities. For regular macro development work, this is the best environment Typical Spyder environment will appear like this How to load the test project into Spyder Unzip the supplied LTPython.zip to your current working directory This is usually C
Python Code: # Import the packages/libraries you typically use import clr import System import numpy as np import matplotlib.pyplot as plt #This forces plots inline in the Spyder/Python Command Console %matplotlib inline #In the line below, make sure the path matches your installation! LTCOM64Path="C:\\Program Files\\Optical Research Associates\\" LTCOM64Path=LTCOM64Path + "LightTools 8.4.0\\Utilities.NET\\LTCOM64.dll" clr.AddReference(LTCOM64Path) from LTCOM64 import LTAPIx lt0=LTAPIx() #If PID capabilities (for multiple LightTools sessions) needed, use the PID for the session you want #lt0.LTPID=12040 lt0.UpdateLTPointer #If no PID is specified, connect to the first running session Explanation: Overview Installation instructions for Anaconda and Python for .NET Examples presented here are based on Python 3.5 Examples are shown in jupyter notebook application. You can use the same code in the spyder.exe (winPython IDE) You can run examples from the notebook itself (this browser window), or use spyder scripts <font color='blue'>Examples shown in early parts of this notebook are simple, barebone scripts in order to explain the communication process with LightTools. A more flexible, easy to use function library project is decribed later in this presentation. Project name: LTPython</font> How to initiate a connection with LightTools Macro examples Passing a message, commands, and data access with plotting (XY Scatter) Mesh data access and plotting (2D Raster) Parameter study example Optimization with Python (scipy.minimize) Simple 2-variable example Focus with conic lens Collimator with Swept/Bezier SplinePatch example Still using COM (pyWin32)? There are many reasons to change your macros to use LTCOM64 library as soon as possible Your macros are stuck with a specific version of a type library e.g. A macro referenced LTAPI will have to be re-programmed if you want to use a new function in LTAPI4 We are not able to provide any temporary updates to the type library if you find an issue/limitation with a given function, etc. This is due to the complexity associated with COM architecture/distribution It's unlikely that we will udate current COM libraries in the future, although the existing functionality will continue to work as long as Windows supports COM Connecting with LightTools using COM (pyWin32) is described at the end Most examples presented here will work "as is", but creating pointers to LTAPI and JumpStart functions will be slightly different Using Python with LightTools - A Quick Start Guide This is a brief introduction for how to use LightTools macros from jupyter Notebook, using Python language and .NET features For full development environment, use winPython distribution (spyder) Jupyter Notebook is an excellent tool for presentations, training, quick macros, etc. 
Install Anaconda https://www.continuum.io/downloads Used version for examples: 4.2.0, 64-bit The Anaconda installation includes the following packages we need Python base package numpy scipy matplotlib (includes pyplot library) jupyter notebook and many others Install Python for .NET This requires Framework 4.0 This is where you can download the Python for .NET http://www.lfd.uci.edu/~gohlke/pythonlibs/#pythonnet Make sure to select the version that matches the version of Python you installed with Anaconda Installation of the Python .NET Open a DOS command prompt (cmd) Change the directory to where you dounloaded the *.whl file Enter the following command: pip install some-package.whl With Anaconda and Python for .NET installed, the installation is complete. The next step in writing a macro is to connect to the .NET librarries. - LTCOM64.dll installed under the /LightTools/Utilities.NET/ folder is what we need - Python NET provides the .NET access capabilities. The "import clr" statement below provides the System.Reflection capabilities in .NET - The LTCOM64 library contains the LTCOM64.LTAPIx and LTCOM64.JSNET2 (JumpStart library functions). The special nature of these functions is that they do not require any COM pointers - In the .NET interface, COM pointers are not allowed - COM aspects needed to interact with LightTools are automatically handled by the library End of explanation lt0.Message("Hello from jupyter Notebook - 2!") Explanation: Sending a message to LightTools The message will appear in the Console Window, and the Macro Output tab End of explanation #Set the focus to the 3D Window, pass a fixed command string to create a sphere with radius 5 lt0.Cmd('\V3D ctrsphere xyz 0,0,0 xyz 0,0,5') Explanation: Sending commands to LightTools The command below will: set the focus to the 3D Window, and add a sphere Get the name of the last created solid object Set the radius of the last sphere to 10 End of explanation cmdstr="ctrsphere " + lt0.Coord3(0,0,0) + lt0.Coord3(0,0,5) print(cmdstr) #so that we can see it lt0.Cmd(cmdstr) Explanation: Send a command with Coord3() function The coord3() function will create a string in the format "XYZ x,y,z" End of explanation #Set the radius to 10 key="Solid[@Last].Primitive[1]" lt0.DbSet(key,"Radius",10) r=lt0.DbGet(key,"Radius") print("Radius of the sphere is: " + str(r)) Explanation: Setting and getting data Following example shows how to use DbSet() and DbGet() functions to access data Set the radius of the sphere primitive to 10 Get the radius to test whether the "set" worked correctly End of explanation from IPython.display import Image Image(filename = PATH + 'BooleanAndMove.PNG',width=500,height=100) cmdstr="Cylinder " +lt0.Coord3(0,0,0) + " 3 15" #radius =3, length = 15 lt0.Cmd(cmdstr) #Get the names of the objects. 
We have 2 objects #Notice that we are using the "index" of each solid object names=[] for i in [1,2]: key="Solid[" + str(i) + "]" print("Current data key is: " + key) #so that we can see it names.append(lt0.DbGet(key, "Name")) print(names[i-1]) #Select two objects lt0.Cmd("Select " + lt0.Str(names[0]) + " More " + lt0.Str(names[1])) lt0.Cmd("Subtract") #Resulting object has te name of the first selected object for boolean lt0.Cmd("Select " + lt0.Str(names[0])) lt0.Cmd("Move " + lt0.Coord3(0,10,10)) Explanation: Select, Copy, Boolean, Move Make a cylinder Subtract the cylinder from sphere Move the resulting solid to XYZ 0,10,10 If you need more controls with images End of explanation #Get the spectral power distribution from a receiver (1D grids) key="receiver[1].spectral_distribution[1]" cellcount=int(lt0.DbGet(key,"Count")) print("Number of rows: " + str(cellcount)) w=np.zeros((cellcount)) p=np.zeros((cellcount)) for i in range(1,cellcount+1,1): w[i-1],stat=lt0.DbGet(key,"Wavelength_At",0,i,1) #data returned is a tuple! p[i-1],stat=lt0.DbGet(key,"Power_At",0,i,1) plt.plot(w,p,'-r') Explanation: Access data in grids (1D and 2D) Access to data in grids is a slightly different process There are two types of data grids 1D and 2D When accessing grid data, we need to use the two optional arguments in the DbGet() and DbSet() functions. Typically we omit these arguments for general data access Note that LTPython project, described later in this presentation provides more flexible data access methods. This is an illustration of barebone script code Here's an example of getting the spectral distribution from a receiver End of explanation #Get the mesh data one cell at a time (this is a 2D grid) # Note that a faster method for mesh data is described below key="receiver[1].Mesh[1]" xdim=int(lt0.DbGet(key,"X_Dimension")) #Columns ydim=int(lt0.DbGet(key,"Y_Dimension")) #Rows cv=np.zeros((ydim,xdim)) for i in range(1,xdim+1,1): for j in range(1,ydim+1): cv[j-1,i-1],stat=lt0.DbGet(key,"CellValue",0,i,j) #Get the mesh bounds MinX=lt0.DbGet(key,"Min_X_Bound") MaxX=lt0.DbGet(key,"Max_X_Bound") MinY=lt0.DbGet(key,"Min_Y_Bound") MaxY=lt0.DbGet(key,"Max_Y_Bound") #Create a data grid for plotting, and plot the data xvec=np.linspace(MinX,MaxX,xdim+1) yvec=np.linspace(MinY,MaxY,ydim+1) X,Y=np.meshgrid(xvec,yvec) plt.pcolormesh(X,Y,cv,cmap='jet') plt.xlabel("X") plt.ylabel("Y") plt.axis("equal") #See below for a simpler/faster method to access mesh data Explanation: Here's an example of getting mesh data from a receiver This example shows how to access individual cell values Typically, you can use the GetMeshData() function described later in this document to get the data for a given mesh in a single call Note that LTPython project, described later in this presentation provides more flexible data access methods. This is an illustration of barebone script code End of explanation def GetLTMeshParams(MeshKey,CellValueType): Get the data from a receiver mesh. 
Parameters ---------- MeshKey : String data access string for the receiver mesh CellValueType : data type to retrieve Returns ------- X_Dimension Number of bins in X dimension Y_Dimension Number of bins in Y dimension Min_X_Bound Minimum X bound for the mesh Max_X_Bound Maximum X bound for the mesh Min_Y_Bound Minimum Y bound for the mesh Max_Y_Bound Maximum Y bound for the mesh Mesh_Data_Array An array of data, based on the cell value type requested Examples -------- meshkey="receiver[1].Mesh[1]" xdim,ydim,minx,maxx,miny,maxy,md=GetLTMeshParams(meshkey,"CellValue") XDim=int(lt0.DbGet(MeshKey,"X_Dimension")) YDim=int(lt0.DbGet(MeshKey,"Y_Dimension")) MinX=lt0.DbGet(MeshKey,"Min_X_Bound") MaxX=lt0.DbGet(MeshKey,"Max_X_Bound") MinY=lt0.DbGet(MeshKey,"Min_Y_Bound") MaxY=lt0.DbGet(MeshKey,"Max_Y_Bound") # We need a double array to retrieve data dblArray=System.Array.CreateInstance(System.Double,XDim,YDim) [Stat,mData]=lt0.GetMeshData(MeshKey,dblArray,CellValueType) MeshData=np.ones((XDim,YDim)) print(XDim,YDim) for i in range(0,XDim): for j in range(0,YDim): MeshData[i,j]=mData[i,j] #print(mData[i,j]) MeshData=np.rot90(MeshData) #Notice how we return multiple data items return XDim,YDim,MinX,MaxX,MinY,MaxY,MeshData Explanation: Writing and calling functions This is a function to retrieve data from a receiver mesh Get the data from the entire mesh in one call, without having to iterate through each cell The function below also returns some other mesh parameters such as the dimensions and bounds Notice also that it includes help strings (known as Doc Strings) End of explanation import matplotlib meshkey="receiver[1].Mesh[1]" xdim,ydim,minx,maxx,miny,maxy,md=GetLTMeshParams(meshkey,"CellValue") cellx=np.linspace(minx,maxx,xdim+1) celly=np.linspace(miny,maxy,ydim+1) X,Y=np.meshgrid(cellx,celly) #Raster chart in LOG scale plt.pcolormesh(X,Y,np.flipud(md),cmap="jet",norm=matplotlib.colors.LogNorm()) plt.colorbar() plt.axis("equal") plt.xlabel("X") plt.ylabel("Y") Explanation: Here's how we call the above function with arguments Get the data Create a 2D grids for x and y, uniformly spaced, for plotting Use 'pcolormesh()' for plotting 'pcolormesh()' is faster than 'pcolor()' End of explanation from LTCOM64 import JSNET2 js=JSNET2() #If PID capabilities (for multiple LightTools sessions) needed, use the PID for the session you want #js.LTPID=12040 js.UpdateLTPointer Explanation: Accessing JumpStart Functions JumpStart library is a set of helper functions available for macro users These functions attempt to simplify the syntax/usage so that you can write macros faster! LTCOM64 includes all JumpStart functions This means you can access both LightTools API (that we looked at so far) and JumpStart functions using a single reference library The example below shows how to create a handle to JumpStart functions End of explanation js.MakeSphere(5,"mySphere") js.MoveVector("mySphere",0,10,10) # js.MoveVector("mys*",0,10,10) will move all objects whose name starts with 'mys' Explanation: After creating the handle, you can use all the available functions For details on these functions, please refer to Help>Document Library>API Reference Guide Most JumpStart functions support wild card (*) capability i.e. 
you can perform a given operation across multiple objects simultaneously Example below shows how to create a sphere and move it to a specific location, using JumpStart functions End of explanation #First, let's create a simple function to add a new optical property #This will create a new property, and return the name def AddNewProperty(propname): lt0.Cmd("\O" + lt0.Str("PROPERTY_MANAGER[1]")) lt0.Cmd("AddNew=") lt0.Cmd("\Q") lt0.DbSet("Property[@Last]", "Name", propname) return 0 op="myMirror" AddNewProperty(op) key="PROPERTY[" + op + "]" lt0.DbSet(key,"Simple Type","Mirror") Explanation: Creating a simple model for a parameter study Make a block, set positon/orientation Change one surface to a "Smooth/specular Mirror" Add a single NSRay Add a dummy plane to capture the reflected ray End of explanation mirrorname="myMirror" js.MakeTube(0.25,10,10,"R",mirrorname) key="SOLID[@Last].SURFACE[LeftSurface].ZONE[1]" lt0.DbSet(key,"PropertyName",op) #Set the orientation, Alpha=45 key="Solid[@Last]" lt0.DbSet(key,"Alpha",-45) Explanation: Add the mirror, set the optical property and orientation End of explanation #Add a NSRay lt0.Cmd("NSRayAim xyz 0,10,0 xyz 0,0,0") #Add a dummy plane lt0.Cmd("DummyPlane xyz 0,0,-20 xyz 0,0,-40") Explanation: Add the dummy and NSRay End of explanation key="Solid[1]" segkey="NS_RAY[@Last].NS_SEGMENT[segment_2]" numpts=11 datax=np.zeros((numpts,numpts)) datay=np.zeros((numpts,numpts)) alpha=np.linspace(-55,-35,11) beta=np.linspace(-20,20,numpts) for i in range(0,numpts,1): lt0.DbSet(key,"Alpha",float(alpha[i])) for j in range(0,11,1): lt0.DbSet(key,"Beta",float(beta[j])) datax[i,j]=lt0.DbGet(segkey,"Local_Surface_X") datay[i,j]=lt0.DbGet(segkey,"Local_Surface_Y") plt.scatter(datax,datay) plt.xlabel('X') plt.ylabel('Y') Explanation: Now we are ready to change mirror and get the ray data End of explanation from scipy.optimize import minimize import numpy as np import matplotlib.pyplot as plt import clr #Initiate the connection with LightTools clr.AddReference("C:\\Program Files\\Optical Research Associates\\LightTools 8.4.0\\Utilities.NET\\LTCOM64.dll") from LTCOM64 import LTAPIx lt0=LTAPIx() lt0.UpdateLTPointer Explanation: How to use optimization algorithms from 'scipy' Use of 'minimize' function There are three key parts to setup an optimization problem Initiate the minimize function Use initial variable data call the objective function Apply variable values generated by the minimize to LightTools model Evaluate the metrit function, return the merit function value Import minimize from scipy library We still need the libraries mentioned above in order to connect to LightTools, etc. 
Import the LTCOM64 library and create a connection to the running LightTools session End of explanation def EvalMF(): lt0.Cmd("\O" + lt0.Str("OPT_MERITFUNCTIONS[1]")) lt0.Cmd("EvaluateAll=") lt0.Cmd("\Q") return 0 Explanation: Our objective function, called by the minimize, should use the parameters sent from the minimize function Update variables Evaluate the merit function Return the merit function value First, a separate function to evaluate the merit function End of explanation def setVarVals(v): v=np.asarray(v) vlist=lt0.DbList('Lens_Manager[1]','Opt_DBVariable') vcount=lt0.ListSize(vlist) lt0.SetOption('DbUpdate',0) for i in range(1,vcount+1): vkey=lt0.ListAtPos(vlist,i) lt0.DbSet(vkey,'CurrentValue',float(v[i-1])) print('Variable Value: ' + str(v[i-1])) lt0.SetOption('DbUpdate',1) lt0.ListDelete(vlist) Explanation: Another function to apply variable values Note that we do not skip disabled variables! End of explanation def ApplyVarsReturnMF(vardata): myd=np.asarray(vardata) setVarVals(myd) EvalMF() mfv=lt0.DbGet('OPT_MERITFUNCTIONS[1]','CurrentValue') print("MF Value: " + str(mfv)) print('****') return mfv Explanation: Now we can create the objective function 'vardata' is what we get from minimize function for example, if we setup 3 variables, we will get 3 values End of explanation # Here's a sample list of optimization algorithms we can try # Some of these algorithms require 'jac', which is the Jacobian (gradiant), and it's not shown here # The Nelder-Mead is the best option to try first, given its simplicity optengines=['Nelder-Mead','BFGS','powell','Newton-CG','SLSQP','TNC'] vlist=lt0.DbList('Lens_Manager[1]','Opt_DBVariable') vcount=int(lt0.ListSize(vlist)) lt0.ListDelete(vlist) v0=np.zeros((vcount)) for i in range(1,vcount+1): v0[i-1]=lt0.DbGet('OPT_DBVARIABLE[' +str(i) +']','CurrentValue') # Note that 'maxiter' should be small (e.g. 5) for other algorithms, except 'Nelder-Mead' res=minimize(ApplyVarsReturnMF,v0,method=optengines[0],options={'disp': True,'maxiter':50}) Explanation: Finally, we call the minimize function with arguments We need to pass the initial variable values to the minimize For convenience, we can read the values from LightTools rather than hard coding Make sure to save the original values since we will modify them during optimization End of explanation res=minimize(ApplyVarsReturnMF,v0,method=optengines[2],options={'disp': True,'maxiter':5}) Explanation: Simple optimization example Open 'Simple2VarOpt.1.lts' X and Y coordinates of the NSRay are variables Merit function is defined for X=0, Y=0 (local intersection coordinates on dummy plane) When optimized, the ray should be placed at the origin of the dummy plane Run the above code blocks in the sequential order to see the optimization process Results will be printed below the last code block, where we invke the minimize function Repeat the optimization for the following models BezierSweptOpt.1.lts Collimate a fan of rays using a collimator built with Swept geometry The second profile of the Swept is 'Bezier', and we try to optimize Bezier parameters Simple2VarOpt_Lens.1.lts Focus a ray fan using a conic lens The curvature and the conic constant are the variables RayGrid_SplinePatch.1.lts Start with a flat mirror, created with a splinepatch lens surface Collimate the ray grid (i.e. 
perpendicular to the dummy plane) This is a 9-variable problem and Nelder-Mead will require many iterations Try 'powell' (or optengines[2]) <font color='red'>res=minimize(ApplyVarsReturnMF,v0, method=optengines[2] ,options={'disp': True, 'maxiter':5})</font> End of explanation #Import the module and update the LT pointer import LTData as ltd ltd.lt0=lt0 #update the pointer #Now you can get/set the data items like this R = ltd.GetLTDbItem('Solid[1].Primitive[1].radius') print('Radius is: ' + str(R)) ltd.SetLTDbItem('solid[1].primitive[1].radius',15) illum=ltd.GetLTGridItem('receiver[1].mesh[1].CellValue_UI',45,45) #Accessing a 2D grid print('Value is: ' + str(illum)) wave=ltd.GetLTGridItem('RECEIVER[1].SPECTRAL_DISTRIBUTION[1].Wavelength_At',5) #Accessing a 1D grid print('Wavelength is: ' + str(wave)) #Make sure there's a valid spectral region with at least 1 row for the following code! stat=ltd.SetLTGridItem('spectral_region[1].WavelengthAt',600,1) #Setting data in a 1D grid Explanation: Sample Library Project ("LTPython") This is a library of supporting functions that enable you to write macros more efficiently Shown below are few examples. Refer to the following section on using Spyder, in order to see how to utilize the function library in your scripts In order to run the following examples, you must have the two modules (LTData.py, LTUtilities.py) in your work directory. Work directory is shown in the notebook kernel, as shown below Note that the *.ipynb file is the jupyter notebook file we are using here Several data get/set examples Note that the full data access string, via Copy Data Access Name, can be passed to these functions End of explanation #First, import standard libraries we need for arrays/plotting import matplotlib.pyplot as plt # general plotting import numpy as np #additional support for arrays, etc. #Plot a mesh ltd.PlotRaster('receiver[1].mesh[1]','cellvalue',colormap='jet', xlabel='X-Value',ylabel='Y-Value',title='Mesh Data',plotsize=(5,5),plottype='2D') #Plot the spectral distribution numrows,spd=ltd.PlotSpectralDistribution('receiver[1].spectral_distribution[1]',returndata=True) plt.plot(spd[:,0],spd[:,1]) #Plot true color data. Note the index=2 for the CIE mesh r,g,b=ltd.PlotTrueColorRster('receiver[1].mesh[2]',plotsize=(5,5),returndata=True) Explanation: Several examples of getting and plotting receiver mesh and spectral data End of explanation #We need to save the screenshot as an image file in the work directory #LTUtilities module handles the work directory and file IO import LTUtilities as ltu ltu.lt0=lt0 ltd.ltu=ltu #check the workdir wd=ltu.checkpyWorkDir() print(ltu.workdirstr) # this is where image files are saved #Get a screenshot of the 3D View viewname='3d' im,imname=ltd.GetViewImage(viewname) plt.imshow(im) #Get a screenshot of an open chart view #Usually, V3D is the first view. 
The '3' below indicates the second chart view currently open viewname='3' im,imname=ltd.GetViewImage(viewname) plt.imshow(im) Explanation: Examples of capturing screenshots End of explanation #Let's get a screenshot of the full system viewname='1' im,imname=ltd.GetViewImage(viewname) plt.imshow(im) Explanation: Access to ray path data Consider the following system, where three sources are used to illuminate a dummy plane Assume we want to see ray paths going through the cylinder object End of explanation #Ray path data key='receiver[1]' #First, let's hide all ray paths lt0.Cmd('\O"RECEIVER[1].FORWARD_SIM_FUNCTION[1]" HideAll= \Q') #Now get the ray path data, and show only the matchine paths va,pa,ra,st=ltd.GetRayPathData(key,usevisibleonly=False) # Two subplots, different size from matplotlib import gridspec fig = plt.figure(figsize=(6, 6)) gs = gridspec.GridSpec(2,1, height_ratios=[1,3]) ax1 = plt.subplot(gs[0]) ax1.plot(pa,'o') ax1.set_xlabel('Path Index') ax1.set_ylabel('Power') ax1.grid(True) s2='cylin' #this is the string we're searching for for i in range(0,len(st)): #print(st[i]) s1=st[i].lower() if s2 in s1: #print(str(i) + ';' + st[i]) ltd.SetLTGridItem(key + '.forward_sim_function[1].RayPathVisibleAt','yes',(i+1)) #Finally, let's get another screenshot to show the results viewname='1' im,imname=ltd.GetViewImage(viewname) ax2 = plt.subplot(gs[1]) ax2.imshow(im) ax2.axis('off') plt.tight_layout() Explanation: Now we can get the ray path strings, and turn on only the paths that involve the cylinder object End of explanation #receiver ray data des=['raydatax','raydatay','raydataz'] reckey='receiver[1]' simtype='Forward_Sim_Function[1]' #Note here that we specify the following function to # use passfilters flag N,M,raydata=ltd.GetLTReceiverRays(reckey,des,usepassfilters=True) plt.plot(raydata[:,0],raydata[:,1],'o') plt.xlabel('Ray Data Local X') plt.ylabel('Ray Data Local Y') plt.axis('equal') Explanation: Get receiver ray data that match the selected ray paths End of explanation #Assume default data, x, y, z, l, m, n, p simdata='forward_sim_function[1]' reckey1='receiver[1]' #receiver on the lens surface reckey2='receiver[2]' #receiver on the dummy plane n,rayfname=ltd.MakeRayFileUsingRayOrdinal(reckey1,DataAccessKey_Ordinal=reckey2) Explanation: Receiver rays based on Ray Ordinal Number Every ray starts with an ordinal number, based on the ray sequence (1, 2, 3, etc.) 
During ray trace, ordinal number does not change Ordinal number can be used as a unique identifier when filtering ray data on receivers Consider the following ray paths through a lens One can isolate the ray paths using ray path analyzer or a macro approach discussed above However, in this particular case, we want to obtain the ray intersection points on the lens surface A receiver on the lens surface can give the ray intersection points for all rays, not just the ray path shown If the ray ordinal numbers on the receiver attached to the dummy plane are known, then we can match those ray ordinal numbers to the subset of rays on the receiver attached to the lens surface The simplest way to visualize ray intersection points as a point cloud is to generate a ray data source using the subset of rays, and import that ray source using the local coordinate system on the lens surface
End of explanation
#Extra ray data, OPL
reckey='receiver[1]'
#Notice that the second argument is an Enum (integer) for the filter type
N,exdata=ltd.GetLTReceiverRays_Extra(reckey,ltd.ExtraRayData.Optical_Path_Length.value)
plt.hist(exdata,bins=21,color='green')
plt.xlabel('OPL')
plt.ylabel('Frequency')
Explanation: Import the resulting ray source using the local coordinate system on the lens surface RaySource "C:/.../pyWorkDir/1mi8clam.txt" LXYZ 0,0,0 LXYZ 0,0,1 LXYZ 0,1,0 Note: rename the ray source with a meaningful name. The default name used is random After the ray source is loaded into the model, intersection points can be visualized as a point cloud in the 3D model Extra ray data for receiver filters This data is not directly available with LTAPI4.GetMeshData() The only way to access this data is to use the DbGet() function for each ray This means the process will be slower when there's a large number of rays on the receiver The following example shows how to access optical path length for each ray Optical Path Length filter is required on the receiver
End of explanation
import win32com.client
import numpy as np
import matplotlib.pyplot as plt
#DbGet() and Mesh data example
lt = win32com.client.Dispatch("LightTools.LTAPI4")
MeshKey="receiver[1].Mesh[1]"
XD=int(lt.DbGet(MeshKey,"X_Dimension"))
YD=int(lt.DbGet(MeshKey,"Y_Dimension"))
k=np.ones((XD,YD))
#The CellFilter may not work for all options in COM mode
[stat,myd,f]=lt.GetMeshData("receiver[1].Mesh[1]",list(k),"CellValue")
g=np.asarray(myd)
g=np.rot90(g)
x = np.linspace(-3, 3, XD)
y = np.linspace(-3, 3, YD)
X,Y = np.meshgrid(x, y)
plt.pcolor(X,Y,g)
plt.pcolormesh(X,Y,g,cmap="gray")
plt.xlabel("X")
plt.ylabel("Y")
#JumpStart library
js = win32com.client.Dispatch("LTCOM64.JSML")
js.MakeSphere(lt,5,"mySphere")
js.MoveVector(lt,"mySphere",0,10,10)
# js.MoveVector(lt,"mys*",0,10,10) will move all objects whose name starts with 'mys'
Explanation: Running Spyder Spyder provides a more versatile code environment with debug capabilities. For regular macro development work, this is the best environment Typical Spyder environment will appear like this How to load the test project into Spyder Unzip the supplied LTPython.zip to your current working directory This is usually C:/Users/YourUserName/ Run Spyder Go to Project>Open Project. Project files will appear like this Test code for most of the available functions is in "TestLTDataFunctions.py" Most of the code is commented out.
Make sure to uncomment the portions you like to try Watch the attached video clip to see few examples These are the different modules LTData This includes a set of functions to get/set database items, grid items, receiver data, ray path data, etc. LTUtilities This module contains some general purpose utilities, used by LTData and other modules LTProperties This is a special module to illustrate how to use JumpStart Optical Property functions Notice that this module still uses COM. We will fix this issue. For now, this is the only way to access these JumpStart functions (fairly new to the JS library) This module only contains "test code" that illustrates how to use the base functions in JS library LTOpt Few optimization examples. Use the attached test models for these examples Ignore other modules How to use win32COM client to connect to LightTools Note that this is not a recommended method due to possible compatibility issues in the future! End of explanation
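A supplementary sketch (not part of the original notebook): the connection boilerplate shown repeatedly above (clr.AddReference, creating the LTAPIx and JSNET2 handles, optionally setting LTPID, then UpdateLTPointer) can be collected into one small helper. The helper name connect_lighttools is our own, and the DLL path assumes LightTools 8.4.0 installed in the default location used throughout this section; only calls already shown above are used.
import clr
# Assumed default install path for LightTools 8.4.0, as used earlier in this section
LTCOM64_DLL = ("C:\\Program Files\\Optical Research Associates\\"
               "LightTools 8.4.0\\Utilities.NET\\LTCOM64.dll")
def connect_lighttools(pid=None, dll_path=LTCOM64_DLL):
    # Register the .NET assembly, then create the LightTools API and JumpStart handles
    clr.AddReference(dll_path)
    from LTCOM64 import LTAPIx, JSNET2
    lt0 = LTAPIx()
    js = JSNET2()
    if pid is not None:
        # Attach to a specific LightTools session (see the LTPID comments above)
        lt0.LTPID = pid
        js.LTPID = pid
    # Without a PID this connects to the first running session, as above
    lt0.UpdateLTPointer
    js.UpdateLTPointer
    return lt0, js
# Usage sketch:
# lt0, js = connect_lighttools()            # first running session
# lt0, js = connect_lighttools(pid=12040)   # a specific session by process ID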
6,477
Given the following text description, write Python code to implement the functionality described below step by step Description: Add In this tutorial, we will construct a n-bit adder from n full adders. Magma has built in support for addition using the + operator, so please don't think Magma is so low-level that you need to create logical and arithmetic functions in order to use it! We use this example to show how circuits are composed to form new circuits. Since we are using the ICE40, we need to set the target of Mantle to "ice40". Step1: Mantle FullAdder In the last example, we defined a Python function that created a full adder. In this example, we are going to use the built-in FullAdder from Mantle. Mantle is our standard library of useful circuits. Step2: We can print out the interface of the FullAdder. Step3: This tells us that the full adder has three inputs I0, I1, and CIN. Note that the type of these arguments are In(Bit). There are also two outputs O and COUT, both with type Out(Bit). In Magma arguments in the circuit interface are normally qualified to be inputs or outputs. Step4: Before testing the full adder on the IceStick board, let's test it using the Python simulator. Step5: class Add2 - Defining a Circuit Now let's build a 2-bit adder using FullAdder. We'll use a simple ripple carry adder design by connecting the carry out of one full adder to the carry in of the next full adder. The resulting adder will accept as input a carry in, and generate a final carry out. Here's a logisim diagram of the circuit we will construct Step6: Although we are making an 2-bit adder, we do this using a for loop that can be generalized to construct an n-bit adder. Each time through the for loop we create an instance of a full adder by calling FullAdder(). Recall that circuits are python classes, so that calling a class returns an instance of that class. Note how we wire up the full adders. Calling an circuit instance has the effect of wiring up the arguments to the inputs of the circuit. That is, O, COUT = fulladder(I0, I1, CIN) is equivalent to m.wire(IO, fulladder.I0) m.wire(I1, fulladder.I1) m.wire(CIN, fulladder.CIN) O = fulladder.O COUT = fulladder.COUT The outputs of the circuit are returned. Inside this loop we append single bit outputs from the full adders to the Python list O. We also set the CIN of the next full adder to the COUT of the previous instance. Finally, we then convert the list O to a Uint(n). In addition to Bits(n), Magma also has built in types UInt(n) and SInt(n) to represent unsigned and signed ints. Magma also has type conversion functions bits, uint, and sint to convert between different types. In this example, m.uint(C) converts the list of bits to a UInt(len(C)). DefineAdd Generator One question you may be asking yourself, is how can this code be generalized to produce an n-bit adder. We do this by creating an add generator. A generator is a Python function that takes parameters and returns a circuit class. Calling the generator with different parameter values will create different circuits. The power of Magma results from being to use all the features of Python to create powerful hardware generators. Here is the code Step7: First, notice that a circuit generator by convention begins with the prefix Define. In this example, DefineAdd has a parameter n which is the width of the adder. A circuit generator returns a subclass of Circuit. A standard way to write this is to construct a new Circuit class within the body of the generator. 
The code within the body of the generator can refer to the arguments to the generator. Like Verilog modules, Magma circuits must have unique names. Because Python does not provide the facilities to dynamically generate the class name, dynamically constructed Magma circuits are named using the name class variable. Python generators need to create unique names for each generated circuit because Magma will cache circuit definitions based on the name. Note how the name of the circuit is set using the format string f'Add{n}'. For example, if n is 2, the name of the circuit will be Add2. Magma allows you to use Python string manipulation functions to create mnemonic names. As we will see, the resulting Verilog module will have the same name. This is very useful for debugging. We can also create the parameterized types within the generator. In this example, we use the type UInt(n) which depends on n. The loop within definition can also refer to the parameter n. Finally, notice we defined three interrelated functions Step8: We define a main function that instances our 2-bit adder and wires it up to J1 and J3. Notice the use of Python's slicing syntax using our width variable N. Step9: As before, we compile. Step10: And use our yosys, arachne-pnr, and icestorm tool flow. Step11: You can test the program by connecting up some switches and LEDs to the headers. You should see the sum of the inputs displayed on the LEDs. First, we need to find out what pins J1 and J3 are wired up to. Step12: In this example, we have J1 wired up to the four switch/LED circuits on the left, and J3 wired up to the three LED (no switch) circuits on the right Again, it can be useful to examine the compiled Verilog. Notice that it includes a Verilog definition of the mantle FullAdder implemented using the SB_LUT4 and SB_CARRY primitives. The Add2 module instances two FullAdders and wires them up. Step13: You can also display the circuit using graphviz.
Python Code: import magma as m m.set_mantle_target("ice40") Explanation: Add In this tutorial, we will construct a n-bit adder from n full adders. Magma has built in support for addition using the + operator, so please don't think Magma is so low-level that you need to create logical and arithmetic functions in order to use it! We use this example to show how circuits are composed to form new circuits. Since we are using the ICE40, we need to set the target of Mantle to "ice40". End of explanation from mantle import FullAdder Explanation: Mantle FullAdder In the last example, we defined a Python function that created a full adder. In this example, we are going to use the built-in FullAdder from Mantle. Mantle is our standard library of useful circuits. End of explanation print(FullAdder) Explanation: We can print out the interface of the FullAdder. End of explanation fulladder = FullAdder() print(fulladder.I0, type(fulladder.I0)) print(fulladder.I1, type(fulladder.I1)) print(fulladder.CIN, type(fulladder.CIN)) print(fulladder.O, type(fulladder.O)) print(fulladder.COUT, type(fulladder.O)) Explanation: This tells us that the full adder has three inputs I0, I1, and CIN. Note that the type of these arguments are In(Bit). There are also two outputs O and COUT, both with type Out(Bit). In Magma arguments in the circuit interface are normally qualified to be inputs or outputs. End of explanation from magma.simulator import PythonSimulator fulladder = PythonSimulator(FullAdder) assert fulladder(1, 0, 0) == (1, 0), "Failed" assert fulladder(0, 1, 0) == (1, 0), "Failed" assert fulladder(1, 1, 0) == (0, 1), "Failed" assert fulladder(1, 0, 1) == (0, 1), "Failed" assert fulladder(1, 1, 1) == (1, 1), "Failed" print("Success!") Explanation: Before testing the full adder on the IceStick board, let's test it using the Python simulator. End of explanation class Add2(m.Circuit): IO = ['I0', m.In(m.UInt[2]), 'I1', m.In(m.UInt[2]), 'CIN', m.In(m.Bit), 'O', m.Out(m.UInt[2]), 'COUT', m.Out(m.Bit) ] @classmethod def definition(io): n = len(io.I0) O = [] COUT = io.CIN for i in range(n): fulladder = FullAdder() Oi, COUT = fulladder(io.I0[i], io.I1[i], COUT) O.append(Oi) io.O <= m.uint(O) io.COUT <= COUT Explanation: class Add2 - Defining a Circuit Now let's build a 2-bit adder using FullAdder. We'll use a simple ripple carry adder design by connecting the carry out of one full adder to the carry in of the next full adder. The resulting adder will accept as input a carry in, and generate a final carry out. Here's a logisim diagram of the circuit we will construct: Here is a Python class that implements a 2-bit adder. End of explanation def DefineAdd(n): class _Add(m.Circuit): name = f'Add{n}' IO = ['I0', m.In(m.UInt[n]), 'I1', m.In(m.UInt[n]), 'CIN', m.In(m.Bit), 'O', m.Out(m.UInt[n]), 'COUT', m.Out(m.Bit) ] @classmethod def definition(io): O = [] COUT = io.CIN for i in range(n): fulladder = FullAdder() Oi, COUT = fulladder(io.I0[i], io.I1[i], COUT) O.append(Oi) io.O <= m.uint(O) io.COUT <= COUT return _Add def Add(n): return DefineAdd(n)() def add(i0, i1, cin): assert len(i0) == len(i1) return Add(len(i0))(i0, i1, cin) Explanation: Although we are making an 2-bit adder, we do this using a for loop that can be generalized to construct an n-bit adder. Each time through the for loop we create an instance of a full adder by calling FullAdder(). Recall that circuits are python classes, so that calling a class returns an instance of that class. Note how we wire up the full adders. 
Calling a circuit instance has the effect of wiring up the arguments to the inputs of the circuit. That is, O, COUT = fulladder(I0, I1, CIN) is equivalent to m.wire(I0, fulladder.I0) m.wire(I1, fulladder.I1) m.wire(CIN, fulladder.CIN) O = fulladder.O COUT = fulladder.COUT The outputs of the circuit are returned. Inside this loop we append single bit outputs from the full adders to the Python list O. We also set the CIN of the next full adder to the COUT of the previous instance. Finally, we convert the list O to a UInt(n). In addition to Bits(n), Magma also has built in types UInt(n) and SInt(n) to represent unsigned and signed ints. Magma also has type conversion functions bits, uint, and sint to convert between different types. In this example, m.uint(C) converts the list of bits to a UInt(len(C)). DefineAdd Generator One question you may be asking yourself is how this code can be generalized to produce an n-bit adder. We do this by creating an add generator. A generator is a Python function that takes parameters and returns a circuit class. Calling the generator with different parameter values will create different circuits. The power of Magma results from being able to use all the features of Python to create powerful hardware generators. Here is the code: End of explanation
N = 2
from loam.boards.icestick import IceStick
icestick = IceStick()
for i in range(N):
    icestick.J1[i].input().on()
    icestick.J1[i+N].input().on()
for i in range(N+1):
    icestick.J3[i].output().on()
Explanation: First, notice that a circuit generator by convention begins with the prefix Define. In this example, DefineAdd has a parameter n which is the width of the adder. A circuit generator returns a subclass of Circuit. A standard way to write this is to construct a new Circuit class within the body of the generator. The code within the body of the generator can refer to the arguments to the generator. Like Verilog modules, Magma circuits must have unique names. Because Python does not provide the facilities to dynamically generate the class name, dynamically constructed Magma circuits are named using the name class variable. Python generators need to create unique names for each generated circuit because Magma will cache circuit definitions based on the name. Note how the name of the circuit is set using the format string f'Add{n}'. For example, if n is 2, the name of the circuit will be Add2. Magma allows you to use Python string manipulation functions to create mnemonic names. As we will see, the resulting Verilog module will have the same name. This is very useful for debugging. We can also create the parameterized types within the generator. In this example, we use the type UInt(n) which depends on n. The loop within definition can also refer to the parameter n. Finally, notice we defined three interrelated functions: DefineAdd(n), Add(n), and add(i0, i1, cin). Why are there three functions? Because there are three stages in using Magma to create hardware. The first stage is to generate or define circuits. The second stage is to create instances of these circuits. And the third stage is to wire up the circuits. Functions named DefineX are generators. Generators are functions that return Circuits. Functions named X return circuit instances. This is done by calling DefineX and then instancing the circuit. This may seem very inefficient. Fortunately, circuit classes are cached and only defined once. Finally, functions named lowercase x do one more thing. They wire the arguments of x to the circuit.
They can also construct the appropriate circuit class depending on the types of the arguments. In this example, add constructs an n-bit adder, where n is the width of the inputs. We strongly recommend that you follow this naming convention. Running on the IceStick In order to test the adder, we setup the IceStick board to have two 2-bit inputs and one 3-bit output. As before, J1 will be used for inputs and J3 for outputs. End of explanation main = icestick.DefineMain() O, COUT = add( main.J1[0:N], main.J1[N:2*N], 0 ) main.J3[0:N] <= O main.J3[N] <= COUT m.EndDefine() Explanation: We define a main function that instances our 2-bit adder and wires it up to J1 and J3. Notice the use of Python's slicing syntax using our width variable N. End of explanation m.compile('build/add', main) Explanation: As before, we compile. End of explanation %%bash cd build yosys -q -p 'synth_ice40 -top main -blif add.blif' add.v arachne-pnr -q -d 1k -o add.txt -p add.pcf add.blif icepack add.txt add.bin #iceprog add.bin Explanation: And use our yosys, arcachne-pnr, and icestorm tool flow. End of explanation %cat build/add.pcf Explanation: You can test the program by connecting up some switches and LEDs to the headers. You should see the sum of the inputs displayed on the LEDs. First, we need to find out what pins J1 and J3 are wired up to. (Note: you can use % to execute shell commands inline in Jupyter notebooks) End of explanation %cat build/add.v Explanation: In this example, we have J1 wire up to the four switch/LED circuits on the left, and J3 wired up to the three LED (no switch) circuits on the right Again, it can be useful to examine the compiled Verilog. Notice that it includes a Verilog definition of the mantle FullAdder implemented using the SB_LUT4 and SB_CARRY primtives. The Add2 module instances two FullAdders and wires them up. End of explanation #DefineAdd(4) Explanation: You can also display the circuit using graphviz. End of explanation
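A supplementary note (not part of the original tutorial): the Python simulator above checks FullAdder but not the generated n-bit adder. A pure-Python reference model of the ripple-carry arithmetic, with no Magma or Mantle calls involved and a function name (ripple_add) of our own choosing, gives the expected O/COUT values that the hardware or its simulation can be compared against.
# Pure-Python reference model of the n-bit ripple-carry adder described above.
# This is only a software cross-check of expected outputs; it is not Magma code.
def ripple_add(i0, i1, cin, n):
    # Mirror DefineAdd(n): chain n full adders, carry rippling from bit 0 upward
    o = 0
    carry = cin
    for bit in range(n):
        a = (i0 >> bit) & 1
        b = (i1 >> bit) & 1
        s = a ^ b ^ carry                              # FullAdder sum bit
        carry = (a & b) | (a & carry) | (b & carry)    # FullAdder carry-out
        o |= s << bit
    return o, carry
# Exhaustive check for the 2-bit case used on the IceStick:
n = 2
for i0 in range(2**n):
    for i1 in range(2**n):
        for cin in (0, 1):
            o, cout = ripple_add(i0, i1, cin, n)
            assert o + (cout << n) == i0 + i1 + cin
print("2-bit ripple-carry reference model checks out")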
6,478
Given the following text description, write Python code to implement the functionality described below step by step Description: An attempt to explore the best ways to improve the performance of the Pima model Step1: This function gives us a complete description of the data and reveals whether there are any missing values Step2: Seaborn is a nice plotting library that is easy to write with, yet very useful for the information we can read from the histograms. Its benefits can be 1- summarizing the distribution of the data in plots 2- understanding or inspecting the unique values 3- plots carry a deeper meaning than words Step3: Trying feature standardization and scaling to improve the performance of the model Step4: Trying to improve the performance of the model using the standard scaler method Step5: Trying to improve the performance of the model using the min-max scaler method
Python Code: import numpy as np import pandas as pd import seaborn as sb from sklearn.metrics import classification_report from sklearn.metrics import confusion_matrix from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler from sklearn import preprocessing %matplotlib inline df = pd.read_csv('diabetes.csv') df.head(20) #لاستعراض ال20 السجلات الاولى من إطار البيانات Explanation: محاولة لإستكشاف افضل الطرق لتحسين اداء نموذج بيما End of explanation df.info() Explanation: هذه الدالة تعطينا توصيف كامل للبيانات و تكشف لنا في ما إذا كانت هناك قيم مفقودة End of explanation sb.countplot(x='Outcome',data=df, palette='hls') sb.countplot(x='Pregnancies',data=df, palette='hls') sb.countplot(x='Glucose',data=df, palette='hls') sb.heatmap(df.corr()) sb.pairplot(df, hue="Outcome") from scipy.stats import kendalltau sb.jointplot(df['Pregnancies'], df['Glucose'], kind="hex", stat_func=kendalltau, color="#4CB391") import matplotlib.pyplot as plt g = sb.FacetGrid(df, row="Pregnancies", col="Outcome", margin_titles=True) bins = np.linspace(0, 50, 13) g.map(plt.hist, "BMI", color="steelblue", bins=bins, lw=0) sb.pairplot(df, vars=["Pregnancies", "BMI"]) Explanation: سيبورن مكتبة جميلة للرسوميات سهلة في الكتابة لكن مفيدة جداً في المعلومات التي ممكن ان نقراءها عبر الهيستوقرام فائدها ممكن ان تكون في 1- تلخيص توزيع البينات في رسوميات 2- فهم او الإطلاع على القيم الفريدة 3- تحمل الرسوميات معنى اعمق من الكلمات End of explanation columns = ['Pregnancies', 'Glucose', 'BloodPressure', 'SkinThickness', 'Insulin', 'BMI', 'DiabetesPedigreeFunction', 'Age'] labels = df['Outcome'].values features = df[list(columns)].values X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.30) clf = RandomForestClassifier(n_estimators=1) clf = clf.fit(X_train, y_train) accuracy = clf.score(X_train, y_train) print ' اداء النموذج في عينة التدريب بدقة ', accuracy*100 accuracy = clf.score(X_test, y_test) print ' اداء النموذج في عينة الفحص بدقة ', accuracy*100 ypredict = clf.predict(X_train) print '\n Training classification report\n', classification_report(y_train, ypredict) print "\n Confusion matrix of training \n", confusion_matrix(y_train, ypredict) ypredict = clf.predict(X_test) print '\n Testing classification report\n', classification_report(y_test, ypredict) print "\n Confusion matrix of Testing \n", confusion_matrix(y_test, ypredict) Explanation: تجربة استخدام تقييس و تدريج الخواص لتحسين اداء النموذج End of explanation #scaling scaler = StandardScaler() # Fit only on training data scaler.fit(X_train) X_train = scaler.transform(X_train) # apply same transformation to test data X_test = scaler.transform(X_test) clf = RandomForestClassifier(n_estimators=1) clf = clf.fit(X_train, y_train) accuracy = clf.score(X_train, y_train) print ' اداء النموذج في عينة التدريب بدقة ', accuracy*100 accuracy = clf.score(X_test, y_test) print ' اداء النموذج في عينة الفحص بدقة ', accuracy*100 ypredict = clf.predict(X_train) print '\n Training classification report\n', classification_report(y_train, ypredict) print "\n Confusion matrix of training \n", confusion_matrix(y_train, ypredict) ypredict = clf.predict(X_test) print '\n Testing classification report\n', classification_report(y_test, ypredict) print "\n Confusion matrix of Testing \n", confusion_matrix(y_test, ypredict) Explanation: تجربة تحسين اداء النموذج باستخدام طريقة standard scaler End of explanation columns = ['Pregnancies', 'Glucose', 'BloodPressure', 'SkinThickness', 
'Insulin', 'BMI', 'DiabetesPedigreeFunction', 'Age'] labels = df['Outcome'].values features = df[list(columns)].values X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.30) scaler = preprocessing.MinMaxScaler() scaler.fit(X_train) X_train = scaler.transform(X_train) # apply same transformation to test data X_test = scaler.transform(X_test) clf = RandomForestClassifier(n_estimators=1) clf = clf.fit(X_train, y_train) accuracy = clf.score(X_train, y_train) print ' اداء النموذج في عينة التدريب بدقة ', accuracy*100 accuracy = clf.score(X_test, y_test) print ' اداء النموذج في عينة الفحص بدقة ', accuracy*100 ypredict = clf.predict(X_train) print '\n Training classification report\n', classification_report(y_train, ypredict) print "\n Confusion matrix of training \n", confusion_matrix(y_train, ypredict) ypredict = clf.predict(X_test) print '\n Testing classification report\n', classification_report(y_test, ypredict) print "\n Confusion matrix of Testing \n", confusion_matrix(y_test, ypredict) columns = ['Pregnancies', 'Glucose', 'BloodPressure', 'SkinThickness', 'Insulin', 'BMI', 'DiabetesPedigreeFunction', 'Age'] labels = df['Outcome'].values features = df[list(columns)].values X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.30) clf = RandomForestClassifier(n_estimators=5) clf = clf.fit(X_train, y_train) accuracy = clf.score(X_train, y_train) print ' اداء النموذج في عينة التدريب بدقة ', accuracy*100 accuracy = clf.score(X_test, y_test) print ' اداء النموذج في عينة الفحص بدقة ', accuracy*100 ypredict = clf.predict(X_train) print '\n Training classification report\n', classification_report(y_train, ypredict) print "\n Confusion matrix of training \n", confusion_matrix(y_train, ypredict) ypredict = clf.predict(X_test) print '\n Testing classification report\n', classification_report(y_test, ypredict) print "\n Confusion matrix of Testing \n", confusion_matrix(y_test, ypredict) Explanation: تجربة تحسين اداء النموذج بطريقة min-max scaler End of explanation
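A supplementary sketch (not from the original notebook): the three training blocks above repeat the same split/scale/fit/report steps with different scalers, so a small helper makes the comparison less error-prone. It uses only the scikit-learn pieces already imported in this section; the helper name evaluate_scaler is our own, and the print() call assumes Python 3 whereas the original cells use Python 2 print statements.
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.ensemble import RandomForestClassifier
def evaluate_scaler(features, labels, scaler=None, n_estimators=1, test_size=0.30):
    # Split, optionally scale (fitting the scaler on training data only), then fit and score
    X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=test_size)
    if scaler is not None:
        scaler.fit(X_train)
        X_train = scaler.transform(X_train)
        X_test = scaler.transform(X_test)
    clf = RandomForestClassifier(n_estimators=n_estimators).fit(X_train, y_train)
    return clf.score(X_train, y_train), clf.score(X_test, y_test)
# Usage sketch, with features/labels built exactly as in the cells above:
# for name, scaler in [('none', None), ('standard', StandardScaler()), ('min-max', MinMaxScaler())]:
#     train_acc, test_acc = evaluate_scaler(features, labels, scaler)
#     print(name, train_acc, test_acc)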
6,479
Given the following text description, write Python code to implement the functionality described below step by step Description: Table of Contents 1 Load Network 2 Explore Directions 2.1 Interactive 3 Ganspace Step1: Load Network Step2: Explore Directions Step3: Interactive
Python Code: is_stylegan_v1 = False from pathlib import Path import matplotlib.pyplot as plt import numpy as np import sys import os from datetime import datetime from tqdm import tqdm # ffmpeg installation location, for creating videos plt.rcParams['animation.ffmpeg_path'] = str('/usr/bin/ffmpeg') import ipywidgets as widgets from ipywidgets import interact, interact_manual from IPython.display import display from ipywidgets import Button os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' %load_ext autoreload %autoreload 2 # StyleGAN2 Repo sys.path.append('/tf/notebooks/stylegan2') # StyleGAN Utils from stylegan_utils import load_network, gen_image_fun, synth_image_fun, create_video # v1 override if is_stylegan_v1: from stylegan_utils import load_network_v1 as load_network from stylegan_utils import gen_image_fun_v1 as gen_image_fun from stylegan_utils import synth_image_fun_v1 as synth_image_fun import run_projector import projector import training.dataset import training.misc # Data Science Utils sys.path.append(os.path.join(*[os.pardir]*3, 'data-science-learning')) from ds_utils import generative_utils res_dir = Path.home() / 'Documents/generated_data/stylegan' Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1> <div class="toc"><ul class="toc-item"><li><span><a href="#Load-Network" data-toc-modified-id="Load-Network-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Load Network</a></span></li><li><span><a href="#Explore-Directions" data-toc-modified-id="Explore-Directions-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Explore Directions</a></span><ul class="toc-item"><li><span><a href="#Interactive" data-toc-modified-id="Interactive-2.1"><span class="toc-item-num">2.1&nbsp;&nbsp;</span>Interactive</a></span></li></ul></li><li><span><a href="#Ganspace" data-toc-modified-id="Ganspace-3"><span class="toc-item-num">3&nbsp;&nbsp;</span>Ganspace</a></span></li></ul></div> End of explanation MODELS_DIR = Path.home() / 'Documents/models/stylegan2' MODEL_NAME = 'original_ffhq' SNAPSHOT_NAME = 'stylegan2-ffhq-config-f' Gs, Gs_kwargs, noise_vars = load_network(str(MODELS_DIR / MODEL_NAME / SNAPSHOT_NAME) + '.pkl') Z_SIZE = Gs.input_shape[1] IMG_SIZE = Gs.output_shape[2:] IMG_SIZE img = gen_image_fun(Gs, np.random.randn(1, Z_SIZE), Gs_kwargs, noise_vars) plt.imshow(img) Explanation: Load Network End of explanation def plot_direction_grid(dlatent, direction, coeffs): fig, ax = plt.subplots(1, len(coeffs), figsize=(15, 10), dpi=100) for i, coeff in enumerate(coeffs): new_latent = (dlatent.copy() + coeff*direction) ax[i].imshow(synth_image_fun(Gs, new_latent, Gs_kwargs, randomize_noise=False)) ax[i].set_title(f'Coeff: {coeff:0.1f}') ax[i].axis('off') plt.show() # load learned direction direction = np.load('/tf/media/datasets/stylegan/learned_directions.npy') nb_latents = 5 # generate dlatents from mapping network dlatents = Gs.components.mapping.run(np.random.randn(nb_latents, Z_SIZE), None, truncation_psi=1.) 
for i in range(nb_latents): plot_direction_grid(dlatents[i:i+1], direction, np.linspace(-2, 2, 5)) Explanation: Explore Directions End of explanation # Setup plot image dpi = 100 fig, ax = plt.subplots(dpi=dpi, figsize=(7, 7)) fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0, wspace=0) plt.axis('off') im = ax.imshow(gen_image_fun(Gs, np.random.randn(1, Z_SIZE),Gs_kwargs, noise_vars, truncation_psi=1)) #prevent any output for this cell plt.close() # fetch attributes names directions_dir = MODELS_DIR / MODEL_NAME / 'directions' / 'set01' attributes = [e.stem for e in directions_dir.glob('*.npy')] # get names or projected images data_dir = res_dir / 'projection' / MODEL_NAME / SNAPSHOT_NAME / '' entries = [p.name for p in data_dir.glob("*") if p.is_dir()] entries.remove('tfrecords') # set target latent to play with #dlatents = Gs.components.mapping.run(np.random.randn(1, Z_SIZE), None, truncation_psi=0.5) #target_latent = dlatents[0:1] #target_latent = np.array([np.load("/out_4/image_latents2000.npy")]) %matplotlib inline @interact def i_direction(attribute=attributes, entry=entries, coeff=(-10., 10.)): direction = np.load(directions_dir / f'{attribute}.npy') target_latent = np.array([np.load(data_dir / entry / "image_latents1000.npy")]) new_latent_vector = target_latent.copy() + coeff*direction im.set_data(synth_image_fun(Gs, new_latent_vector, Gs_kwargs, True)) ax.set_title('Coeff: %0.1f' % coeff) display(fig) dest_dir = Path("C:/tmp/tmp_mona") timestamp = datetime.now().strftime("%Y%m%d_%H%M%S") fig.savefig(dest_dir / (timestamp + '.png'), bbox_inches='tight') Explanation: Interactive End of explanation
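A supplementary sketch (not in the original notebook): the interactive widget above explores one coefficient at a time; the loop below sweeps a latent direction non-interactively and writes one image per coefficient. It assumes the Gs, Gs_kwargs, synth_image_fun, direction and target_latent objects defined in the cells above; the sweep_direction name, the output folder and the file naming are our own, and plt.imsave is assumed to accept the image array returned by synth_image_fun.
import numpy as np
import matplotlib.pyplot as plt
from pathlib import Path
def sweep_direction(target_latent, direction, coeffs, out_dir="direction_sweep"):
    # Save one frame per coefficient along the given latent direction
    out_dir = Path(out_dir)
    out_dir.mkdir(exist_ok=True)
    for i, coeff in enumerate(coeffs):
        latent = target_latent.copy() + coeff * direction
        img = synth_image_fun(Gs, latent, Gs_kwargs, randomize_noise=False)
        plt.imsave(out_dir / f"frame_{i:03d}_coeff_{coeff:+.2f}.png", img)
# e.g. sweep_direction(target_latent, direction, np.linspace(-3, 3, 31))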
6,480
Given the following text description, write Python code to implement the functionality described below step by step Description: Quickstart In this simple example we will generate some simulated data, and fit them with 3ML. Let's start by generating our dataset Step1: We can now fit it easily with 3ML Step2: Plot data and model Step3: Compute the goodness of fit using Monte Carlo simulations Step4: The procedure outlined above works for any distribution for the data (Gaussian or Poisson). In this case we are using Gaussian data, thus the log(likelihood) is just half of a $\chi^2$. We can then also use the $\chi^2$ test, which gives a close result without performing simulations
Python Code: from threeML import * # Let's generate some data with y = Powerlaw(x) gen_function = Powerlaw() # Generate a dataset using the power law, and a # constant 30% error x = np.logspace(0, 2, 50) xyl_generator = XYLike.from_function("sim_data", function = gen_function, x = x, yerr = 0.3 * gen_function(x)) y = xyl_generator.y y_err = xyl_generator.yerr Explanation: Quickstart In this simple example we will generate some simulated data, and fit them with 3ML. Let's start by generating our dataset: End of explanation fit_function = Powerlaw() xyl = XYLike("data", x, y, y_err) parameters, like_values = xyl.fit(fit_function) Explanation: We can now fit it easily with 3ML: End of explanation fig = xyl.plot(x_scale='log', y_scale='log') Explanation: Plot data and model: End of explanation gof, all_results, all_like_values = xyl.goodness_of_fit() print("The null-hypothesis probability from simulations is %.2f" % gof['data']) Explanation: Compute the goodness of fit using Monte Carlo simulations (NOTE: if you repeat this exercise from the beginning many time, you should find that the quantity "gof" is a random number distributed uniformly between 0 and 1. That is the expected result if the model is a good representation of the data) End of explanation import scipy.stats # Compute the number of degrees of freedom n_dof = len(xyl.x) - len(fit_function.free_parameters) # Get the observed value for chi2 # (the factor of 2 comes from the fact that the Gaussian log-likelihood is half of a chi2) obs_chi2 = 2 * like_values['-log(likelihood)']['data'] theoretical_gof = scipy.stats.chi2(n_dof).sf(obs_chi2) print("The null-hypothesis probability from theory is %.2f" % theoretical_gof) Explanation: The procedure outlined above works for any distribution for the data (Gaussian or Poisson). In this case we are using Gaussian data, thus the log(likelihood) is just half of a $\chi^2$. We can then also use the $\chi^2$ test, which give a close result without performing simulations: End of explanation
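A supplementary cross-check (not part of the 3ML quickstart): the same goodness-of-fit argument can be reproduced with plain numpy/scipy by fitting a generic power law to the x, y, y_err arrays generated above with weighted least squares and applying the chi-square survival function. The powerlaw and chi2_gof names and the starting guess are our own; this generic model is not the astromodels Powerlaw class used above.
import numpy as np
from scipy.optimize import curve_fit
from scipy import stats
def powerlaw(x, K, index):
    # Generic power-law model y = K * x**index
    return K * x**index
def chi2_gof(x, y, y_err):
    # Weighted least-squares fit, observed chi2, and the chi2 tail probability
    popt, _ = curve_fit(powerlaw, x, y, sigma=y_err, absolute_sigma=True, p0=(1.0, -2.0))
    chi2_obs = np.sum(((y - powerlaw(x, *popt)) / y_err) ** 2)
    dof = len(x) - len(popt)
    return popt, chi2_obs, stats.chi2(dof).sf(chi2_obs)
# With the x, y, y_err arrays generated above:
# popt, chi2_obs, p_null = chi2_gof(x, y, y_err)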
6,481
Given the following text description, write Python code to implement the functionality described below step by step Description: Executed Step1: Load software and filenames definitions Step2: Data folder Step3: List of data files Step4: Data load Initial loading of the data Step5: Laser alternation selection At this point we have only the timestamps and the detector numbers Step6: We need to define some parameters Step7: We should check if everithing is OK with an alternation histogram Step8: If the plot looks good we can apply the parameters with Step9: Measurements infos All the measurement data is in the d variable. We can print it Step10: Or check the measurements duration Step11: Compute background Compute the background using automatic threshold Step12: Burst search and selection Step13: Fret fit Max position of the Kernel Density Estimation (KDE) Step14: Weighted mean of $E$ of each burst Step15: Gaussian fit (no weights) Step16: Gaussian fit (using burst size as weights) Step17: Stoichiometry fit Max position of the Kernel Density Estimation (KDE) Step18: The Maximum likelihood fit for a Gaussian population is the mean Step19: Computing the weighted mean and weighted standard deviation we get Step20: Save data to file Step21: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved. Step22: This is just a trick to format the different variables
Python Code: ph_sel_name = "None" data_id = "27d" # data_id = "7d" Explanation: Executed: Mon Mar 27 11:36:54 2017 Duration: 8 seconds. usALEX-5samples - Template This notebook is executed through 8-spots paper analysis. For a direct execution, uncomment the cell below. End of explanation from fretbursts import * init_notebook() from IPython.display import display Explanation: Load software and filenames definitions End of explanation data_dir = './data/singlespot/' import os data_dir = os.path.abspath(data_dir) + '/' assert os.path.exists(data_dir), "Path '%s' does not exist." % data_dir Explanation: Data folder: End of explanation from glob import glob file_list = sorted(f for f in glob(data_dir + '*.hdf5') if '_BKG' not in f) ## Selection for POLIMI 2012-11-26 datatset labels = ['17d', '27d', '7d', '12d', '22d'] files_dict = {lab: fname for lab, fname in zip(labels, file_list)} files_dict data_id Explanation: List of data files: End of explanation d = loader.photon_hdf5(filename=files_dict[data_id]) Explanation: Data load Initial loading of the data: End of explanation d.ph_times_t, d.det_t Explanation: Laser alternation selection At this point we have only the timestamps and the detector numbers: End of explanation d.add(det_donor_accept=(0, 1), alex_period=4000, D_ON=(2850, 580), A_ON=(900, 2580), offset=0) Explanation: We need to define some parameters: donor and acceptor ch, excitation period and donor and acceptor excitiations: End of explanation plot_alternation_hist(d) Explanation: We should check if everithing is OK with an alternation histogram: End of explanation loader.alex_apply_period(d) Explanation: If the plot looks good we can apply the parameters with: End of explanation d Explanation: Measurements infos All the measurement data is in the d variable. 
We can print it: End of explanation d.time_max Explanation: Or check the measurements duration: End of explanation d.calc_bg(bg.exp_fit, time_s=60, tail_min_us='auto', F_bg=1.7) dplot(d, timetrace_bg) d.rate_m, d.rate_dd, d.rate_ad, d.rate_aa Explanation: Compute background Compute the background using automatic threshold: End of explanation d_orig = d d = bext.burst_search_and_gate(d, m=10, F=7) assert d.dir_ex == 0 assert d.leakage == 0 print(d.ph_sel) dplot(d, hist_fret); # if data_id in ['7d', '27d']: # ds = d.select_bursts(select_bursts.size, th1=20) # else: # ds = d.select_bursts(select_bursts.size, th1=30) ds = d.select_bursts(select_bursts.size, add_naa=False, th1=30) n_bursts_all = ds.num_bursts[0] def select_and_plot_ES(fret_sel, do_sel): ds_fret= ds.select_bursts(select_bursts.ES, **fret_sel) ds_do = ds.select_bursts(select_bursts.ES, **do_sel) bpl.plot_ES_selection(ax, **fret_sel) bpl.plot_ES_selection(ax, **do_sel) return ds_fret, ds_do ax = dplot(ds, hist2d_alex, S_max_norm=2, scatter_alpha=0.1) if data_id == '7d': fret_sel = dict(E1=0.60, E2=1.2, S1=0.2, S2=0.9, rect=False) do_sel = dict(E1=-0.2, E2=0.5, S1=0.8, S2=2, rect=True) ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel) elif data_id == '12d': fret_sel = dict(E1=0.30,E2=1.2,S1=0.131,S2=0.9, rect=False) do_sel = dict(E1=-0.4, E2=0.4, S1=0.8, S2=2, rect=False) ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel) elif data_id == '17d': fret_sel = dict(E1=0.01, E2=0.98, S1=0.14, S2=0.88, rect=False) do_sel = dict(E1=-0.4, E2=0.4, S1=0.80, S2=2, rect=False) ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel) elif data_id == '22d': fret_sel = dict(E1=-0.16, E2=0.6, S1=0.2, S2=0.80, rect=False) do_sel = dict(E1=-0.2, E2=0.4, S1=0.85, S2=2, rect=True) ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel) elif data_id == '27d': fret_sel = dict(E1=-0.1, E2=0.5, S1=0.2, S2=0.82, rect=False) do_sel = dict(E1=-0.2, E2=0.4, S1=0.88, S2=2, rect=True) ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel) bandwidth = 0.03 n_bursts_fret = ds_fret.num_bursts[0] n_bursts_fret dplot(ds_fret, hist2d_alex, scatter_alpha=0.1); nt_th1 = 50 dplot(ds_fret, hist_size, which='all', add_naa=False) xlim(-0, 250) plt.axvline(nt_th1) Th_nt = np.arange(35, 120) nt_th = np.zeros(Th_nt.size) for i, th in enumerate(Th_nt): ds_nt = ds_fret.select_bursts(select_bursts.size, th1=th) nt_th[i] = (ds_nt.nd[0] + ds_nt.na[0]).mean() - th plt.figure() plot(Th_nt, nt_th) plt.axvline(nt_th1) nt_mean = nt_th[np.where(Th_nt == nt_th1)][0] nt_mean Explanation: Burst search and selection End of explanation E_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, bandwidth=bandwidth, weights='size') E_fitter = ds_fret.E_fitter E_fitter.histogram(bins=np.r_[-0.1:1.1:0.03]) E_fitter.fit_histogram(mfit.factory_gaussian(center=0.5)) E_fitter.fit_res[0].params.pretty_print() fig, ax = plt.subplots(1, 2, figsize=(14, 4.5)) mfit.plot_mfit(E_fitter, ax=ax[0]) mfit.plot_mfit(E_fitter, plot_model=False, plot_kde=True, ax=ax[1]) print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, E_pr_fret_kde*100)) display(E_fitter.params*100) # ds_fret.add(E_fitter = E_fitter) # dplot(ds_fret, hist_fret_kde, weights='size', bins=np.r_[-0.2:1.2:bandwidth], bandwidth=bandwidth); # plt.axvline(E_pr_fret_kde, ls='--', color='r') # print(ds_fret.ph_sel, E_pr_fret_kde) Explanation: Fret fit Max position of the Kernel Density Estimation (KDE): End of explanation ds_fret.fit_E_m(weights='size') Explanation: Weighted mean of $E$ of each burst: End of explanation 
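Added cross-check (not part of the original notebook): the size-weighted mean of E can also be computed by hand, using the burst counts as weights. This assumes ds_fret exposes the per-burst arrays E[0], nd[0] and na[0], as used elsewhere in this notebook.
E_bursts = ds_fret.E[0]
burst_sizes = ds_fret.nd[0] + ds_fret.na[0]
E_weighted_mean = np.dot(burst_sizes, E_bursts) / burst_sizes.sum()
E_weighted_mean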
ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.03], weights=None) Explanation: Gaussian fit (no weights): End of explanation ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.005], weights='size') E_kde_w = E_fitter.kde_max_pos[0] E_gauss_w = E_fitter.params.loc[0, 'center'] E_gauss_w_sig = E_fitter.params.loc[0, 'sigma'] E_gauss_w_err = float(E_gauss_w_sig/np.sqrt(ds_fret.num_bursts[0])) E_gauss_w_fiterr = E_fitter.fit_res[0].params['center'].stderr E_kde_w, E_gauss_w, E_gauss_w_sig, E_gauss_w_err, E_gauss_w_fiterr Explanation: Gaussian fit (using burst size as weights): End of explanation S_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, burst_data='S', bandwidth=0.03) #weights='size', add_naa=True) S_fitter = ds_fret.S_fitter S_fitter.histogram(bins=np.r_[-0.1:1.1:0.03]) S_fitter.fit_histogram(mfit.factory_gaussian(), center=0.5) fig, ax = plt.subplots(1, 2, figsize=(14, 4.5)) mfit.plot_mfit(S_fitter, ax=ax[0]) mfit.plot_mfit(S_fitter, plot_model=False, plot_kde=True, ax=ax[1]) print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, S_pr_fret_kde*100)) display(S_fitter.params*100) S_kde = S_fitter.kde_max_pos[0] S_gauss = S_fitter.params.loc[0, 'center'] S_gauss_sig = S_fitter.params.loc[0, 'sigma'] S_gauss_err = float(S_gauss_sig/np.sqrt(ds_fret.num_bursts[0])) S_gauss_fiterr = S_fitter.fit_res[0].params['center'].stderr S_kde, S_gauss, S_gauss_sig, S_gauss_err, S_gauss_fiterr Explanation: Stoichiometry fit Max position of the Kernel Density Estimation (KDE): End of explanation S = ds_fret.S[0] S_ml_fit = (S.mean(), S.std()) S_ml_fit Explanation: The Maximum likelihood fit for a Gaussian population is the mean: End of explanation weights = bl.fret_fit.get_weights(ds_fret.nd[0], ds_fret.na[0], weights='size', naa=ds_fret.naa[0], gamma=1.) S_mean = np.dot(weights, S)/weights.sum() S_std_dev = np.sqrt( np.dot(weights, (S - S_mean)**2)/weights.sum()) S_wmean_fit = [S_mean, S_std_dev] S_wmean_fit Explanation: Computing the weighted mean and weighted standard deviation we get: End of explanation sample = data_id Explanation: Save data to file End of explanation variables = ('sample n_bursts_all n_bursts_fret ' 'E_kde_w E_gauss_w E_gauss_w_sig E_gauss_w_err E_gauss_w_fiterr ' 'S_kde S_gauss S_gauss_sig S_gauss_err S_gauss_fiterr ' 'nt_mean\n') Explanation: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved. End of explanation variables_csv = variables.replace(' ', ',') fmt_float = '{%s:.6f}' fmt_int = '{%s:d}' fmt_str = '{%s}' fmt_dict = {**{'sample': fmt_str}, **{k: fmt_int for k in variables.split() if k.startswith('n_bursts')}} var_dict = {name: eval(name) for name in variables.split()} var_fmt = ', '.join([fmt_dict.get(name, fmt_float) % name for name in variables.split()]) + '\n' data_str = var_fmt.format(**var_dict) print(variables_csv) print(data_str) # NOTE: The file name should be the notebook name but with .csv extension with open('results/usALEX-5samples-PR-raw-AND-gate.csv', 'a') as f: f.seek(0, 2) if f.tell() == 0: f.write(variables_csv) f.write(data_str) Explanation: This is just a trick to format the different variables: End of explanation
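Added sketch (not part of the original notebook): after appending the formatted row, the accumulated results file can be read back as a quick sanity check, assuming pandas is available in this environment.
import pandas as pd
results = pd.read_csv('results/usALEX-5samples-PR-raw-AND-gate.csv')
results.tail()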
6,482
Given the following text description, write Python code to implement the functionality described below step by step Description: Copyright 2019 Google LLC. Licensed under the Apache License, Version 2.0 (the "License"); Step1: Black Scholes Step2: Black Scholes pricing and implied volatility usage Here we see how to price vanilla options in the Black Scholes framework using the library. Semantics of the interface If $S$ is the spot price of an asset, $r$ the risk free rate, $T$ the time to expiry, $\sigma$ the volatility. The price of a call $C$ under Black Scholes model exhibits the following relationship (suppressing unusued notation) Step3: We now show how to invert the Black Scholes pricing model in order to recover the volatility which generated a given market price under a particular term structure. Again, the implied volatility interface operates on batches of options, with each index of the arrays corresponding to an independent problem to solve. Step5: Which should show that implied_vols is very close to the volatilities used to generate the market prices. Here we provided initial starting positions, however, by default tff will chose an adaptive initialisation position as discussed below. Black Scholes implied volatility convergence region We now look at some charts which provide a basic illustration of the convergence region of the implemented root finding method. The library provides an implied volatility root finding method. If not provided with an initial starting point, a starting point will be found using the Radiocic-Polya approximation [1] to the implied volatility. This section illustrates both call styles and the comparitive advantage of using targeted initialisation. In this example Step6: Where the grey values represent nans in the grid. Note that the bottom left corner of each image lies outside the bounds where inversion should be possible. The pattern of nan values for different values of a fixed initialisation strategy will be different (rerun the colab to see). Black Scholes implied volatility initialisation strategy accuracy comparison We can also consider the median absolute error for fixed versus Radiocic-Polya initialisation of the root finder. We consider a clipped grid looking at performance away from the boundaries where extreme values or nans might occur.
Python Code: #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. Explanation: Copyright 2019 Google LLC. Licensed under the Apache License, Version 2.0 (the "License"); End of explanation #@title Upgrade to TensorFlow 2.1+ !pip install --upgrade tensorflow #@title Install TF Quant Finance !pip install tf-quant-finance #@title Imports import matplotlib.pyplot as plt import numpy as np import tensorflow as tf tf.compat.v1.enable_eager_execution() import tf_quant_finance as tff option_price = tff.black_scholes.option_price implied_vol = tff.black_scholes.implied_vol from IPython.core.pylabtools import figsize figsize(21, 14) # better graph size for Colab Explanation: Black Scholes: Price and Implied Vol in TFF <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.google.com/url?q=https://github.com/google/tf-quant-finance/blob/master/tf_quant_finance/examples/jupyter_notebooks/Black_Scholes_Price_and_Implied_Vol.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/google/tf-quant-finance/blob/master/tf_quant_finance/examples/jupyter_notebooks/Black_Scholes_Price_and_Implied_Vol.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> </table> End of explanation # Calculate discount factors (e^-rT) rate = 0.05 expiries = np.array([0.5, 1.0, 2.0, 1.3]) discount_factors = np.exp(-rate * expiries) # Current value of assets. spots = np.array([0.9, 1.0, 1.1, 0.9]) # Forward value of assets at expiry. forwards = spots / discount_factors # Strike prices given by: strikes = np.array([1.0, 2.0, 1.0, 0.5]) # Indicate whether options are call (True) or put (False) is_call_options = np.array([True, True, False, False]) # The volatilites at which the options are to be priced. volatilities = np.array([0.7, 1.1, 2.0, 0.5]) # Calculate the prices given the volatilities and term structure. prices = option_price( volatilities=volatilities, strikes=strikes, expiries=expiries, forwards=forwards, discount_factors=discount_factors, is_call_options=is_call_options) prices Explanation: Black Scholes pricing and implied volatility usage Here we see how to price vanilla options in the Black Scholes framework using the library. Semantics of the interface If $S$ is the spot price of an asset, $r$ the risk free rate, $T$ the time to expiry, $\sigma$ the volatility. The price of a call $C$ under Black Scholes model exhibits the following relationship (suppressing unusued notation): $C(S, r) = e^{-rT} C(e^{rT}S, 0)$ Where $e^{-rT}$ is the discount factor, and $e^{rT}S_t$ the forward price of the asset to expiry. The tff's interface is framed in terms of forward prices and discount factors (rather than spot prices and risk free rates). This corresponds to the right hand side of the above relationship. 
Parallelism Note that the library allows pricing of options in parallel: each argument (such as the strikes) is an array and each index corresponds to an independent option to price. For example, this allows the simultaneous pricing of the same option with different expiry dates, or strike prices or both. End of explanation # Initial positions for finding implied vol. initial_volatilities = np.array([2.0, 0.5, 2.0, 0.5]) # Identifier whether the option is call (True) or put (False) is_call_options = np.array([True, True, False, False]) # Find the implied vols beginning at initial_volatilities. implied_vols = implied_vol( prices=prices, strikes=strikes, expiries=expiries, forwards=forwards, discount_factors=discount_factors, is_call_options=is_call_options, initial_volatilities=initial_volatilities, validate_args=True, tolerance=1e-9, max_iterations=200, name=None, dtype=None) implied_vols Explanation: We now show how to invert the Black Scholes pricing model in order to recover the volatility which generated a given market price under a particular term structure. Again, the implied volatility interface operates on batches of options, with each index of the arrays corresponding to an independent problem to solve. End of explanation #@title Example data on a grid. def grid_data(strike_vec, vol_vec, dtype=np.float64): Construct dummy data with known ground truth. For a grid of known strikes by volatilities, return the price. Assumes the forward prices and expiries are fixed at unity. Args: strikes: a vector of strike prices from which to form the grid. volatilities: a vector of volatilities from which to form the grid. dtype: a numpy datatype for the element values of returned arrays. Returns: (forwards, strikes, expiries, true_volatilities, prices) all of which are identically shaped numpy arrays. nstrikes = len(strike_vec) nvolatilities = len(vol_vec) vol_ones = np.matrix(np.ones((1, nvolatilities))) strike_ones = np.matrix(np.ones((nstrikes, 1))) strikes = np.array(np.matrix(strike_vec).T * vol_ones, dtype=dtype) volatilities = np.array(strike_ones * np.matrix(vol_vec), dtype=dtype) expiries = np.ones_like(strikes, dtype=dtype) forwards = np.ones_like(strikes, dtype=dtype) initials = np.ones_like(strikes, dtype=dtype) prices = option_price(volatilities=volatilities, strikes=strikes, expiries=expiries, forwards=forwards, dtype=tf.float64) return (forwards, strikes, expiries, volatilities, initials, prices) # Build a 1000 x 1000 grid of options find the implied volatilities of. nstrikes = 1000 nvolatilities = 1000 strike_vec = np.linspace(0.0001, 5.0, nstrikes) vol_vec = np.linspace(0.0001, 5.0, nvolatilities) max_iterations = 50 grid = grid_data(strike_vec, vol_vec) forwards0, strikes0, expiries0, volatilities0, initials0, prices0 = grid initials0 = discounts0 = signs0 = np.ones_like(prices0) # Implied volitilities, starting the root finder at 1. implied_vols_fix = implied_vol( prices=prices0, strikes=strikes0, expiries=expiries0, forwards=forwards0, initial_volatilities=initials0, validate_args=False, tolerance=1e-8, max_iterations=max_iterations) # Implied vols starting the root finder at the Radiocic-Polya approximation. 
implied_vols_polya = implied_vol( prices=prices0, strikes=strikes0, expiries=expiries0, forwards=forwards0, validate_args=False, tolerance=1e-8, max_iterations=max_iterations) #@title Visualisation of accuracy plt.clf() thinner = 100 fig, _axs = plt.subplots(nrows=1, ncols=2) fig.subplots_adjust(hspace=0.3) axs = _axs.flatten() implied_vols = [implied_vols_fix, implied_vols_polya] titles = ["Fixed initialisation implied vol minus true vol", "Radiocic-Polya initialised implied vol minus true vol"] vmin = np.min(map(np.min, implied_vols)) vmax = np.max(map(np.max, implied_vols)) images = [] for i in range(2): _title = axs[i].set_title(titles[i]) _title.set_position([.5, 1.03]) im = axs[i].imshow(implied_vols[i] - volatilities0, origin="lower", interpolation="none", cmap="seismic", vmin=-1.0, vmax=1.0) images.append(im) axs[i].set_xticks(np.arange(0, len(vol_vec), thinner)) axs[i].set_yticks(np.arange(0, len(strike_vec), thinner)) axs[i].set_xticklabels(np.round(vol_vec[0:len(vol_vec):thinner], 3)) axs[i].set_yticklabels(np.round(strike_vec[0:len(strike_vec):thinner], 3)) plt.colorbar(im, ax=axs[i], fraction=0.046, pad=0.00) axs[i].set_ylabel('Strike') axs[i].set_xlabel('True vol') plt.show() pass Explanation: Which should show that implied_vols is very close to the volatilities used to generate the market prices. Here we provided initial starting positions, however, by default tff will chose an adaptive initialisation position as discussed below. Black Scholes implied volatility convergence region We now look at some charts which provide a basic illustration of the convergence region of the implemented root finding method. The library provides an implied volatility root finding method. If not provided with an initial starting point, a starting point will be found using the Radiocic-Polya approximation [1] to the implied volatility. This section illustrates both call styles and the comparitive advantage of using targeted initialisation. In this example: Forward prices are fixed at 1. Strike prices are from uniform grid on (0, 5). Expiries are fixed at 1. Volatilities are from a uniform grid on (0, 5). Fixed initial volatilities (where used) are 1. Option prices were computed by tff.black_scholes.option_price on the other data. Discount factors are 1. [1] Dan Stefanica and Rados Radoicic. An explicit implied volatility formula. International Journal of Theoretical and Applied Finance. Vol. 20, no. 7, 2017. End of explanation # Indices for selecting the middle of the grid. vol_slice = np.arange(int(0.25*len(vol_vec)), int(0.75*len(vol_vec))) strike_slice = np.arange(int(0.25*len(strike_vec)), int(0.75*len(strike_vec))) error_fix = implied_vols_fix.numpy() - volatilities0 error_fix_sub = [error_fix[i, j] for i, j in zip(strike_slice, vol_slice)] # Calculate the median absolute error in the central portion of the the grid # for the fixed initialisation. median_error_fix = np.median( np.abs(error_fix_sub) ) median_error_fix error_polya = implied_vols_polya.numpy() - volatilities0 error_polya_sub = [error_polya[i, j] for i, j in zip(strike_slice, vol_slice)] # Calculate the median absolute error in the central portion of the the grid # for the Radiocic-Polya approximation. median_error_polya = np.median( np.abs(error_polya_sub) ) median_error_polya median_error_fix / median_error_polya Explanation: Where the grey values represent nans in the grid. Note that the bottom left corner of each image lies outside the bounds where inversion should be possible. 
The pattern of nan values differs for different choices of the fixed initialisation value (rerun the colab to see). Black Scholes implied volatility initialisation strategy accuracy comparison We can also consider the median absolute error for fixed versus Radiocic-Polya initialisation of the root finder. We consider a clipped grid, looking at performance away from the boundaries where extreme values or nans might occur. End of explanation
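Added cross-check (not part of the original notebook): the first option priced earlier in this section can be re-priced with a plain NumPy/SciPy implementation of the Black formula in its forward form, C = D (F N(d1) - K N(d2)). It reuses the forwards, strikes, volatilities, expiries and discount_factors arrays defined above and should closely match prices[0].
from scipy.stats import norm

def black_call_price(forward, strike, vol, expiry, discount_factor):
    # total standard deviation over the life of the option
    stddev = vol * np.sqrt(expiry)
    d1 = (np.log(forward / strike) + 0.5 * stddev ** 2) / stddev
    d2 = d1 - stddev
    return discount_factor * (forward * norm.cdf(d1) - strike * norm.cdf(d2))

black_call_price(forwards[0], strikes[0], volatilities[0], expiries[0], discount_factors[0])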
6,483
Given the following text description, write Python code to implement the functionality described below step by step Description: Virus and drug interactions in the human body NOTE Step1: Use pandas.read_csv() to Load the Data The data file we'll use is in a file format called CSV, which stands for comma-separated values. It's a commonly used format for storing 2-dimensional data, and programs like Microsoft Excel can import and export .CSV files. The code below will use the read_csv() function from the pandas data analysis library to load the CSV file you need from the web, then store the data as a variable called hiv_data. Step2: Examine the data Now, use the Pandas analysis tools you've used in the past to analyze and visualize the plots! Some useful analysis methods include .describe(), .head(), .tail(), .mean(),.min(),.max(). Some useful plotting functions include .plot.scatter() and .plot.histogram(). In particular, make sure to show the time evolution of the viral load (i.e., viral load as a function of time). Step3: Make a simple fit to the experimental data Now we're going to try some simple fits to the viral load as a function of time. It's fairly common for processes in nature to behave in an exponential or power-law way, so we'll try to create fits for two simple models Step4: Plot the model and data together Try to find values for the model parameters that fit well. Adjust the values of $L_{V0,p}$, $L_{V0,e}$, $\tau_p$, $\tau_e$, and $\alpha$ to obtain the best fit to the experimental data. As in the previous section, adjust the plot limits and scales as necessary so you can identify which model (and set of model parameters) provides the best fit? Step5: Question Step7: Assignment wrapup Please fill out the form that appears when you run the code below. You must completely fill this out in order to receive credit for the assignment!
Python Code: # some code to set up the problem. # Make plots inline %matplotlib inline # Make inline plots vector graphics instead of raster graphics from IPython.display import set_matplotlib_formats set_matplotlib_formats('pdf', 'svg') # import modules for plotting and data analysis import matplotlib.pyplot as plt import numpy as np import pandas Explanation: Virus and drug interactions in the human body NOTE: THIS IS A TWO-DAY PROJECT! We don't expect you to get it all done in one day! Names of group members Work in pairs, and put the names of both people in your group here! Goals of this assignment The main goal of this assignment is to model a biological process (namely, the competition between viruses and the human body's immune system) in a mathematical way, and in doing so reproduce an experimental dataset. More specifically: To develop an intuition about how equation-based models work To understand how to evaluate models by plotting them together with experimental data To practice predicting what the effect will be when we change the parameters of a model To learn how we can iterate to improve the fit of a model Our project - working with data and models Some background: In this project we explore a model of the viral load—the number of virions in the blood of a patient infected with HIV—after the administration of an antiretroviral drug. Viruses multiply in more than one way. One of the most common is called the Lytic Cycle, and the other is the Lysogenic Cycle. Both cycles are similar in that the virus takes cells hostage and use the cell's resources to multiply, making many copies of itself. Once enough new viruses are produced inside the cell it bursts, and the newly-created viruses then are released into the bloodstream to carry on the process and search for new host cells to invade. Antiviral drugs behave differently than antibiotics - rather than directly destroying the virus population in a patient, they instead generally inhibit the creation of new viruses by preventing viruses from entering target cells, by preventing the viruses from synthesizing new viruses once they have invaded new cells, or by preventing the release of newly-created viruses from the host cell. This project: The first part of this project will involve visualizing and fitting a simple model to experimental measurements of the HIV viral load in a person's body after they are administered, and the second part will involve creating a more sophisticated mathematical model of the biological processes involved, and comparing it to the experimental data. Loading and Examining Experimental Data What is this data? We're going to use experimental data of actual viral loads provided by Kindler and Nelson (2015). They write: File HIVseries.mat contains variable "a" with two columns of data. The first is the time in days since administration of a treatment to an HIV positive patient; the second contains the concentration of virus in that patient's blood, in arbitrary units. HIVseries.csv and HIVseries.npy contain the same data in the same format. HIVseries.npz contains the same data in two separate arrays called time_in_days and viral_load. Data from A. Perelson. Modelling viral and immune system dynamics. Nature Revs. Immunol. (2002) vol. 2 (1) pp. 28--36 (Box 1). So, to summarize, the dataset hiv_data has 2 columns: time_in_days is the number of days since an HIV-positive patient received a treatment. viral_load is the concentraiton of the virus in that patients blood, in arbitrary units. 
In the cells below, we're going to load, examine, and visualize this data and make a simple model to describe it. End of explanation # Loading the data in a data frame hiv_data = pandas.read_csv( "https://raw.githubusercontent.com/ComputationalModeling/IPML-Data/master/01HIVseries/HIVseries.csv", header = None, names = ["time_in_days", "viral_load"] ) Explanation: Use pandas.read_csv() to Load the Data The data file we'll use is in a file format called CSV, which stands for comma-separated values. It's a commonly used format for storing 2-dimensional data, and programs like Microsoft Excel can import and export .CSV files. The code below will use the read_csv() function from the pandas data analysis library to load the CSV file you need from the web, then store the data as a variable called hiv_data. End of explanation # Put your code here, and add additional cells as necessary. Explanation: Examine the data Now, use the Pandas analysis tools you've used in the past to analyze and visualize the plots! Some useful analysis methods include .describe(), .head(), .tail(), .mean(),.min(),.max(). Some useful plotting functions include .plot.scatter() and .plot.histogram(). In particular, make sure to show the time evolution of the viral load (i.e., viral load as a function of time). End of explanation # Put your code here, and add additional cells as necessary. Explanation: Make a simple fit to the experimental data Now we're going to try some simple fits to the viral load as a function of time. It's fairly common for processes in nature to behave in an exponential or power-law way, so we'll try to create fits for two simple models: Model 1 (power law): $L(t){V,pow} = L{V0,p} (t/\tau_p)^{-\alpha}$ Model 2 (exponential decay): $L(t){V,exp} = L{V0,e} e^{-t/\tau_e}$ In these models, the values $L_{V0,p}$, $L_{V0,e}$, $\tau_p$, $\tau_e$, and $\alpha$ are constants describing different parts of the model's behavior (more on this later). Use np.linspace() to create an array of times (at least 100, going from t=0 to t=10 days), and then create two additional arrays of values corresponding to Models 1 and 2 above. Plot them both on the same graph. Note that you can plot multiple datasets together by repeated calls to plt.plot(), or alternately by giving multiple sets of x,y,format data to plt.plot(). For example: plt.plot(x1vals,y1vals,'r--') plt.plot(x2vals,y2vals,'go') and plt.plot(x1vals,y1vals,'r--',x2vals,y2vals,'go') will produce the same plot. Note also that you can manipulate the plot's axes and limits using the pyplot xscale(), yscale(), xlim(), and ylim() methods. Making one or more of the axis scales a log axis (i.e., calling plt.yscale('log')) may make it easier to see similarities or differences between the models. As a reminder: if you have a question about a particular python method or function, you can learn about that by typing a question mark after it. For example, np.linspace? will give you information about that method. End of explanation # Put your code here, and add additional cells as necessary. Explanation: Plot the model and data together Try to find values for the model parameters that fit well. Adjust the values of $L_{V0,p}$, $L_{V0,e}$, $\tau_p$, $\tau_e$, and $\alpha$ to obtain the best fit to the experimental data. As in the previous section, adjust the plot limits and scales as necessary so you can identify which model (and set of model parameters) provides the best fit? End of explanation # put your program here, and add additional cells as necessary! 
Explanation: Question: Which model does a better job of describing the experimental data? And, does it succeed equally well at both early and late times in the data? And, what do you think the various quantities in the model ($L_{V0,p}$, $L_{V0,e}$, $\tau_p$, $\tau_e$, and $\alpha$) represent? put your answer here! Creating a mathematical model describing the system In this section, we're going to use our understanding of the biological and biochemical processes that are at work in this system to make a model describing the system's evolution over time. Some background knowledge we need for this model In general, we can think of what happenes to an infected patient that has been administered an antiviral drug using a simple model. The key points are: Viruses multiply rapidly if uninhibited by the body's immune system and infect cells at a rate that is proportional to the number of virions (virus particles), $N_v$, that are in the bloodstream. In other words, $\frac{dN_I}{dt}$, the rate at which the number of cells that are infected ($N_I$) changes, depends on $N_v$ and the time scale for multiplication $t_{mul}$. $N_v$ in turn depends on the number of infected cells and the number of virions produced per infected cell, $\gamma$. As antiviral drugs are administered at a constant rate, it takes some amount of time $T_{crit}$ for the amount of drug in the bloodstream to reach a high enough level that it suppresses the formation of new viruses. (This time varies from patient to patient, but is typically one day to a few days.) After the drug takes effect, we can assume that cell infection immediately stops. After infection stops, the number of infected cells $N_I$ decreases at a rate $\frac{dN_I}{dt} = -N_{I}/t_{rel}$, where $t_{rel}$ is the time scale on which infected cells release virions into the bloodstream and die. Once cells can no longer be infected, virions are released into the bloodstream through the death of previously infected cells. The rate at which these virions are released behaves as $\frac{dN_v}{dt} = \gamma N_{I}/t_{rel}$. The body clears virions out of the body at a rate that depends on the amount of virions that are in the bloodstream, $\frac{dN_v}{dt} = -N_v/t_{clr}$ ($N_v$ is the number of virions in the bloodstream and $t_{clr}$ is the time scale on which virions are cleared from the body). A table of parameters in this model is here, for your convenience: | Parameter | Meaning | | --------- | ------- | |$N_v$ | Number of virions in bloodstream| |$N_I$|Number of infected cells| |$t_{mul}$|Timescale for virus multiplication| |$\gamma$|Number of virions produced per infected cell| |$T_{crit}$ |Amount of time it takes for drugs to stop virus reproduction| |$t_{rel}$| Timescale on which infected cells release virions| |$t_{clr}$| Timescale on which virions are cleared from the body| Your mission You have a mission that will take place in three parts: Using the information above and your whiteboards, create a mathematical model for how the viral load in the bloodstream, $N_v$, varies as a function of time. Don't use numbers - just symbols! After you are happy with your model (and after you've talked to one of the instructors about it), figure out how to implement it as a computer program. Do so below! Compare the shape of the plot created by your model with the experimental data from earlier in this project. How do they compare? (Suggestion: assume that all of the time scales in your model are roughly equal, and roughly a day, and vary them from there.) 
End of explanation from IPython.display import HTML HTML( <iframe src="https://goo.gl/forms/cHBqN8XXeOOTt0K32?embedded=true" width="80%" height="1200px" frameborder="0" marginheight="0" marginwidth="0"> Loading... </iframe> ) Explanation: Assignment wrapup Please fill out the form that appears when you run the code below. You must completely fill this out in order to receive credit for the assignment! End of explanation
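Added sketch (not part of the original assignment, which deliberately leaves its code cells empty): one possible way to start comparing the two empirical models described above against the data. The parameter values are illustrative guesses rather than fitted results; hiv_data, np and plt are assumed to be loaded as in the setup cells.
t = np.linspace(0.01, 10, 200)            # days; avoid t = 0 for the power law
L_pow = 6e4 * (t / 1.0) ** -0.5           # model 1: power law with guessed L_V0,p, tau_p, alpha
L_exp = 1.2e5 * np.exp(-t / 2.0)          # model 2: exponential decay with guessed L_V0,e, tau_e

plt.plot(hiv_data['time_in_days'], hiv_data['viral_load'], 'ko', label='data')
plt.plot(t, L_pow, 'r--', label='power law')
plt.plot(t, L_exp, 'b-', label='exponential decay')
plt.yscale('log')
plt.xlabel('time since treatment (days)')
plt.ylabel('viral load (arbitrary units)')
plt.legend()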
6,484
Given the following text description, write Python code to implement the functionality described below step by step Description: Writing idiomatic python code the Zen of Python Step1: PEP8 Let's have a look Step5: Thuthiness is defined by __bool__() method Step6: What's the pythonic way? Step7: Don't be this guy Step8: Testing for None python if something is None Step9: Sometimes can be a bit slower if used inside a long loop. Checking for type Step10: How to check if the variable is a list or dictionary? Step11: Checking if an object is iterable? Step12: Similar way, by checking the methods Step13: Another way Step14: won't work, because Python does not automatically convert to str Step15: Pythonic Step16: using dictionaries Step17: f-strings (Python 3.6+) Just pulling variables from the namespace! Step18: References Write Pythonic Code Like a Seasoned Developer
Python Code: import this Explanation: Writing idiomatic python code the Zen of Python End of explanation if []: print('this is false') False # false is false [] # empty lists {} # empty dictionaries or sets "" # empty strings 0 # zero integers 0.00000 # zero floats None # None (but not zero) Explanation: PEP8 Let's have a look: https://www.python.org/dev/peps/pep-0008/ Static checkers in editors and GitHub Naming conventions Names should be descriptive! Rule of thumb: If you need a comment to describe it, you probably need a better name. ```python import my_module_name from foo import bar constant, not changed at runtime MODEL_GRID = 'cartesian' def calculate_something(data): Compute some parameters # ... return result class ModelDomain(object): Very long description... Attributes: ... Parameters: ... def init(self, name='', description='', bounds=None, mask_bounds=None): self.name = name self.description = description # something else... def mask_var(self, data): Mask the data of the given variable outside the region. return data.where(self.mask_bounds, data) class MyDescriptiveError(Exception): pass ``` A handful of foundational concepts Truthiness End of explanation class MyClass: def __init__(self, data): self.data = data def __bool__(self): if len(self.data) > 0: return True else: return False foo = MyClass(data=[1, 2, 3]) if foo: print('it contains some data') else: print('no data') Explanation: Thuthiness is defined by __bool__() method End of explanation vrbl = True if vrbl: # NO: if condtion == True print('do something') Explanation: What's the pythonic way? End of explanation def test_if_greater_than_ten(x): return True if x>10 else False test_if_greater_than_ten(11) def fun(): print('blah') x = fun x() type(x) callable(x) isinstance(12345, (int, float)) Explanation: Don't be this guy: <img src="https://imgur.com/24DSLiA.png"> End of explanation my_axis = 'x' if my_axis in ('x', 'y'): # Instead of writing like that: # if my_axis == 'x' or my_axis == 'y' print('horizontal') elif my_axis == 'z': print('vertical') Explanation: Testing for None python if something is None: print('no results') else: print('here are some results') negation: python if something is not None: print('Option A') else: print('Option B') Multiple tests against a single variable End of explanation a = [1,2,3] Explanation: Sometimes can be a bit slower if used inside a long loop. Checking for type End of explanation import numpy as np a = np.arange(10) isinstance(a, np.ndarray) Explanation: How to check if the variable is a list or dictionary? End of explanation the_variable = [1, 2, 3, 4] another_variable = "This is my string. There are many like it, but this one is mine." and_another_variable = 1000000000000 for i in another_variable[:10]: # iterate over the first 10 elements and print them print(i) import collections if isinstance(1234563645, collections.Iterable): # iterable print('It is iterable') else: # not iterable print('It is NOT iterable') Explanation: Checking if an object is iterable? 
End of explanation hasattr(the_variable, '__iter__') Explanation: Similar way, by checking the methods: End of explanation day = 30 month = 'October' Explanation: Another way: duck-typing style Pythonic programming style that determines an object's type by inspection of its method or attribute signature rather than by explicit relationship to some type object ("If it looks like a duck and quacks like a duck, it must be a duck.") By emphasizing interfaces rather than specific types, well-designed code improves its flexibility by allowing polymorphic substitution. Duck-typing avoids tests using type() or isinstance(). Instead, it typically employs the EAFP (Easier to Ask Forgiveness than Permission) style of programming. python try: for i in the_variable: # ... except TypeError: print(the_variable, 'is not iterable') Source: https://stackoverflow.com/a/1952481/5365232 Modern string formatting End of explanation print('Today is ' + month + ', ' + str(day)) Explanation: won't work, because Python does not automatically convert to str: python print('Today is ' + month + ', ' + day) Works, but not pythonic: End of explanation print('Today is {}, {}'.format(month, day)) print('Today is {1}, {0}'.format(month, day)) print('Today is {1}, {0}. Tomorrow will be still {0}'.format(month, day)) print('Today is {m}, {d}. Tomorrow will be still {m}. And again: {d}'.format(m=month, d=day)) Explanation: Pythonic: End of explanation data = {'dow': 'Monday', 'location': 'UEA', 'who': 'Python Group'} print('Today is {dow}. We are at {location}.'.format(**data)) Explanation: using dictionaries End of explanation print(f'Today is {day}th. The month is {month}') # print(f'Today is {data["dow"]}. We are at {data["location"]}') Explanation: f-strings (Python 3.6+) Just pulling variables from the namespace! End of explanation HTML(html) Explanation: References Write Pythonic Code Like a Seasoned Developer End of explanation
6,485
Given the following text description, write Python code to implement the functionality described below step by step Description: Required Step2: TODO by Ruxi Feb 8, 2016 Tasks install jekyll create js folder make webapps.html Version control Saving
Python Code: import os.path, gitpath #pip install git+'https://github.com/ruxi/python-gitpath.git' os.chdir(gitpath.root()) # changes path to .git root #os.getcwd() #check current work directory Explanation: Required: End of explanation py_commit_msg = templating py_commit_msg %%bash -s "$py_commit_msg" echo $1 git add --all :/ git commit -a -m "$1" #message from py_commit_msg git push origin master Explanation: TODO by Ruxi Feb 8, 2016 Tasks install jekyll create js folder make webapps.html Version control Saving End of explanation
6,486
Given the following text description, write Python code to implement the functionality described below step by step Description: Evaluating Survival Models The most frequently used evaluation metric of survival models is the concordance index (c index, c statistic). It is a measure of rank correlation between predicted risk scores $\hat{f}$ and observed time points $y$ that is closely related to Kendall’s τ. It is defined as the ratio of correctly ordered (concordant) pairs to comparable pairs. Two samples $i$ and $j$ are comparable if the sample with lower observed time $y$ experienced an event, i.e., if $y_j > y_i$ and $\delta_i = 1$, where $\delta_i$ is a binary event indicator. A comparable pair $(i, j)$ is concordant if the estimated risk $\hat{f}$ by a survival model is higher for subjects with lower survival time, i.e., $\hat{f}_i > \hat{f}_j \land y_j > y_i$, otherwise the pair is discordant. Harrell's estimator of the c index is implemented in concordance_index_censored. While Harrell's concordance index is easy to interpret and compute, it has some shortcomings Step1: Bias of Harrell's Concordance Index Harrell's concordance index is known to be biased upwards if the amount of censoring in the test data is high [1]. Uno et al proposed an alternative estimator of the concordance index that behaves better in such situations. In this section, we are going to apply concordance_index_censored and concordance_index_ipcw to synthetic survival data and compare their results. Simulation Study We are generating a synthetic biomarker by sampling from a standard normal distribution. For a given hazard ratio, we compute the associated (actual) survival time by drawing from an exponential distribution. The censoring times were generated from a uniform independent distribution $\textrm{Uniform}(0,\gamma)$, where we choose $\gamma$ to produce different amounts of censoring. Since Uno's estimator is based on inverse probability of censoring weighting, we need to estimate the probability of being censored at a given time point. This probability needs to be non-zero for all observed time points. Therefore, we restrict the test data to all samples with observed time lower than the maximum event time $\tau$. Usually, one would use the tau argument of concordance_index_ipcw for this, but we apply the selection before to pass identical inputs to concordance_index_censored and concordance_index_ipcw. The estimates of the concordance index are therefore restricted to the interval $[0, \tau]$. Step2: Let us assume a moderate hazard ratio of 2 and generate a small synthetic dataset of 100 samples from which we estimate the concordance index. We repeat this experiment 200 times and plot mean and standard deviation of the difference between the actual (in the absence of censoring) and estimated concordance index. Since the hazard ratio remains constant and only the amount of censoring changes, we would want an estimator for which the difference between the actual and estimated c to remain approximately constant across simulations. Step3: We can observe that estimates are on average below the actual value, except for the highest amount of censoring, where Harrell's c begins overestimating the performance (on average). With such a small dataset, the variance of differences is quite big, so let us increase the amount of data to 1000 and repeat the simulation (this may take some time). 
Step4: Now we can observe that Harrell's c begins to overestimate performance starting with approximately 49% censoring while Uno's c is still underestimating the performance, but is on average very close to the actual performance for large amounts of censoring. For the final experiment, we double the size of the dataset to 2000 samples and repeat the analysis (this may take several minutes to compute). Step5: The trend we observed in the previous simulation is now even more pronounced. Harrell's c is becoming more and more overconfident in the performance of the synthetic marker with increasing amount of censoring, while Uno's c remains stable. In summary, while the difference between concordance_index_ipcw and concordance_index_censored is negligible for small amounts of censoring, when analyzing survival data with moderate to high amounts of censoring, you might want to consider estimating the performance using concordance_index_ipcw instead of concordance_index_censored. Time-dependent Area under the ROC The area under the receiver operating characteristics curve (ROC curve) is a popular performance measure for binary classification task. In the medical domain, it is often used to determine how well estimated risk scores can separate diseased patients (cases) from healthy patients (controls). Given a predicted risk score $\hat{f}$, the ROC curve compares the false positive rate (1 - specificity) against the true positive rate (sensitivity) for each possible value of $\hat{f}$. When extending the ROC curve to continuous outcomes, in particular survival time, a patient’s disease status is typically not fixed and changes over time Step6: Serum creatinine measurements are missing for some patients, therefore we are just going to impute these values with the mean using scikit-learn's SimpleImputer. Step7: Similar to Uno's estimator of the concordance index described above, we need to be a little bit careful when selecting the test data and time points we want to evaluate the ROC at, due to the estimator's dependence on inverse probability of censoring weighting. First, we are going to check whether the observed time of the test data lies within the observed time range of the training data. Step8: When choosing the time points to evaluate the ROC at, it is important to remember to choose the last time point such that the probability of being censored after the last time point is non-zero. In the simulation study above, we set the upper bound to the maximum event time, here we use a more conservative approach by setting the upper bound to the 80% percentile of observed time points, because the censoring rate is quite large at 72.5%. Note that this approach would be appropriate for choosing tau of concordance_index_ipcw too. Step9: We begin by considering individual real-valued features as risk scores without actually fitting a survival model. Hence, we obtain an estimate of how well age, creatinine, kappa FLC, and lambda FLC are able to distinguish cases from controls at each time point. Step10: The plot shows the estimated area under the time-dependent ROC at each time point and the average across all time points as dashed line. We can see that age is overall the most discriminative feature, followed by $\kappa$ and $\lambda$ FLC. That fact that age is the strongest predictor of overall survival in the general population is hardly surprising (we have to die at some point after all). 
More differences become evident when considering time Step11: Next, we fit a Cox proportional hazards model to the training data. Step12: Using the test data, we want to assess how well the model can distinguish survivors from deceased in weekly intervals, up to 6 months after enrollment. Step13: The plot shows that the model is doing moderately well on average with an AUC of ~0.72 (dashed line). However, there is a clear difference in performance between the first and second half of the time range. The performance on the test data increases up to 56 days from enrollment, remains high until 98 days and quickly drops thereafter. Thus, we can conclude that the model is most effective in predicting death in the medium-term. Using Time-dependent Risk Scores The downside of Cox proportional hazards model is that it can only predict a risk score that is independent of time (due to the built-in proportional hazards assumption). Therefore, a single predicted risk score needs to work well for every time point. In contrast, a Random Survival Forest does not have this restriction. So let's fit such a model to the training data. Step14: For prediction, we do not call predict, which returns a time-independent risk score, but call predict_cumulative_hazard_function, which returns a risk function over time for each patient. We obtain the time-dependent risk scores by evaluating each cumulative hazard function at the time points we are interested in. Step15: Now, we can compare the result with the predictive performance of the Cox proportional hazards model from above. Step16: Indeed, the Random Survival Forest performs slightly better on average, mostly due to the better performance in the intervals 25–50 days, and 112–147 days. Above 147 days, it actually is doing worse. This shows that the mean AUC is convenient to assess overall performance, but it can hide interesting characteristics that only become visible when looking at the AUC at individual time points. Time-dependent Brier Score The time-dependent Brier score is an extension of the mean squared error to right censored data. Given a time point $t$, it is defined as Step17: We want to train a model on the training data and assess its discrimination and calibration on the test data. Here, we consider a Random Survival Forest and Cox's proportional hazards model with elastic-net penalty. Step18: First, let's start with discrimination as measured by the concordance index. Step19: The result indicates that both models perform equally well, achieving a concordance index of 0.688, which is significantly better than a random model with 0.5 concordance index. Unfortunately, it doesn't help us to decide which model we should choose. So let's consider the time-dependent Brier score as an alternative, which asses discrimination and calibration. We first need to determine for which time points $t$ we want to compute the Brier score for. We are going to use a data-driven approach here by selecting all time points between the 10% and 90% percentile of observed time points. Step20: This returns 1690 time points, for which we need to estimate the probability of survival for, which is given by the survival function. Thus, we iterate over the predicted survival functions on the test data and evaluate each at the time points from above. Step21: In addition, we want to have a baseline to tell us how much better our models are from random. A random model would simply predict 0.5 every time. 
Step22: Another useful reference is the Kaplan-Meier estimator that does not consider any features Step23: Instead of comparing calibration across all 1690 time points, we'll be using the integrated Brier score (IBS) over all time points, which will give us a single number to compare the models by. Step24: Despite Random Survival Forest and Cox's proportional hazards model performing equally well in terms of discrimination (c-index), there seems to be a notable difference in terms of calibration (IBS), with Cox's proportional hazards model outperforming Random Survival Forest. Using Metrics in Hyper-parameter Search Usually, estimators have hyper-parameters that one wants to optimize. For example, the maximum tree depth for tree-based learners. For this purpose, we can use scikit-learn's GridSearchCV to search for the hyper-parameter configuration that on average works best. By default, estimators' performance will be evaluated in terms of Harrell's concordance index, as implemented in concordance_index_censored. For other metrics, one can wrap an estimator with one of the following classes Step25: To illustrate this, we are going to use the Random Survival Forest and the German Breast Cancer Study Group 2 from above. First, we define that we want to evaluate the performance of each hyper-parameter configuration by 3-fold cross-validation. Step26: Next, we define the set of hyper-parameters to evaluate. Here, we search for the best value for max_depth between 1 and 10 (excluding). Note that we have to prefix max_depth with estimator__, because we are going to wrap the actual RandomSurvivalForest instance with one of the classes above. Step27: Now, we can put all the pieces together and start searching for the best hyper-parameters that maximize concordance_index_ipcw. Step28: The same process applies when optimizing hyper-parameters to maximize cumulative_dynamic_auc. Step29: While as_concordance_index_ipcw_scorer and as_cumulative_dynamic_auc_scorer can be used with any estimator, as_integrated_brier_score_scorer is only available for estimators that provide the predict_survival_function method, which includes RandomSurvivalForest. If available, hyper-parameters that maximize the negative intergrated time-dependent Brier score will be selected, because a lower Brier score indicates better performance. Step30: Finally, we can visualize the results of the grid search and compare the best performing hyper-parameter configurations (marked with a red dot).
Python Code: import numpy as np import matplotlib.pyplot as plt %matplotlib inline import pandas as pd from sklearn.impute import SimpleImputer from sklearn.model_selection import train_test_split from sklearn.pipeline import make_pipeline from sksurv.datasets import load_flchain, load_gbsg2 from sksurv.functions import StepFunction from sksurv.linear_model import CoxPHSurvivalAnalysis, CoxnetSurvivalAnalysis from sksurv.metrics import ( concordance_index_censored, concordance_index_ipcw, cumulative_dynamic_auc, integrated_brier_score, ) from sksurv.nonparametric import kaplan_meier_estimator from sksurv.preprocessing import OneHotEncoder, encode_categorical from sksurv.util import Surv plt.rcParams['figure.figsize'] = [7.2, 4.8] Explanation: Evaluating Survival Models The most frequently used evaluation metric of survival models is the concordance index (c index, c statistic). It is a measure of rank correlation between predicted risk scores $\hat{f}$ and observed time points $y$ that is closely related to Kendall’s τ. It is defined as the ratio of correctly ordered (concordant) pairs to comparable pairs. Two samples $i$ and $j$ are comparable if the sample with lower observed time $y$ experienced an event, i.e., if $y_j > y_i$ and $\delta_i = 1$, where $\delta_i$ is a binary event indicator. A comparable pair $(i, j)$ is concordant if the estimated risk $\hat{f}$ by a survival model is higher for subjects with lower survival time, i.e., $\hat{f}_i > \hat{f}_j \land y_j > y_i$, otherwise the pair is discordant. Harrell's estimator of the c index is implemented in concordance_index_censored. While Harrell's concordance index is easy to interpret and compute, it has some shortcomings: 1. it has been shown that it is too optimistic with increasing amount of censoring [1], 2. it is not a useful measure of performance if a specific time range is of primary interest (e.g. predicting death within 2 years). Since version 0.8, scikit-survival supports an alternative estimator of the concordance index from right-censored survival data, implemented in concordance_index_ipcw, that addresses the first issue. The second point can be addressed by extending the well known receiver operating characteristic curve (ROC curve) to possibly censored survival times. Given a time point $t$, we can estimate how well a predictive model can distinguishing subjects who will experience an event by time $t$ (sensitivity) from those who will not (specificity). The function cumulative_dynamic_auc implements an estimator of the cumulative/dynamic area under the ROC for a given list of time points. The first part of this notebook will illustrate the first issue with simulated survival data, while the second part will focus on the time-dependent area under the ROC applied to data from a real study. Finally, part three will discuss the time-dependent Brier score, which is an extension of the mean squared error to right censored data. End of explanation import scipy.optimize as opt def generate_marker(n_samples, hazard_ratio, baseline_hazard, rnd): # create synthetic risk score X = rnd.randn(n_samples, 1) # create linear model hazard_ratio = np.array([hazard_ratio]) logits = np.dot(X, np.log(hazard_ratio)) # draw actual survival times from exponential distribution, # refer to Bender et al. 
(2005), https://doi.org/10.1002/sim.2059 u = rnd.uniform(size=n_samples) time_event = -np.log(u) / (baseline_hazard * np.exp(logits)) # compute the actual concordance in the absence of censoring X = np.squeeze(X) actual = concordance_index_censored(np.ones(n_samples, dtype=bool), time_event, X) return X, time_event, actual[0] def generate_survival_data(n_samples, hazard_ratio, baseline_hazard, percentage_cens, rnd): X, time_event, actual_c = generate_marker(n_samples, hazard_ratio, baseline_hazard, rnd) def get_observed_time(x): rnd_cens = np.random.RandomState(0) # draw censoring times time_censor = rnd_cens.uniform(high=x, size=n_samples) event = time_event < time_censor time = np.where(event, time_event, time_censor) return event, time def censoring_amount(x): event, _ = get_observed_time(x) cens = 1.0 - event.sum() / event.shape[0] return (cens - percentage_cens)**2 # search for upper limit to obtain the desired censoring amount res = opt.minimize_scalar(censoring_amount, method="bounded", bounds=(0, time_event.max())) # compute observed time event, time = get_observed_time(res.x) # upper time limit such that the probability # of being censored is non-zero for `t > tau` tau = time[event].max() y = Surv.from_arrays(event=event, time=time) mask = time < tau X_test = X[mask] y_test = y[mask] return X_test, y_test, y, actual_c def simulation(n_samples, hazard_ratio, n_repeats=100): measures = ("censoring", "Harrel's C", "Uno's C",) data_mean = {} data_std = {} for measure in measures: data_mean[measure] = [] data_std[measure] = [] rnd = np.random.RandomState(seed=987) # iterate over different amount of censoring for cens in (.1, .25, .4, .5, .6, .7): data = {"censoring": [], "Harrel's C": [], "Uno's C": [],} # repeaditly perform simulation for _ in range(n_repeats): # generate data X_test, y_test, y_train, actual_c = generate_survival_data( n_samples, hazard_ratio, baseline_hazard=0.1, percentage_cens=cens, rnd=rnd) # estimate c-index c_harrell = concordance_index_censored(y_test["event"], y_test["time"], X_test) c_uno = concordance_index_ipcw(y_train, y_test, X_test) # save results data["censoring"].append(100. - y_test["event"].sum() * 100. / y_test.shape[0]) data["Harrel's C"].append(actual_c - c_harrell[0]) data["Uno's C"].append(actual_c - c_uno[0]) # aggregate results for key, values in data.items(): data_mean[key].append(np.mean(data[key])) data_std[key].append(np.std(data[key], ddof=1)) data_mean = pd.DataFrame.from_dict(data_mean) data_std = pd.DataFrame.from_dict(data_std) return data_mean, data_std def plot_results(data_mean, data_std, **kwargs): index = pd.Index(data_mean["censoring"].round(3), name="mean percentage censoring") for df in (data_mean, data_std): df.drop("censoring", axis=1, inplace=True) df.index = index ax = data_mean.plot.bar(yerr=data_std, **kwargs) ax.set_ylabel("Actual C - Estimated C") ax.yaxis.grid(True) ax.axhline(0.0, color="gray") Explanation: Bias of Harrell's Concordance Index Harrell's concordance index is known to be biased upwards if the amount of censoring in the test data is high [1]. Uno et al proposed an alternative estimator of the concordance index that behaves better in such situations. In this section, we are going to apply concordance_index_censored and concordance_index_ipcw to synthetic survival data and compare their results. Simulation Study We are generating a synthetic biomarker by sampling from a standard normal distribution. 
For a given hazard ratio, we compute the associated (actual) survival time by drawing from an exponential distribution. The censoring times were generated from a uniform independent distribution $\textrm{Uniform}(0,\gamma)$, where we choose $\gamma$ to produce different amounts of censoring. Since Uno's estimator is based on inverse probability of censoring weighting, we need to estimate the probability of being censored at a given time point. This probability needs to be non-zero for all observed time points. Therefore, we restrict the test data to all samples with observed time lower than the maximum event time $\tau$. Usually, one would use the tau argument of concordance_index_ipcw for this, but we apply the selection before to pass identical inputs to concordance_index_censored and concordance_index_ipcw. The estimates of the concordance index are therefore restricted to the interval $[0, \tau]$. End of explanation hazard_ratio = 2.0 ylim = [-0.035, 0.035] mean_1, std_1 = simulation(100, hazard_ratio) plot_results(mean_1, std_1, ylim=ylim) Explanation: Let us assume a moderate hazard ratio of 2 and generate a small synthetic dataset of 100 samples from which we estimate the concordance index. We repeat this experiment 200 times and plot mean and standard deviation of the difference between the actual (in the absence of censoring) and estimated concordance index. Since the hazard ratio remains constant and only the amount of censoring changes, we would want an estimator for which the difference between the actual and estimated c to remain approximately constant across simulations. End of explanation mean_2, std_2 = simulation(1000, hazard_ratio) plot_results(mean_2, std_2, ylim=ylim) Explanation: We can observe that estimates are on average below the actual value, except for the highest amount of censoring, where Harrell's c begins overestimating the performance (on average). With such a small dataset, the variance of differences is quite big, so let us increase the amount of data to 1000 and repeat the simulation (this may take some time). End of explanation mean_3, std_3 = simulation(2000, hazard_ratio) plot_results(mean_3, std_3, ylim=ylim) Explanation: Now we can observe that Harrell's c begins to overestimate performance starting with approximately 49% censoring while Uno's c is still underestimating the performance, but is on average very close to the actual performance for large amounts of censoring. For the final experiment, we double the size of the dataset to 2000 samples and repeat the analysis (this may take several minutes to compute). End of explanation x, y = load_flchain() x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=0) Explanation: The trend we observed in the previous simulation is now even more pronounced. Harrell's c is becoming more and more overconfident in the performance of the synthetic marker with increasing amount of censoring, while Uno's c remains stable. In summary, while the difference between concordance_index_ipcw and concordance_index_censored is negligible for small amounts of censoring, when analyzing survival data with moderate to high amounts of censoring, you might want to consider estimating the performance using concordance_index_ipcw instead of concordance_index_censored. Time-dependent Area under the ROC The area under the receiver operating characteristics curve (ROC curve) is a popular performance measure for binary classification task. 
In the medical domain, it is often used to determine how well estimated risk scores can separate diseased patients (cases) from healthy patients (controls). Given a predicted risk score $\hat{f}$, the ROC curve compares the false positive rate (1 - specificity) against the true positive rate (sensitivity) for each possible value of $\hat{f}$. When extending the ROC curve to continuous outcomes, in particular survival time, a patient’s disease status is typically not fixed and changes over time: at enrollment a subject is usually healthy, but may be diseased at some later time point. Consequently, sensitivity and specificity become time-dependent measures. Here, we consider cumulative cases and dynamic controls at a given time point $t$, which gives rise to the time-dependent cumulative/dynamic ROC at time $t$. Cumulative cases are all individuals that experienced an event prior to or at time $t$ ($t_i \leq t$), whereas dynamic controls are those with $t_i>t$. By computing the area under the cumulative/dynamic ROC at time $t$, we can determine how well a model can distinguish subjects who fail by a given time ($t_i \leq t$) from subjects who fail after this time ($t_i>t$). Hence, it is most relevant if one wants to predict the occurrence of an event in a period up to time $t$ rather than at a specific time point $t$. The cumulative_dynamic_auc function implements an estimator of the cumulative/dynamic area under the ROC at a given list of time points. To illustrate its use, we are going to use data from a study that investigated to which extent the serum immunoglobulin free light chain (FLC) assay can be used predict overall survival. The dataset has 7874 subjects and 9 features; the endpoint is death, which occurred for 2169 subjects (27.5%). First, we are loading the data and split it into train and test set to evaluate how well markers generalize. End of explanation num_columns = ['age', 'creatinine', 'kappa', 'lambda'] imputer = SimpleImputer().fit(x_train.loc[:, num_columns]) x_test_imputed = imputer.transform(x_test.loc[:, num_columns]) Explanation: Serum creatinine measurements are missing for some patients, therefore we are just going to impute these values with the mean using scikit-learn's SimpleImputer. End of explanation y_events = y_train[y_train['death']] train_min, train_max = y_events["futime"].min(), y_events["futime"].max() y_events = y_test[y_test['death']] test_min, test_max = y_events["futime"].min(), y_events["futime"].max() assert train_min <= test_min < test_max < train_max, \ "time range or test data is not within time range of training data." Explanation: Similar to Uno's estimator of the concordance index described above, we need to be a little bit careful when selecting the test data and time points we want to evaluate the ROC at, due to the estimator's dependence on inverse probability of censoring weighting. First, we are going to check whether the observed time of the test data lies within the observed time range of the training data. End of explanation times = np.percentile(y["futime"], np.linspace(5, 81, 15)) print(times) Explanation: When choosing the time points to evaluate the ROC at, it is important to remember to choose the last time point such that the probability of being censored after the last time point is non-zero. 
In the simulation study above, we set the upper bound to the maximum event time, here we use a more conservative approach by setting the upper bound to the 80% percentile of observed time points, because the censoring rate is quite large at 72.5%. Note that this approach would be appropriate for choosing tau of concordance_index_ipcw too. End of explanation def plot_cumulative_dynamic_auc(risk_score, label, color=None): auc, mean_auc = cumulative_dynamic_auc(y_train, y_test, risk_score, times) plt.plot(times, auc, marker="o", color=color, label=label) plt.xlabel("days from enrollment") plt.ylabel("time-dependent AUC") plt.axhline(mean_auc, color=color, linestyle="--") plt.legend() for i, col in enumerate(num_columns): plot_cumulative_dynamic_auc(x_test_imputed[:, i], col, color="C{}".format(i)) ret = concordance_index_ipcw(y_train, y_test, x_test_imputed[:, i], tau=times[-1]) Explanation: We begin by considering individual real-valued features as risk scores without actually fitting a survival model. Hence, we obtain an estimate of how well age, creatinine, kappa FLC, and lambda FLC are able to distinguish cases from controls at each time point. End of explanation from sksurv.datasets import load_veterans_lung_cancer va_x, va_y = load_veterans_lung_cancer() va_x_train, va_x_test, va_y_train, va_y_test = train_test_split( va_x, va_y, test_size=0.2, stratify=va_y["Status"], random_state=0 ) Explanation: The plot shows the estimated area under the time-dependent ROC at each time point and the average across all time points as dashed line. We can see that age is overall the most discriminative feature, followed by $\kappa$ and $\lambda$ FLC. That fact that age is the strongest predictor of overall survival in the general population is hardly surprising (we have to die at some point after all). More differences become evident when considering time: the discriminative power of FLC decreases at later time points, while that of age increases. The observation for age again follows common sense. In contrast, FLC seems to be a good predictor of death in the near future, but not so much if it occurs decades later. Evaluating a Model's Predictions Most of the time, we do not want to evaluate the discriminatory power of individual features, but how a predictive model performs, based on many features. To demonstrate this, we will fit a survival model to predict the risk of death from the Veterans' Administration Lung Cancer Trial. First, we split the data into 80% for training and 20% for testing and use the stratify option to ensure that we do not end up with test data only containing censored death times. End of explanation cph = make_pipeline(OneHotEncoder(), CoxPHSurvivalAnalysis()) cph.fit(va_x_train, va_y_train) Explanation: Next, we fit a Cox proportional hazards model to the training data. End of explanation va_times = np.arange(8, 184, 7) cph_risk_scores = cph.predict(va_x_test) cph_auc, cph_mean_auc = cumulative_dynamic_auc( va_y_train, va_y_test, cph_risk_scores, va_times ) plt.plot(va_times, cph_auc, marker="o") plt.axhline(cph_mean_auc, linestyle="--") plt.xlabel("days from enrollment") plt.ylabel("time-dependent AUC") plt.grid(True) Explanation: Using the test data, we want to assess how well the model can distinguish survivors from deceased in weekly intervals, up to 6 months after enrollment. 
End of explanation from sksurv.ensemble import RandomSurvivalForest rsf = make_pipeline( OneHotEncoder(), RandomSurvivalForest(n_estimators=100, min_samples_leaf=7, random_state=0) ) rsf.fit(va_x_train, va_y_train) Explanation: The plot shows that the model is doing moderately well on average with an AUC of ~0.72 (dashed line). However, there is a clear difference in performance between the first and second half of the time range. The performance on the test data increases up to 56 days from enrollment, remains high until 98 days and quickly drops thereafter. Thus, we can conclude that the model is most effective in predicting death in the medium-term. Using Time-dependent Risk Scores The downside of Cox proportional hazards model is that it can only predict a risk score that is independent of time (due to the built-in proportional hazards assumption). Therefore, a single predicted risk score needs to work well for every time point. In contrast, a Random Survival Forest does not have this restriction. So let's fit such a model to the training data. End of explanation rsf_chf_funcs = rsf.predict_cumulative_hazard_function( va_x_test, return_array=False) rsf_risk_scores = np.row_stack([chf(va_times) for chf in rsf_chf_funcs]) rsf_auc, rsf_mean_auc = cumulative_dynamic_auc( va_y_train, va_y_test, rsf_risk_scores, va_times ) Explanation: For prediction, we do not call predict, which returns a time-independent risk score, but call predict_cumulative_hazard_function, which returns a risk function over time for each patient. We obtain the time-dependent risk scores by evaluating each cumulative hazard function at the time points we are interested in. End of explanation plt.plot(va_times, cph_auc, "o-", label="CoxPH (mean AUC = {:.3f})".format(cph_mean_auc)) plt.plot(va_times, rsf_auc, "o-", label="RSF (mean AUC = {:.3f})".format(rsf_mean_auc)) plt.xlabel("days from enrollment") plt.ylabel("time-dependent AUC") plt.legend(loc="lower center") plt.grid(True) Explanation: Now, we can compare the result with the predictive performance of the Cox proportional hazards model from above. End of explanation gbsg_X, gbsg_y = load_gbsg2() gbsg_X = encode_categorical(gbsg_X) gbsg_X_train, gbsg_X_test, gbsg_y_train, gbsg_y_test = train_test_split( gbsg_X, gbsg_y, stratify=gbsg_y["cens"], random_state=1 ) Explanation: Indeed, the Random Survival Forest performs slightly better on average, mostly due to the better performance in the intervals 25–50 days, and 112–147 days. Above 147 days, it actually is doing worse. This shows that the mean AUC is convenient to assess overall performance, but it can hide interesting characteristics that only become visible when looking at the AUC at individual time points. Time-dependent Brier Score The time-dependent Brier score is an extension of the mean squared error to right censored data. Given a time point $t$, it is defined as: $$ \mathrm{BS}^c(t) = \frac{1}{n} \sum_{i=1}^n I(y_i \leq t \land \delta_i = 1) \frac{(0 - \hat{\pi}(t | \mathbf{x}_i))^2}{\hat{G}(y_i)} + I(y_i > t) \frac{(1 - \hat{\pi}(t | \mathbf{x}_i))^2}{\hat{G}(t)} , $$ where $\hat{\pi}(t | \mathbf{x})$ is a model's predicted probability of remaining event-free up to time point $t$ for feature vector $\mathbf{x}$, and $1/\hat{G}(t)$ is an inverse probability of censoring weight. Note that the time-dependent Brier score is only applicable for models that are able to estimate a survival function. For instance, it cannot be used with Survival Support Vector Machines. 
The Brier score is often used to assess calibration. If a model predicts a 10% risk of experiencing an event at time $t$, the observed frequency in the data should match this percentage for a well calibrated model. In addition, the Brier score is also a measure of discrimination: whether a model is able to predict risk scores that allow us to correctly determine the order of events. The concordance index is probably the most common measure of discrimination. However, the concordance index disregards the actual values of predicted risk scores – it is a ranking metric – and is unable to tell us anything about calibration. Let's consider an example based on data from the German Breast Cancer Study Group 2. End of explanation rsf_gbsg = RandomSurvivalForest(max_depth=2, random_state=1) rsf_gbsg.fit(gbsg_X_train, gbsg_y_train) cph_gbsg = CoxnetSurvivalAnalysis(l1_ratio=0.99, fit_baseline_model=True) cph_gbsg.fit(gbsg_X_train, gbsg_y_train) Explanation: We want to train a model on the training data and assess its discrimination and calibration on the test data. Here, we consider a Random Survival Forest and Cox's proportional hazards model with elastic-net penalty. End of explanation score_cindex = pd.Series( [ rsf_gbsg.score(gbsg_X_test, gbsg_y_test), cph_gbsg.score(gbsg_X_test, gbsg_y_test), 0.5, ], index=["RSF", "CPH", "Random"], name="c-index", ) score_cindex.round(3) Explanation: First, let's start with discrimination as measured by the concordance index. End of explanation lower, upper = np.percentile(gbsg_y["time"], [10, 90]) gbsg_times = np.arange(lower, upper + 1) Explanation: The result indicates that both models perform equally well, achieving a concordance index of 0.688, which is significantly better than a random model with 0.5 concordance index. Unfortunately, it doesn't help us to decide which model we should choose. So let's consider the time-dependent Brier score as an alternative, which asses discrimination and calibration. We first need to determine for which time points $t$ we want to compute the Brier score for. We are going to use a data-driven approach here by selecting all time points between the 10% and 90% percentile of observed time points. End of explanation rsf_surv_prob = np.row_stack([ fn(gbsg_times) for fn in rsf_gbsg.predict_survival_function(gbsg_X_test) ]) cph_surv_prob = np.row_stack([ fn(gbsg_times) for fn in cph_gbsg.predict_survival_function(gbsg_X_test) ]) Explanation: This returns 1690 time points, for which we need to estimate the probability of survival for, which is given by the survival function. Thus, we iterate over the predicted survival functions on the test data and evaluate each at the time points from above. End of explanation random_surv_prob = 0.5 * np.ones( (gbsg_y_test.shape[0], gbsg_times.shape[0]) ) Explanation: In addition, we want to have a baseline to tell us how much better our models are from random. A random model would simply predict 0.5 every time. End of explanation km_func = StepFunction( *kaplan_meier_estimator(gbsg_y_test["cens"], gbsg_y_test["time"]) ) km_surv_prob = np.tile(km_func(gbsg_times), (gbsg_y_test.shape[0], 1)) Explanation: Another useful reference is the Kaplan-Meier estimator that does not consider any features: it estimates a survival function only from gbsg_y_test. We replicate this estimate for all samples in the test data. 
End of explanation score_brier = pd.Series( [ integrated_brier_score(gbsg_y, gbsg_y_test, prob, gbsg_times) for prob in (rsf_surv_prob, cph_surv_prob, random_surv_prob, km_surv_prob) ], index=["RSF", "CPH", "Random", "Kaplan-Meier"], name="IBS" ) pd.concat((score_cindex, score_brier), axis=1).round(3) Explanation: Instead of comparing calibration across all 1690 time points, we'll be using the integrated Brier score (IBS) over all time points, which will give us a single number to compare the models by. End of explanation from sklearn.model_selection import GridSearchCV, KFold from sksurv.metrics import ( as_concordance_index_ipcw_scorer, as_cumulative_dynamic_auc_scorer, as_integrated_brier_score_scorer, ) Explanation: Despite Random Survival Forest and Cox's proportional hazards model performing equally well in terms of discrimination (c-index), there seems to be a notable difference in terms of calibration (IBS), with Cox's proportional hazards model outperforming Random Survival Forest. Using Metrics in Hyper-parameter Search Usually, estimators have hyper-parameters that one wants to optimize. For example, the maximum tree depth for tree-based learners. For this purpose, we can use scikit-learn's GridSearchCV to search for the hyper-parameter configuration that on average works best. By default, estimators' performance will be evaluated in terms of Harrell's concordance index, as implemented in concordance_index_censored. For other metrics, one can wrap an estimator with one of the following classes: as_concordance_index_ipcw_scorer as_cumulative_dynamic_auc_scorer as_integrated_brier_score_scorer End of explanation cv = KFold(n_splits=3, shuffle=True, random_state=1) Explanation: To illustrate this, we are going to use the Random Survival Forest and the German Breast Cancer Study Group 2 from above. First, we define that we want to evaluate the performance of each hyper-parameter configuration by 3-fold cross-validation. End of explanation cv_param_grid = { "estimator__max_depth": np.arange(1, 10, dtype=int), } Explanation: Next, we define the set of hyper-parameters to evaluate. Here, we search for the best value for max_depth between 1 and 10 (excluding). Note that we have to prefix max_depth with estimator__, because we are going to wrap the actual RandomSurvivalForest instance with one of the classes above. End of explanation gcv_cindex = GridSearchCV( as_concordance_index_ipcw_scorer(rsf_gbsg, tau=gbsg_times[-1]), param_grid=cv_param_grid, cv=cv, n_jobs=4, ).fit(gbsg_X, gbsg_y) Explanation: Now, we can put all the pieces together and start searching for the best hyper-parameters that maximize concordance_index_ipcw. End of explanation gcv_iauc = GridSearchCV( as_cumulative_dynamic_auc_scorer(rsf_gbsg, times=gbsg_times), param_grid=cv_param_grid, cv=cv, n_jobs=4, ).fit(gbsg_X, gbsg_y) Explanation: The same process applies when optimizing hyper-parameters to maximize cumulative_dynamic_auc. End of explanation gcv_ibs = GridSearchCV( as_integrated_brier_score_scorer(rsf_gbsg, times=gbsg_times), param_grid=cv_param_grid, cv=cv, n_jobs=4, ).fit(gbsg_X, gbsg_y) Explanation: While as_concordance_index_ipcw_scorer and as_cumulative_dynamic_auc_scorer can be used with any estimator, as_integrated_brier_score_scorer is only available for estimators that provide the predict_survival_function method, which includes RandomSurvivalForest. 
If available, hyper-parameters that maximize the negative integrated time-dependent Brier score will be selected, because a lower Brier score indicates better performance. End of explanation def plot_grid_search_results(gcv, ax, name): ax.errorbar( x=gcv.cv_results_["param_estimator__max_depth"].filled(), y=gcv.cv_results_["mean_test_score"], yerr=gcv.cv_results_["std_test_score"], ) ax.plot( gcv.best_params_["estimator__max_depth"], gcv.best_score_, 'ro', ) ax.set_ylabel(name) ax.yaxis.grid(True) _, axs = plt.subplots(3, 1, figsize=(6, 6), sharex=True) axs[-1].set_xlabel("max_depth") plot_grid_search_results(gcv_cindex, axs[0], "c-index") plot_grid_search_results(gcv_iauc, axs[1], "iAUC") plot_grid_search_results(gcv_ibs, axs[2], "$-$IBS") Explanation: Finally, we can visualize the results of the grid search and compare the best performing hyper-parameter configurations (marked with a red dot). End of explanation
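A small aside on the concordance index used throughout this entry: the pairwise definition given at the top (a pair is comparable when the sample with the shorter observed time has an event, and concordant when that sample also has the higher predicted risk) can be written out directly. The sketch below is an addition for illustration only; the function name and the toy arrays are made up, and scikit-survival's concordance_index_censored remains the implementation to use in practice.

import numpy as np

def naive_harrell_c(event, time, risk):
    # count comparable and concordant pairs exactly as in the definition above
    concordant, comparable = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if time[i] < time[j] and event[i]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0   # higher risk for the earlier event
                elif risk[i] == risk[j]:
                    concordant += 0.5   # ties in the risk score count as one half
    return concordant / comparable

event = np.array([True, True, False, True])
time = np.array([1.0, 3.0, 5.0, 4.0])
risk = np.array([3.0, 2.0, 0.5, 1.0])
print(naive_harrell_c(event, time, risk))  # 1.0 for this toy example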
6,487
Given the following text description, write Python code to implement the functionality described below step by step Description: For high dpi displays. Step1: 0. General note This notebook shows the magnitude of different non-static pressure terms in the EOS of platinum by Dorogokupets and Dewaele (2007, HPR). 1. General setup Step2: 2. Calculate thermal pressure Step3: 3. Calculate pressure from anharmonicity Step4: 4. Calculate pressure from electronic effects Step5: 5. Plot with respect to volume Step6: 6. Plot with respect to pressure We call the built-in dorogokupets2007 scale in pytheos.
Python Code: %config InlineBackend.figure_format = 'retina' Explanation: For high dpi displays. End of explanation import uncertainties as uct import numpy as np import matplotlib.pyplot as plt from uncertainties import unumpy as unp import pytheos as eos v0 = 3.9231**3 v = np.linspace(v0, v0 * 0.8, 20) Explanation: 0. General note This notebook shows the magnitude of different non-static pressure terms in the EOS of platinum by Dorogokupets and Dewaele (2007, HPR). 1. General setup End of explanation p_th = eos.dorogokupets2007_pth(v, 2000., v0, 2.82, 1.83, 8.11, 220., 1, 4) Explanation: 2. Calculate thermal pressure End of explanation help(eos.zharkov_panh) p_anh = eos.zharkov_panh(v, 2000., v0, -166.9e-6, 4.32, 1, 4) Explanation: 3. Calculate pressure from anharmonicity End of explanation help(eos.zharkov_pel) p_el = eos.zharkov_pel(v, 2000., v0, 260.e-6, 2.4, 1, 4) Explanation: 4. Calculate pressure from electronic effects End of explanation plt.plot(v, p_th, label='$P_{th}$') plt.plot(v, p_el, label='$P_{el}$') plt.plot(v, p_anh, label='$P_{anh}$') plt.legend(); Explanation: 5. Plot with respect to volume End of explanation dorogokupets2007_pt = eos.platinum.Dorogokupets2007() help(dorogokupets2007_pt) p = dorogokupets2007_pt.cal_p(v, 2000.) plt.plot(unp.nominal_values(p), p_th, label='$P_{th}$') plt.plot(unp.nominal_values(p), p_el, label='$P_{el}$') plt.plot(unp.nominal_values(p), p_anh, label='$P_{anh}$') plt.legend(); Explanation: 6. Plot with respect to pressure We call the built-in dorogokupets2007 scale in pytheos. End of explanation
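As a rough numerical complement to the two plots above, the following sketch prints each non-static term at the highest compression together with its share of the total pressure. It is an addition rather than part of pytheos or the original notebook, and it assumes the arrays p, p_th, p_el and p_anh from the cells above are still in scope; units are whatever pytheos returns (typically GPa).

import numpy as np
from uncertainties import unumpy as unp

p_total = float(unp.nominal_values(p)[-1])  # last entry of v is the smallest volume
for name, term in [("thermal", p_th), ("electronic", p_el), ("anharmonic", p_anh)]:
    value = float(np.atleast_1d(unp.nominal_values(term))[-1])
    print("{}: {:.3f} ({:.2f}% of total)".format(name, value, 100.0 * value / p_total))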
6,488
Given the following text description, write Python code to implement the functionality described below step by step Description: Example 3 - stripy interpolation on the sphere SSRFPACK is a Fortran 77 software package that constructs a smooth interpolatory or approximating surface to data values associated with arbitrarily distributed points on the surface of a sphere. It employs automatically selected tension factors to preserve shape properties of the data and avoid overshoot and undershoot associated with steep gradients. The next three examples demonstrate the interface to SSRFPACK provided through stripy Notebook contents Incommensurable meshes Analytic function Interpolation The next example is Ex4-Gradients Define two different meshes Create a fine and a coarse mesh without common points Step1: Analytic function Define a relatively smooth function that we can interpolate from the coarse mesh to the fine mesh and analyse Step2: The analytic function on the different samplings It is helpful to be able to view a mesh in 3D to verify that it is an appropriate choice. Here, for example, is the icosahedron with additional points in the centroid of the faces. This produces triangles with a narrow area distribution. In three dimensions it is easy to see the origin of the size variations. Step3: Interpolation from coarse to fine The interpolate method of the sTriangulation takes arrays of longitude, latitude points (in radians) and an array of data on the mesh vertices. It returns an array of interpolated values and a status array that states whether each value represents an interpolation, extrapolation or neither (an error condition). The interpolation can be nearest-neighbour (order=0), linear (order=1) or cubic spline (order=3).
Python Code: import stripy as stripy cmesh = stripy.spherical_meshes.triangulated_cube_mesh(refinement_levels=3) fmesh = stripy.spherical_meshes.icosahedral_mesh(refinement_levels=3, include_face_points=True) print(cmesh.npoints) print(fmesh.npoints) help(cmesh.interpolate) %matplotlib inline import gdal import cartopy import cartopy.crs as ccrs import matplotlib.pyplot as plt import numpy as np def mesh_fig(mesh, meshR, name): fig = plt.figure(figsize=(10, 10), facecolor="none") ax = plt.subplot(111, projection=ccrs.Orthographic(central_longitude=0.0, central_latitude=0.0, globe=None)) ax.coastlines(color="lightgrey") ax.set_global() generator = mesh refined = meshR lons0 = np.degrees(generator.lons) lats0 = np.degrees(generator.lats) lonsR = np.degrees(refined.lons) latsR = np.degrees(refined.lats) lst = generator.lst lptr = generator.lptr ax.scatter(lons0, lats0, color="Red", marker="o", s=100.0, transform=ccrs.Geodetic()) ax.scatter(lonsR, latsR, color="DarkBlue", marker="o", s=30.0, transform=ccrs.Geodetic()) segs = refined.identify_segments() for s1, s2 in segs: ax.plot( [lonsR[s1], lonsR[s2]], [latsR[s1], latsR[s2]], linewidth=0.5, color="black", transform=ccrs.Geodetic()) # fig.savefig(name, dpi=250, transparent=True) return mesh_fig(cmesh, fmesh, "Two grids" ) Explanation: Example 3 - stripy interpolation on the sphere SSRFPACK is a Fortran 77 software package that constructs a smooth interpolatory or approximating surface to data values associated with arbitrarily distributed points on the surface of a sphere. It employs automatically selected tension factors to preserve shape properties of the data and avoid overshoot and undershoot associated with steep gradients. The next three examples demonstrate the interface to SSRFPACK provided through stripy Notebook contents Incommensurable meshes Analytic function Interpolation The next example is Ex4-Gradients Define two different meshes Create a fine and a coarse mesh without common points End of explanation def analytic(lons, lats, k1, k2): return np.cos(k1*lons) * np.sin(k2*lats) coarse_afn = analytic(cmesh.lons, cmesh.lats, 5.0, 2.0) fine_afn = analytic(fmesh.lons, fmesh.lats, 5.0, 2.0) Explanation: Analytic function Define a relatively smooth function that we can interpolate from the coarse mesh to the fine mesh and analyse End of explanation import lavavu lv = lavavu.Viewer(border=False, background="#FFFFFF", resolution=[600,600], near=-10.0) ctris = lv.triangles("ctriangulation", wireframe=True, colour="#444444", opacity=0.8) ctris.vertices(cmesh.points) ctris.indices(cmesh.simplices) ctris2 = lv.triangles("ctriangles", wireframe=False, colour="#77ff88", opacity=1.0) ctris2.vertices(cmesh.points) ctris2.indices(cmesh.simplices) ctris2.values(coarse_afn) ctris2.colourmap("#990000 #FFFFFF #000099") cnodes = lv.points("cnodes", pointsize=4.0, pointtype="shiny", colour="#448080", opacity=0.75) cnodes.vertices(cmesh.points) fnodes = lv.points("fnodes", pointsize=3.0, pointtype="shiny", colour="#448080", opacity=0.75) fnodes.vertices(fmesh.points) ftris2 = lv.triangles("ftriangulation", wireframe=True, colour="#444444", opacity=0.8) ftris2.vertices(fmesh.points) ftris2.indices(fmesh.simplices) ftris = lv.triangles("ftriangles", wireframe=False, colour="#77ff88", opacity=1.0) ftris.vertices(fmesh.points) ftris.indices(fmesh.simplices) ftris.values(fine_afn) ftris.colourmap("#990000 #FFFFFF #000099") # view the pole lv.translation(0.0, 0.0, -3.0) lv.rotation(-20, 0.0, 0.0) lv.hide("fnodes") lv.hide("ftriangulation") 
lv.hide("ftriangules") lv.control.Panel() lv.control.Button(command="hide triangles; hide points; show cnodes; show ctriangles; show ctriangulation; redraw", label="Coarse") lv.control.Button(command="hide triangles; hide points; show fnodes; show ftriangles; show ftriangulation; redraw", label="Fine") lv.control.show() lv.camera() Explanation: The analytic function on the different samplings It is helpful to be able to view a mesh in 3D to verify that it is an appropriate choice. Here, for example, is the icosahedron with additional points in the centroid of the faces. This produces triangles with a narrow area distribution. In three dimensions it is easy to see the origin of the size variations. End of explanation interp_c2f1, err = cmesh.interpolate(fmesh.lons, fmesh.lats, order=1, zdata=coarse_afn) interp_c2f3, err = cmesh.interpolate(fmesh.lons, fmesh.lats, order=3, zdata=coarse_afn) err_c2f1 = interp_c2f1-fine_afn err_c2f3 = interp_c2f3-fine_afn import lavavu lv = lavavu.Viewer(border=False, background="#FFFFFF", resolution=[1000,600], near=-10.0) fnodes = lv.points("fnodes", pointsize=3.0, pointtype="shiny", colour="#448080", opacity=0.75) fnodes.vertices(fmesh.points) ftris = lv.triangles("ftriangles", wireframe=False, colour="#77ff88", opacity=0.8) ftris.vertices(fmesh.points) ftris.indices(fmesh.simplices) ftris.values(fine_afn, label="original") ftris.values(interp_c2f1, label="interp1") ftris.values(interp_c2f3, label="interp3") ftris.values(err_c2f1, label="interperr1") ftris.values(err_c2f3, label="interperr3") ftris.colourmap("#990000 #FFFFFF #000099") cb = ftris.colourbar() # view the pole lv.translation(0.0, 0.0, -3.0) lv.rotation(-20, 0.0, 0.0) lv.control.Panel() lv.control.Range('specular', range=(0,1), step=0.1, value=0.4) lv.control.Checkbox(property='axis') lv.control.ObjectList() ftris.control.List(["original", "interp1", "interp3", "interperr1", "interperr3"], property="colourby", value="orginal", command="redraw") lv.control.show() Explanation: Interpolation from coarse to fine The interpolate method of the sTriangulation takes arrays of longitude, latitude points (in radians) and an array of data on the mesh vertices. It returns an array of interpolated values and a status array that states whether each value represents an interpolation, extrapolation or neither (an error condition). The interpolation can be nearest-neighbour (order=0), linear (order=1) or cubic spline (order=3). End of explanation
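The visual comparison of the two interpolation orders can be backed by a couple of summary numbers. The short sketch below is a supplementary addition (not part of the stripy example) and assumes err_c2f1 and err_c2f3 from the cells above are available; it reports the maximum and RMS interpolation error for the linear and cubic cases.

import numpy as np

for order, err in [(1, err_c2f1), (3, err_c2f3)]:
    rms = np.sqrt(np.mean(err ** 2))
    print("order={}: max abs error = {:.4e}, rms error = {:.4e}".format(order, np.abs(err).max(), rms))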
6,489
Given the following text description, write Python code to implement the functionality described below step by step Description: GitHub API v3 - Uso de labels Para los siguientes ejercicios usaremos el repositorio del curso Programación Avanzada de la Pontificia Universidad Católica de Chile para el segundo semestre del 2015. Ve el repo aqui. Una herramienta interesante para saber el comportamiento de un repositorio es buscar la cantidad de issues (abiertas + cerradas) para cada label disponible Step1: Antes que todo, debemos preguntarnos Step2: Como podemos notar, es posible que tengamos más de una página de issues, por lo que tendremos que acceder al header y obtener la url a la siguiente página, repitiendo la operación hasta llegar a la última. Para facilitar esto, podemos agregar el parametro page a la url indicando el número de la página que queremos. ¿Cómo sabremos cuándo detenernos? Cuando en el header, en Link, no exista una url a la siguiente página ("next") Step3: Guardemos la información que nos interesa en objetos de una clase Issue, extraeremos los nombres de las labels con la funcion labels y guardaremos a estos objetos en una lista llamada ISSUES_LIST Step4: Pediremos todas las issues e iremos guardando los objetos correspondientes Step5: Ya tenemos nuestra lista de issues. Ahora, veamos cuales son las labels que se han usado Step6: Empecemos con los gráficos. Para esto usaremos matplotlib Step7: Vamos a desplegar un grafo donde cada nodo es un label y cada arista representa las uniones de labels (cuando una issue tiene más de un label). El tamaño de los nodos dependerá de la cantidad de issues etiquetadas con aquel label y el grosor de las aristas dependerá de la cantidad de issues en la que ambos labels estuvieron. Si en alguna issue hubo mas de dos etiquetas, consideraremos los pares de labels y no un conjunto. Para esto, necesitamos Step8: Para la siguiente sección usaremos nextworkx para graficar la red de nodos
Python Code: # Primero tenemos que importar las librerias que usaremos para recopilar datos import base64 import json import requests # Si queremos imprimir los json de respuesta # de una forma mas agradable a la vista podemos usar def print_pretty(jsonstring, indent=4, sort_keys=False): print(json.dumps(jsonstring, indent=indent, sort_keys=sort_keys)) # Importamos nuestras credenciales with open("credentials") as f: credentials = tuple(f.read().splitlines()) # Constantes que usaremos OWNER = "IIC2233-2016-1" REPO = "syllabus" Explanation: GitHub API v3 - Uso de labels Para los siguientes ejercicios usaremos el repositorio del curso Programación Avanzada de la Pontificia Universidad Católica de Chile para el segundo semestre del 2015. Ve el repo aqui. Una herramienta interesante para saber el comportamiento de un repositorio es buscar la cantidad de issues (abiertas + cerradas) para cada label disponible: End of explanation url = "https://api.github.com/repos/{}/{}/issues".format(OWNER, REPO) params = { "state": "closed", "sort": "created", "direction": "asc", "page": 1 } req = requests.get(url, params=params, auth=credentials) print(req.status_code) # Para obtener el json asociado: req.json() # Pero queremos imprimirlo de una manera mas legible for issue in req.json(): print(issue["number"], issue["title"]) Explanation: Antes que todo, debemos preguntarnos: ¿a qué queremos acceso? Así podemos definir los scopes que necesitamos en nuestro token y también podremos facilitar nuestro flujo de trabajo Obtener issues Obtener labels Guardar cantidades para cada label Graficar (opcional) Para hacer esto usaremos la sección de la API de GitHub para issues. Para obtener las issues de un repositorio... GET /repos/:owner/:repo/issues End of explanation # Si esta fuese la ultima pagina no existiria `rel="next"` print(dict(req.headers)["Link"]) print("¿Es la ultima pagina? {}".format("No" if 'rel="next"' in dict(req.headers)["Link"] else "Si")) Explanation: Como podemos notar, es posible que tengamos más de una página de issues, por lo que tendremos que acceder al header y obtener la url a la siguiente página, repitiendo la operación hasta llegar a la última. Para facilitar esto, podemos agregar el parametro page a la url indicando el número de la página que queremos. ¿Cómo sabremos cuándo detenernos? 
Cuando en el header, en Link, no exista una url a la siguiente página ("next") End of explanation class Issue: def __init__(self, number, title, labels): self.number = number self.title = title self.labels = labels def labels(labels_json): return [label_item["name"] for label_item in labels_json] ISSUES_LIST = list() Explanation: Guardemos la información que nos interesa en objetos de una clase Issue, extraeremos los nombres de las labels con la funcion labels y guardaremos a estos objetos en una lista llamada ISSUES_LIST End of explanation url = "https://api.github.com/repos/{}/{}/issues".format(OWNER, REPO) params = { "state": "closed", "sort": "created", "direction": "asc", "page": 1 } # Primera pagina req = requests.get(url, params=params, auth=credentials) for issue in req.json(): ISSUES_LIST.append(Issue(issue["number"], issue["title"], labels(issue["labels"]))) # Paginas siguientes while 'rel="next"' in dict(req.headers)["Link"]: print("Page: {} ready".format(params["page"])) params["page"] += 1 req = requests.get(url, params=params, auth=credentials) for issue in req.json(): ISSUES_LIST.append(Issue(issue["number"], issue["title"], labels(issue["labels"]))) print("Tenemos {} issues en la lista".format(len(ISSUES_LIST))) Explanation: Pediremos todas las issues e iremos guardando los objetos correspondientes: End of explanation LABELS = list(label for issue in ISSUES_LIST for label in issue.labels) print(set(LABELS)) Explanation: Ya tenemos nuestra lista de issues. Ahora, veamos cuales son las labels que se han usado End of explanation %matplotlib inline Explanation: Empecemos con los gráficos. Para esto usaremos matplotlib: End of explanation # Cantidad de issues por label LABEL_ISSUES = {label: LABELS.count(label) for label in LABELS} # Imprimiendo de mayor a menor numero de issues... print("Top {}".format(min(5, len(LABEL_ISSUES)))) for v, k in sorted(((v,k) for k,v in LABEL_ISSUES.items()), reverse=True)[:min(5, len(LABEL_ISSUES))]: print("{}: {}".format(k, v)) # Todos los pares de labels from itertools import combinations LABEL_PAIRS = dict() PAIRS = list() for issue in ISSUES_LIST: combs = combinations(issue.labels, 2) for pair in combs: k = "{} + {}".format(*pair) if k not in LABEL_PAIRS: LABEL_PAIRS[k] = 0 PAIRS.append(pair) LABEL_PAIRS[k] += 1 # Imprimiendo de mayor a menor numero de issues print("Top {}".format(min(5, len(LABEL_PAIRS)))) for v, k in sorted(((v,k) for k,v in LABEL_PAIRS.items()), reverse=True)[:min(5, len(LABEL_PAIRS))]: print("{}: {}".format(k, v)) Explanation: Vamos a desplegar un grafo donde cada nodo es un label y cada arista representa las uniones de labels (cuando una issue tiene más de un label). El tamaño de los nodos dependerá de la cantidad de issues etiquetadas con aquel label y el grosor de las aristas dependerá de la cantidad de issues en la que ambos labels estuvieron. Si en alguna issue hubo mas de dos etiquetas, consideraremos los pares de labels y no un conjunto. 
Para esto, necesitamos: End of explanation from pylab import rcParams rcParams['figure.figsize'] = 8, 8 import matplotlib.pylab as plt import networkx as nx # Crear grafo G = nx.Graph() # Max peso mp = LABEL_PAIRS[max(LABEL_PAIRS, key=LABEL_PAIRS.get)] # Crear nodos G.add_nodes_from(LABEL_ISSUES.keys()) # Crear arcos for o_pair in LABEL_PAIRS.keys(): pair = o_pair.split(' + ') G.add_edge(*pair, weight=mp-LABEL_PAIRS[o_pair], width=mp-LABEL_PAIRS[o_pair]) # Asignar color n_colors = list() for node in G.nodes(): color = 'white' if 'Tarea' in node: color = 'blue' elif node in ['Tengo un error', 'Setup']: color = 'purple' elif node in ['Actividades', 'Ayudantía', 'Ayudantia', 'Interrogación', 'Interrogacion', 'Materia', 'Material']: color = 'green' elif node in ['Código', 'Codigo', 'Git']: color = 'yellow' elif node in ['Duplicada', 'Invalida']: color = 'grey' elif node in ['IMPORTANTE']: color = 'red' n_colors.append(color) # Asignar tamaños de nodos sizes = list() for node in G.nodes(): sizes.append(20 * LABEL_ISSUES[node]) # Asignar grosores de aarcos average = round(sum(LABEL_PAIRS.values()) / (0.75 * len(LABEL_PAIRS))) styles = list() widths = list() for edge in G.edges(): k = "{} + {}".format(edge[0], edge[1]) if k not in LABEL_PAIRS: k = "{} + {}".format(edge[1], edge[0]) if LABEL_PAIRS[k] < average: styles.append('dashed') widths.append(LABEL_PAIRS[k] + average) else: styles.append('solid') widths.append(LABEL_PAIRS[k] + 1) # Colores de arcos e_colors = list(i + 10 for i in range(len(G.edges()))) # Desplegar graficos nx.draw(G, edge_color=e_colors, edge_cmap=plt.cm.Blues, node_color=n_colors, node_size=sizes, style=styles, width=widths, with_labels=True) nx.draw_circular(G, edge_color=e_colors, edge_cmap=plt.cm.Blues, node_color=n_colors, node_size=sizes, style=styles, width=widths, with_labels=True) Explanation: Para la siguiente sección usaremos nextworkx para graficar la red de nodos: End of explanation
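The pagination loop above checks for rel="next" in the raw Link header; requests also exposes the parsed header as response.links, which allows a slightly tidier loop. The sketch below is an alternative formulation for illustration: the helper name is made up, and it reuses the OWNER, REPO and credentials values defined earlier.

import requests

def fetch_all_closed_issues(owner, repo, auth):
    url = "https://api.github.com/repos/{}/{}/issues".format(owner, repo)
    params = {"state": "closed", "sort": "created", "direction": "asc"}
    issues = []
    while url is not None:
        req = requests.get(url, params=params, auth=auth)
        issues.extend(req.json())
        url = req.links.get("next", {}).get("url")  # None once there is no next page
        params = None  # the "next" URL already carries the query string
    return issues

# all_issues = fetch_all_closed_issues(OWNER, REPO, credentials)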
6,490
Given the following text description, write Python code to implement the functionality described below step by step Description: Estimate covariance matrix from Epochs baseline We first define a set of Epochs from events and a raw file. Then we estimate the noise covariance of prestimulus data, a.k.a. baseline. Step1: Set parameters Step2: Show covariance
Python Code: # Author: Alexandre Gramfort <[email protected]> # # License: BSD (3-clause) import mne from mne import io from mne.datasets import sample print(__doc__) data_path = sample.data_path() fname = data_path + '/MEG/sample/sample_audvis_raw.fif' event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif' event_id, tmin, tmax = 1, -0.2, 0.5 raw = io.Raw(fname) Explanation: Estimate covariance matrix from Epochs baseline We first define a set of Epochs from events and a raw file. Then we estimate the noise covariance of prestimulus data, a.k.a. baseline. End of explanation raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' # Setup for reading the raw data raw = io.Raw(raw_fname) events = mne.read_events(event_fname) # Set up pick list: EEG + STI 014 - bad channels (modify to your needs) include = [] # or stim channels ['STI 014'] raw.info['bads'] += ['EEG 053'] # bads + 1 more # pick EEG channels picks = mne.pick_types(raw.info, meg=True, eeg=True, stim=False, eog=True, include=include, exclude='bads') # Read epochs, with proj off by default so we can plot either way later reject = dict(grad=4000e-13, mag=4e-12, eeg=80e-6, eog=150e-6) epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks, baseline=(None, 0), reject=reject, proj=False) # Compute the covariance on baseline cov = mne.compute_covariance(epochs, tmin=None, tmax=0) print(cov) Explanation: Set parameters End of explanation mne.viz.plot_cov(cov, raw.info, colorbar=True, proj=True) # try setting proj to False to see the effect Explanation: Show covariance End of explanation
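Conceptually, the baseline noise covariance computed by mne.compute_covariance above is the empirical covariance of the pooled pre-stimulus samples. The toy sketch below shows that idea with plain NumPy; the shapes, the helper name and the simple normalisation are assumptions for illustration only, and MNE's implementation additionally handles projections, rank and channel weighting.

import numpy as np

def baseline_covariance(epochs_data, n_baseline_samples):
    # epochs_data: (n_epochs, n_channels, n_times); keep only the pre-stimulus part
    baseline = epochs_data[:, :, :n_baseline_samples]
    baseline = baseline - baseline.mean(axis=2, keepdims=True)  # de-mean per epoch and channel
    n_channels = baseline.shape[1]
    pooled = baseline.transpose(1, 0, 2).reshape(n_channels, -1)  # channels x pooled samples
    return pooled @ pooled.T / (pooled.shape[1] - 1)

rng = np.random.default_rng(0)
fake_epochs = rng.standard_normal((10, 5, 100))  # 10 epochs, 5 channels, 100 samples
print(baseline_covariance(fake_epochs, 40).shape)  # -> (5, 5)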
6,491
Given the following text description, write Python code to implement the functionality described below step by step Description: <h1>Содержание<span class="tocSkip"></span></h1> <div class="toc"><ul class="toc-item"><li><span><a href="#Базовые-операции" data-toc-modified-id="Базовые-операции-1">Базовые операции</a></span><ul class="toc-item"><li><span><a href="#Создание-списка" data-toc-modified-id="Создание-списка-1.1">Создание списка</a></span></li><li><span><a href="#Доступ-к-элементам" data-toc-modified-id="Доступ-к-элементам-1.2">Доступ к элементам</a></span></li><li><span><a href="#Добавление-элемента-в-конец" data-toc-modified-id="Добавление-элемента-в-конец-1.3">Добавление элемента в конец</a></span></li><li><span><a href="#Длина-списка" data-toc-modified-id="Длина-списка-1.4">Длина списка</a></span></li><li><span><a href="#Сортировка" data-toc-modified-id="Сортировка-1.5">Сортировка</a></span></li><li><span><a href="#Сложение" data-toc-modified-id="Сложение-1.6">Сложение</a></span></li><li><span><a href="#Особенности-присваивания-списков" data-toc-modified-id="Особенности-присваивания-списков-1.7">Особенности присваивания списков</a></span></li></ul></li><li><span><a href="#Списки-и-if" data-toc-modified-id="Списки-и-if-2">Списки и if</a></span><ul class="toc-item"><li><span><a href="#Преобразование-к-логическому-значению" data-toc-modified-id="Преобразование-к-логическому-значению-2.1">Преобразование к логическому значению</a></span></li><li><span><a href="#Проверка-наличия-элемента-в-списке" data-toc-modified-id="Проверка-наличия-элемента-в-списке-2.2">Проверка наличия элемента в списке</a></span></li></ul></li><li><span><a href="#Списки-и-for" data-toc-modified-id="Списки-и-for-3">Списки и for</a></span></li><li><span><a href="#Списки-и-строки" data-toc-modified-id="Списки-и-строки-4">Списки и строки</a></span><ul class="toc-item"><li><span><a href="#Split-—-разбиение-строки" data-toc-modified-id="Split-—-разбиение-строки-4.1">Split — разбиение строки</a></span></li><li><span><a href="#Join-—-объединение-строк" data-toc-modified-id="Join-—-объединение-строк-4.2">Join — объединение строк</a></span></li><li><span><a href="#Изменение-символа-строки" data-toc-modified-id="Изменение-символа-строки-4.3">Изменение символа строки</a></span></li></ul></li><li><span><a href="#Распаковка-списков" data-toc-modified-id="Распаковка-списков-5">Распаковка списков</a></span></li><li><span><a href="#Другие-операции" data-toc-modified-id="Другие-операции-6">Другие операции</a></span><ul class="toc-item"><li><span><a href="#Минимум" data-toc-modified-id="Минимум-6.1">Минимум</a></span></li><li><span><a href="#Сумма" data-toc-modified-id="Сумма-6.2">Сумма</a></span></li><li><span><a href="#Индекс-элемента" data-toc-modified-id="Индекс-элемента-6.3">Индекс элемента</a></span></li><li><span><a href="#Удаление-элемента" data-toc-modified-id="Удаление-элемента-6.4">Удаление элемента</a></span></li><li><span><a href="#Вставка-элемента" data-toc-modified-id="Вставка-элемента-6.5">Вставка элемента</a></span></li><li><span><a href="#Переворачивание-списка" data-toc-modified-id="Переворачивание-списка-6.6">Переворачивание списка</a></span></li></ul></li></ul></div> Списки Списки (list) — это некоторое подобие массива в "привычных" вам языках программирования лишь с тем отличием, что в list могут храниться элементы разных типов. Давайте попробуем разобраться на примерах. 
Базовые операции Создание списка Step1: Можно создать список из однотипных элементов заданной длины Step2: Доступ к элементам Step3: Обратите внимание, что при выведении списка таким способом он выводится с квадратными скобками и запятыми. В Python есть отрицательная индексация. −1 элемент это последний элемент, −2 элемент это предпоследний элемент списка и так далее Step4: Добавление элемента в конец Метод append добавляет новый элемент в конец существующего списка Step5: Длина списка Функция len позволяет узнать длину (размер) списка Step6: Сортировка Метод sort сортирует текущий список по невозрастанию, если это возможно Step7: Сложение Списки можно складывать друг с другом. При этом создаётся новый список, в котором сначала идут элементы одного списка, затем другого. Step8: Особенности присваивания списков Во время выполнения интерпретатор хранит все данные, которые у вас есть в программе, как некоторый набор объектов. Примеры объектов Step9: Чтобы узнать как скопировать список, читайте страницу про срезы. Почему то же самое не происходит с другими типами, например числами? Рассматривайте каждое отдельное число тоже как отдельный объект, на который может вести ссылка. Например, 2 это объект, и 3 это объект. При изменении значения переменной меняется только ссылка, а не объект. Step10: Списки и if Преобразование к логическому значению Пустой список преобразуется к логическому значению False Step11: Непустой список преобразуется к True Step12: Проверка наличия элемента в списке Проверить, есть ли элемент в списке, можно с помощью оператора in. Это то же самое, что и пройти по всему списку и для каждого элемента проверить, не равен ли он искомому. Step13: Также есть аналогичный оператор not in, который проверяет, что элемента нет в списке Step14: Списки и for Есть специальный синтаксис цикла for, который позволяет пройтись по каждому элементу списка. В этом случае переменной присваивается не индекс элемента, а именно значение Step15: В каком-то смысле range тоже похож на list Step16: Списки и строки Split — разбиение строки Метод split разбивает строку на список строк по выбранному разделяющему символу или нескольким символам. Например для разбивания строки по символу # Step17: По умолчанию (без параметров) split разбивает строку по пробельным символам, игнорируя подряд идущие пробельные символы Step18: Пример строки-разделителя более чем из одной буквы Step19: Join — объединение строк Метод join возвращает строку, в которой все элементы списка записаны через разделительный символ (символы). Каждый элемент списка обязательно должен быть строкой. Step20: Разделительная строка может быть пустой Step21: Обратите внимание, что разделитель ставится только между строками, и никогда в конце. Разделитель из нескольких символов Step22: Разделитель в виде перевода строки Step23: Изменение символа строки Строки являются неизменяемым типом, но достаточно часто встречается ситуация, когда нам нужно изменить один символ в строке. В таких случаях Step24: Также существует другой способ изменить третий символ строки, используя срезы. Распаковка списков Раньше мы уже встречались с такой конструкцией как параллельное присваивание Step25: Кроме того, можно распаковывать списки прямо в цикле for Step26: Другие операции Также для списков есть набор многих других удобных функций, уже встроенных в Python. 
Минимум Функция min ищет минимальный элемент списка Step27: Сумма Функция sum считает сумму всех чисел списка Step28: Индекс элемента Метод index находит индекс элемента в списке Step29: ValueError Step30: Вставка элемента Метод insert(index, element) вставляет элемент element на место index, остальные элементы сдвигаются. Step31: Переворачивание списка Метод reverse переворачивает список
Python Code: a = [1, 2, "Hi"] # Создать список и присвоить переменной `а` этот список print(a[0], a[1], a[2]) # Обращение к элементам списка, индексация с нуля b = list() # Создать пустой список c = [] # Другой способ создать пустой список Explanation: <h1>Содержание<span class="tocSkip"></span></h1> <div class="toc"><ul class="toc-item"><li><span><a href="#Базовые-операции" data-toc-modified-id="Базовые-операции-1">Базовые операции</a></span><ul class="toc-item"><li><span><a href="#Создание-списка" data-toc-modified-id="Создание-списка-1.1">Создание списка</a></span></li><li><span><a href="#Доступ-к-элементам" data-toc-modified-id="Доступ-к-элементам-1.2">Доступ к элементам</a></span></li><li><span><a href="#Добавление-элемента-в-конец" data-toc-modified-id="Добавление-элемента-в-конец-1.3">Добавление элемента в конец</a></span></li><li><span><a href="#Длина-списка" data-toc-modified-id="Длина-списка-1.4">Длина списка</a></span></li><li><span><a href="#Сортировка" data-toc-modified-id="Сортировка-1.5">Сортировка</a></span></li><li><span><a href="#Сложение" data-toc-modified-id="Сложение-1.6">Сложение</a></span></li><li><span><a href="#Особенности-присваивания-списков" data-toc-modified-id="Особенности-присваивания-списков-1.7">Особенности присваивания списков</a></span></li></ul></li><li><span><a href="#Списки-и-if" data-toc-modified-id="Списки-и-if-2">Списки и if</a></span><ul class="toc-item"><li><span><a href="#Преобразование-к-логическому-значению" data-toc-modified-id="Преобразование-к-логическому-значению-2.1">Преобразование к логическому значению</a></span></li><li><span><a href="#Проверка-наличия-элемента-в-списке" data-toc-modified-id="Проверка-наличия-элемента-в-списке-2.2">Проверка наличия элемента в списке</a></span></li></ul></li><li><span><a href="#Списки-и-for" data-toc-modified-id="Списки-и-for-3">Списки и for</a></span></li><li><span><a href="#Списки-и-строки" data-toc-modified-id="Списки-и-строки-4">Списки и строки</a></span><ul class="toc-item"><li><span><a href="#Split-—-разбиение-строки" data-toc-modified-id="Split-—-разбиение-строки-4.1">Split — разбиение строки</a></span></li><li><span><a href="#Join-—-объединение-строк" data-toc-modified-id="Join-—-объединение-строк-4.2">Join — объединение строк</a></span></li><li><span><a href="#Изменение-символа-строки" data-toc-modified-id="Изменение-символа-строки-4.3">Изменение символа строки</a></span></li></ul></li><li><span><a href="#Распаковка-списков" data-toc-modified-id="Распаковка-списков-5">Распаковка списков</a></span></li><li><span><a href="#Другие-операции" data-toc-modified-id="Другие-операции-6">Другие операции</a></span><ul class="toc-item"><li><span><a href="#Минимум" data-toc-modified-id="Минимум-6.1">Минимум</a></span></li><li><span><a href="#Сумма" data-toc-modified-id="Сумма-6.2">Сумма</a></span></li><li><span><a href="#Индекс-элемента" data-toc-modified-id="Индекс-элемента-6.3">Индекс элемента</a></span></li><li><span><a href="#Удаление-элемента" data-toc-modified-id="Удаление-элемента-6.4">Удаление элемента</a></span></li><li><span><a href="#Вставка-элемента" data-toc-modified-id="Вставка-элемента-6.5">Вставка элемента</a></span></li><li><span><a href="#Переворачивание-списка" data-toc-modified-id="Переворачивание-списка-6.6">Переворачивание списка</a></span></li></ul></li></ul></div> Списки Списки (list) — это некоторое подобие массива в "привычных" вам языках программирования лишь с тем отличием, что в list могут храниться элементы разных типов. Давайте попробуем разобраться на примерах. 
Базовые операции Создание списка End of explanation a = [0] * 10 print(a) N = 5 b = [True] * N print(b) Explanation: Можно создать список из однотипных элементов заданной длины: End of explanation a = [1, 2, "Hi"] a[1] = 4 # Изменить отдельный элемент списка print(a) # Вывести весь список Explanation: Доступ к элементам End of explanation a = [1, 2, "Hi"] print(a[-1], '=', a[2]) print(a[-2], '=', a[1]) print(a[-3], '=', a[0]) b = ["a", 1, -2, [1, 2, 3], "b", 3.4] # Элементом списка может быть список print(b[-3]) Explanation: Обратите внимание, что при выведении списка таким способом он выводится с квадратными скобками и запятыми. В Python есть отрицательная индексация. −1 элемент это последний элемент, −2 элемент это предпоследний элемент списка и так далее: End of explanation a = [2, 3] a.append(5) print(a) Explanation: Добавление элемента в конец Метод append добавляет новый элемент в конец существующего списка: End of explanation a = [1, 3, 6] print(len(a)) Explanation: Длина списка Функция len позволяет узнать длину (размер) списка: End of explanation a = [3, 2, 4, 1, 2] a.sort() print(a) Explanation: Сортировка Метод sort сортирует текущий список по невозрастанию, если это возможно: End of explanation a = [1, 2, 3] b = [4, 5, 6] c = a + b print(c) Explanation: Сложение Списки можно складывать друг с другом. При этом создаётся новый список, в котором сначала идут элементы одного списка, затем другого. End of explanation a = [1, 2, 3, 4] # а - ссылка на список, каждый элемент списка это ссылки на объекты 1, 2, 3, 4 b = a # b - ссылка на тот же самый список # В Python у каждого объекта есть свой id (идентификатор) - # некоторое уникальное число, сопоставленное объекту print("id(a) = ", id(a)) print("id(b) = ", id(b)) a[0] = -1 # Меняем элемент списка a print("b =", b) # Значение b тоже поменялось! Explanation: Особенности присваивания списков Во время выполнения интерпретатор хранит все данные, которые у вас есть в программе, как некоторый набор объектов. Примеры объектов: 42 — число 42 "abc" — строка «abc» "42" — строка «42» 3.0 — вещественное число 3.0 False — логическое значение «Ложь» ["abc", "42", 42] — список из нескольких других объектов Переменная — это всего лишь ссылка на объект, который хранит интепретатор. С учётом ссылочного устройства, посмотрим как происходит присваивание одного списка другому: End of explanation a = 2 b = a print("a =", a, ", id(a) =", id(a)) print("b =", b, ", id(b) =", id(b)) print() a = 3 print("a =", a, ", id(a) =", id(a)) print("b =", b, ", id(b) =", id(b)) Explanation: Чтобы узнать как скопировать список, читайте страницу про срезы. Почему то же самое не происходит с другими типами, например числами? Рассматривайте каждое отдельное число тоже как отдельный объект, на который может вести ссылка. Например, 2 это объект, и 3 это объект. При изменении значения переменной меняется только ссылка, а не объект. 
End of explanation

lst = []
if lst:
    print("if branch")
else:
    print("else branch")

lst = []
if not lst:
    print("if branch")
else:
    print("else branch")
Explanation: Lists and if Conversion to a boolean value An empty list converts to the boolean value False: End of explanation
lst = [2, 3, 4]
if lst:
    print("if branch")
else:
    print("else branch")

lst = [[]]
if lst:
    print("if branch")
else:
    print("else branch")
Explanation: A non-empty list converts to True: End of explanation
lst = [1, 6, 29, 4, "a", 3, -1]
if 3 in lst:
    print("3 is in the list")
else:
    print("3 is not in the list")
Explanation: Checking whether an element is in a list You can check whether an element is in a list with the in operator. This is the same as walking over the whole list and comparing every element against the one you are looking for. End of explanation
lst = [1, 6, 29, 4, "a", 3, -1]
if 5 not in lst:
    print("5 is not in the list")
Explanation: There is also the analogous not in operator, which checks that an element is not in the list: End of explanation
for i in [2, 3, 5, 7]:
    print(i)

lst = [3, "ads", [1, 2]]
for i in lst:
    print(i)
Explanation: Lists and for There is a special form of the for loop that walks over every element of a list. In this case the loop variable is assigned not the index of an element but the value itself: End of explanation
r = range(5)
print(r[2], r[-1])
Explanation: In a certain sense, range is also similar to a list: End of explanation
s = "1#2#3"
print(s.split("#"))
Explanation: Lists and strings Split — splitting a string The split method breaks a string into a list of strings on a chosen separator character (or several characters). For example, splitting a string on the # character: End of explanation
names = "Artem Irina Zhenya"
print(names.split())
Explanation: By default (with no arguments) split breaks the string on whitespace, ignoring runs of consecutive whitespace characters: End of explanation
s = "1abc2abcd3"
print(s.split("abc"))
Explanation: An example of a separator string longer than one character: End of explanation
lst = ["1", "2", "3"]
print("#".join(lst))
Explanation: Join — joining strings The join method returns a string in which all the elements of the list are written out separated by the separator character(s). Every element of the list must be a string. End of explanation
print("".join(["1", "2", "3"]))
Explanation: The separator string may be empty: End of explanation
shopping_list = ', '.join(['apples', 'milk', 'flour', 'jam'])
print(shopping_list)
Explanation: Note that the separator is placed only between the strings, never at the end. A separator of several characters: End of explanation
shopping_list = '\n'.join(['apples', 'milk', 'flour', 'jam'])
print(shopping_list)
Explanation: A newline as the separator: End of explanation
a = "long string"   # we have a string and want to be able to change it
print(a)
l = list(a)         # turn the string into a list of characters
print(l)
l[2] = "!"          # change one element of the list
print(l)
s = "".join(l)
print(s)
Explanation: Changing a character of a string Strings are an immutable type, but quite often we need to change a single character in a string.
In such cases we: convert the string to a list, obtaining a list of characters (strings consisting of a single character); change one element of that list; glue the list of strings back together into one big string. Example: End of explanation
a, b, c = [2, "abs", 3]   # Assign an explicitly written-out list
print(a, b, c)

lst = [2, "abs", 3]
a, b, c = lst             # A slightly less trivial way to write the same thing
print(a, b, c)
Explanation: There is also another way to change the third character of a string, using slices. Unpacking lists Earlier we already met the construct called parallel assignment: a, b = c, d. Let's generalize it a little. If instead of the two values c and d we put a single list containing exactly as many elements as there are variables on the left-hand side, Python performs the same kind of parallel assignment. End of explanation
lst = [[1, 1], [2, 4], [3, 9], [4, 16]]
for a, b in lst:
    print(a, b)
print()

# The same thing in a more familiar form
for x in lst:
    a, b = x
    print(a, b)
Explanation: In addition, lists can be unpacked directly in a for loop: End of explanation
l = [1, 2, -1, 3, 2, -2, 1, 5, 7, 3]
print(min(l))
# min can also take two arguments instead of a list
print(min(12, 10))
Explanation: Other operations Python also ships with many other convenient built-in helpers for lists. Minimum The min function finds the smallest element of a list: End of explanation
print(sum([2, 3, 11, 1]))
Explanation: Sum The sum function computes the sum of all the numbers in a list: End of explanation
l = [1, 2, -1, 3, 2, -2, 1, 5, 7, 3]
print(l.index(3))     # The list l contains two threes; index returns the index of the first one

l = [1, 2, -1, 3, 2, -2, 1, 5, 7, 3]
print(l.index(115))   # If the element is not in the list, a runtime error is raised
Explanation: Index of an element The index method finds the index of an element in a list: End of explanation
l = [1, 2, 3]
last_element = l.pop()
print(l)
print(last_element)
Explanation: ValueError: 115 is not in list — that is, the value 115 was not found in the list. Removing an element The following methods modify the current list itself rather than creating a new one; the id of the list does not change. The pop method removes and returns the last element of the list: End of explanation
l = [1, 2, 4, 5]
l.insert(2, 3)
print(l)
Explanation: Inserting an element The insert(index, element) method inserts element at position index; the remaining elements shift over. End of explanation
a = [1, 4, 2, 5, 2]
a.reverse()
print(a)
Explanation: Reversing a list The reverse method reverses the list in place: End of explanation
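A short aside added here (not in the original text): besides the in-place methods sort() and reverse(), the built-ins sorted() and reversed() return new objects and leave the original list unchanged:
a = [1, 4, 2, 5, 2]
print(sorted(a))            # a new sorted list
print(list(reversed(a)))    # a new reversed list
print(a)                    # [1, 4, 2, 5, 2] -- unchanged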
6,492
Given the following text description, write Python code to implement the functionality described below step by step Description: <a href='http Step1: SMA Simple Moving Average We've already shown how to create a simple moving average, for a quick review Step2: EWMA Exponentially-weighted moving average We just showed how to calculate the SMA based on some window. However, basic SMA has some "weaknesses". * Smaller windows will lead to more noise, rather than signal * It will always lag by the size of the window * It will never reach the full peak or valley of the data due to the averaging. * It does not really inform you about possible future behaviour; all it really does is describe trends in your data. * Extreme historical values can skew your SMA significantly To help fix some of these issues, we can use an EWMA (Exponentially-weighted moving average). EWMA will allow us to reduce the lag effect from SMA and it will put more weight on values that occurred more recently (by applying more weight to the more recent values, thus the name). The amount of weight applied to the most recent values will depend on the actual parameters used in the EWMA and the number of periods given a window size. Full details on the mathematics behind this can be found here. Here is the shorter version of the explanation behind EWMA. The formula for EWMA is
Python Code: import pandas as pd import numpy as np import matplotlib.pyplot as plt %matplotlib inline airline = pd.read_csv('airline_passengers.csv', index_col = "Month") airline.dropna(inplace = True) airline.index = pd.to_datetime(airline.index) airline.head() Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a> <center>Copyright Pierian Data 2017</center> <center>For more information, visit us at www.pieriandata.com</center> End of explanation airline['6-month-SMA'] = airline['Thousands of Passengers'].rolling(window = 6).mean() airline['12-month-SMA'] = airline['Thousands of Passengers'].rolling(window = 12).mean() airline.head() airline.plot(figsize = (12, 8)) Explanation: SMA Simple Moving Average We've already shown how to create a simple moving average, for a quick review: End of explanation airline['EWMA12'] = airline['Thousands of Passengers'].ewm(span = 12).mean() airline[['Thousands of Passengers','EWMA12']].plot(figsize = (12, 8)) Explanation: EWMA Exponentially-weighted moving average We just showed how to calculate the SMA based on some window.However, basic SMA has some "weaknesses". * Smaller windows will lead to more noise, rather than signal * It will always lag by the size of the window * It will never reach to full peak or valley of the data due to the averaging. * Does not really inform you about possible future behaviour, all it really does is describe trends in your data. * Extreme historical values can skew your SMA significantly To help fix some of these issues, we can use an EWMA (Exponentially-weighted moving average). EWMA will allow us to reduce the lag effect from SMA and it will put more weight on values that occured more recently (by applying more weight to the more recent values, thus the name). The amount of weight applied to the most recent values will depend on the actual parameters used in the EWMA and the number of periods given a window size. Full details on Mathematics behind this can be found here Here is the shorter version of the explanation behind EWMA. The formula for EWMA is: $ y_t = \frac{\sum\limits_{i=0}^t w_i x_{t-i}}{\sum\limits_{i=0}^t w_i} $ Where x_t is the input value, w_i is the applied weight (Note how it can change from i=0 to t), and y_t is the output. Now the question is, how to we define the weight term w_i ? This depends on the adjust parameter you provide to the .ewm() method. When adjust is True (default), weighted averages are calculated using weights: $y_t = \frac{x_t + (1 - \alpha)x_{t-1} + (1 - \alpha)^2 x_{t-2} + ... (1 - \alpha)^t x_{0}}{1 + (1 - \alpha) + (1 - \alpha)^2 + ... (1 - \alpha)^t}$ When adjust=False is specified, moving averages are calculated as: $\begin{split}y_0 &= x_0 \ y_t &= (1 - \alpha) y_{t-1} + \alpha x_t,\end{split}$ which is equivalent to using weights: \begin{split}w_i = \begin{cases} \alpha (1 - \alpha)^i & \text{if } i < t \ (1 - \alpha)^i & \text{if } i = t. \end{cases}\end{split} When adjust=True we have y0=x0 and from the last representation above we have yt=αxt+(1−α)yt−1, therefore there is an assumption that x0 is not an ordinary value but rather an exponentially weighted moment of the infinite series up to that point. 
One must have 0<α≤1, and while since version 0.18.0 it has been possible to pass α directly, it’s often easier to think about either the span, center of mass (com) or half-life of an EW moment: \begin{split}\alpha = \begin{cases} \frac{2}{s + 1}, & \text{for span}\ s \geq 1\ \frac{1}{1 + c}, & \text{for center of mass}\ c \geq 0\ 1 - \exp^{\frac{\log 0.5}{h}}, & \text{for half-life}\ h > 0 \end{cases}\end{split} Span corresponds to what is commonly called an “N-day EW moving average”. Center of mass has a more physical interpretation and can be thought of in terms of span: c=(s−1)/2 Half-life is the period of time for the exponential weight to reduce to one half. Alpha specifies the smoothing factor directly. End of explanation
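To make the span/alpha relationship above concrete, here is a small check added here (it assumes the airline DataFrame created earlier is still in memory); with the default adjust=True, span=12 and alpha=2/(12+1) should give the same EWMA:
span = 12
alpha = 2.0 / (span + 1)    # the span formula quoted above
ewma_span = airline['Thousands of Passengers'].ewm(span=span).mean()
ewma_alpha = airline['Thousands of Passengers'].ewm(alpha=alpha).mean()
print((ewma_span - ewma_alpha).abs().max())   # effectively zero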
6,493
Given the following text description, write Python code to implement the functionality described below step by step Description: syncID Step1: Navigating the Response Unlike most API call responses, the taxonomy JSON at the uppermost level has more elements than just 'data'. The other elements include Step2: Within the 'data' element is a list with entries for each taxon returned by the call. Each species entry is a dictionary with attributes for Step3: The "dwc" at the beginning of many attribute names indicates that the terms used for each field are matched to those used by Darwin Core, an official standard maintained for biodiversity reference. The "gbif" refers to the Global Biodiversity Information Facility. We can also print vernacular names alongside the scientific names of each species entry. Step4: Using Taxon Type Code Let's make another API call, using taxonTypeCode this time. We'll look through some of the NEON Fish Taxonomy, but try the verbose description. Step5: Choose an arbitrary species and see what data its dictionary contains. Step6: This is a more verbose entry than what we've seen, so there are more attributes, though many lack values. The 'gbif' attributes indicate terms matched to those used by the Global Biodiversity Information Facility. Step7: Finding a Specific Species Many NEON data products, such as the land bird breeding counts used in a previous tutorial, include species identification data in the form of species name. We can use the NEON taxonomy/ endpoint to search for a specific species mentioned in the NEON data. Let's look at the 2018-06 Lower Teakettle Bird Counts again, and get more detail on one of the observed species. Step8: The unique method for Pandas series, which include individual columns of dataframes, returns the series with all duplicate values removed. Step9: More information on 'Troglodytes aedon' would be interesting. When using a scientific name in a taxonomy API call, which will be encoded as a URL, we replace any spaces in the name with '%20'; also, remember to capitalize the genus name, but not the species name. Step10: Because only a single result was returned, count and total entries will be one, and there will be no urls for the previous or next batch of entries. It is important to note that the data element is still treated as a list; it is simply a list with only one element.
Python Code: import requests import json #Choose values for each option SERVER = 'http://data.neonscience.org/api/v0/' FAMILY = 'Pinaceae' OFFSET = 11 LIMIT = 20 VERBOSE = 'false' #Create 'options' portion of API call OPTIONS = '?family={family}&offset={offset}&limit={limit}&verbose={verbose}'.format( family = FAMILY, offset = OFFSET, limit = LIMIT, verbose = VERBOSE) #Print out the completed options string. This is the query string that is appended to the endpoint URL in the taxonomy API call print(OPTIONS) #Make request pine_req = requests.get(SERVER+'taxonomy/'+OPTIONS) pine_json = pine_req.json() Explanation: syncID: title: "Querying Taxonomy Data with NEON API and Python" description: "Querying the 'taxonomy/' NEON API endpoint with Python and navigating the response" dateCreated: 2020-04-24 authors: Maxwell J. Burner contributors: Donal O'Leary estimatedTime: packagesLibraries: requests, json, pandas topics: api languagesTool: python dataProduct: code1: tutorialSeries: python-neon-api-series urlTitle: neon_api_taxonomy In this tutorial we will learn to query the taxonomy/ endpoint of the NEON API using Python. <div id="ds-objectives" markdown="1"> ### Objectives After completing this tutorial, you will be able to: * Query the taxonomy endpoint of the NEON API to obtain taxonomic data * Search NEON taxonomic data using different criteria * Use the various options of the taxonomy endpoint to customize the results of a call * Navigate the data returned by a call to the taxonomy endpoint of the NEON API * Navigate the parent-child relationships between NEON locations ### Install Python Packages * **requests** * **json** * **pandas** </div> In this tutorial we will learn to use Python and the taxonomy/ endpoint of the NEON API to query information from NEON's taxonomic data. NEON maintains a great deal of taxonomic data, used in species identification during field observations and laboratory processing of samples. NEON taxonomy data can be obtained through the API, or through an interactive interface called the Taxon Viewer. Just as the locations/ endpoint can provide more context for a location referenced in NEON studies, the taxonomy/ endpoint can provide additional information on species identified in NEON observational data. Making the Request Unlike other endpoints, the locations/ endpoint does not take a single target in its URL. Instead, the query can make use of a number of different options, which are specified in the URL string itself. Each option is assigned a value with an equals sign, for example 'family=Pineceae'; these are placed after a question mark '?' at the end of the endpoint URL, which signals a 'query string' will follow. Multiple query options are separated by an ampersand '&' in the URL string. 
Each call must have one of the following options, but cannot use multiple: * taxonTypeCode, a four-letter code that indicates which NEON taxonomy is being queried, such as FISH or BIRD * One of the major taxonomic ranks from genus through kingdom * scientificName, a specific name of the format genus + specific epithet + (authority); this is used to search for an exact result In addition, any number of the following options can also be added to modify the results of the query: * verbose takes a 'true' for a more detailed response or 'false' for a shorter response * offset takes an integer indicating the number of starting rows of the list of results to skip; the default is 0 * limit takes an integer indicating the maximum length of the list returned; the default is 50 Let's request data on up to 20 members of the Pine family, skipping the first 11, with the short response. End of explanation
#Print out values in the top level of the pine_json taxonomy dictionary, other than the 'data' entry.
for key in pine_json.keys():
    if(key != 'data'):
        print(key,':',pine_json[key])
Explanation: Navigating the Response Unlike most API call responses, the taxonomy JSON at the uppermost level has more elements than just 'data'. The other elements include: count - how many species were returned in this response; total - how many species entries are available from NEON (if offset was zero and limit was infinity); prev - the API url that could get the 'previous' set of entries (if offset was not zero) matching the other parameters; next - the API url that could get the next set of entries (if limit was not infinity, and the limit parameter resulted in some entries being excluded). The prev and next urls could be used to effectively break up a larger API call into several segments; we ask for a smaller set than we actually want, then use the "next" url to get the next set of entries in a separate call. End of explanation
#Print data for one species
sample = pine_json['data'][7]
for key in sample.keys():
    print("{:28}: {}".format(key, sample[key]))
Explanation: Within the 'data' element is a list with entries for each taxon returned by the call. Each species entry is a dictionary with attributes for: the full taxonomy, with a separate attribute for each taxonomic level; the NEON taxonomy type the data was obtained from (taxonTypeCode); the short taxon code used by NEON (taxonID, acceptedTaxonID); the author of the scientific name; the common/vernacular name, if any; and the reference text used (nameAccordingToID). End of explanation
for species in pine_json['data']:
    print("{:19}| {}".format(species['dwc:vernacularName'], species['dwc:scientificName']))
Explanation: The "dwc" at the beginning of many attribute names indicates that the terms used for each field are matched to those used by Darwin Core, an official standard maintained for biodiversity reference. The "gbif" refers to the Global Biodiversity Information Facility. We can also print vernacular names alongside the scientific names of each species entry.
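As a brief aside on the query strings built earlier in this tutorial (this snippet is an addition, not part of the original NEON material): the same request can be formed by letting requests assemble the query string from a dictionary, which avoids hand-formatting the OPTIONS string.
params = {'family': FAMILY, 'offset': OFFSET, 'limit': LIMIT, 'verbose': VERBOSE}
pine_req_alt = requests.get(SERVER + 'taxonomy/', params=params)
print(pine_req_alt.url)   # should be equivalent to SERVER + 'taxonomy/' + OPTIONS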
End of explanation #Set options SERVER = 'http://data.neonscience.org/api/v0/' TAXONCODE = 'FISH' OFFSET = 0 LIMIT = 20 VERBOSE = 'true' #Create 'options' portion of API call OPTIONS = '?taxonTypeCode={taxoncode}&offset={offset}&limit={limit}&verbose={verbose}'.format( taxoncode = TAXONCODE, offset = OFFSET, limit = LIMIT, verbose = VERBOSE) print(OPTIONS) #Make request fish_req = requests.get(SERVER+'taxonomy/'+OPTIONS) fish_json = fish_req.json() Explanation: Using Taxon Type Code Let's make another API call, using taxonTypeCode this time. We'll look through some of the NEON Fish Taxonomy, but try the verbose description. End of explanation #Print data for one species in the result sample = fish_json['data'][7] for key in sample.keys(): print("{:28}: {}".format(key, sample[key])) Explanation: Choose an arbitrary species and see what data its dictionary contains. End of explanation #Print common and scientific name for each fish for species in fish_json['data']: print(species['dwc:vernacularName'],'|', species['dwc:scientificName']) Explanation: This is a more verbose entry than what we've seen, so there are more attributes, though many lack values. The 'gbif' attributes indicate terms matched to those used by the Global Biodiversity Forum. End of explanation import pandas as pd #Establish target for API search SITECODE = 'TEAK' PRODUCTCODE = 'DP1.10003.001' #Get data on available files bird_request = requests.get(SERVER+'data/'+PRODUCTCODE+'/'+SITECODE+'/'+'2018-06') bird_json = bird_request.json() #Extract the URL for just the 'basic' package of the 'count' data, #and read that csv into a pandas data.frame falled 'bird_df' for file in bird_json['data']['files']: if('count' in file['name']): if('basic' in file['name']): bird_df = pd.read_csv(file['url']) #View all columns of the first 5 rows bird_df.head() Explanation: Finding a Specific Species Many NEON data products, such as the land bird breeding counts used in a previous tutorial, include species idetnification data in the form of species name. We can use the NEON taxonomy/ endpoint to search for a specific species mentioned in the NEON data. Let's look at the 2018-06 Lower Teakettle Bird Counts again, and get more detail on one of the observed species. End of explanation #Use pandas .unique method to see what species were observed bird_df['scientificName'].unique() Explanation: The unique method for Pandas series, which include individual columns of dataframes, returns the series with all duplicate values removed. End of explanation #Make request aedon_request = requests.get(SERVER+'taxonomy/'+'?scientificname=Troglodytes%20aedon') aedon_json = aedon_request.json() Explanation: More information on 'Troglodytes aedon' would be interesting. When using a scientific name in a taxonomy API call, which will be encoded as a URL, we replace any spaces in the name with '%20'; also, remember to capitalize the genus name, but not the species name. End of explanation #Print elements of JSON other than data for key in aedon_json.keys(): if(key != 'data'): print(key,':',aedon_json[key]) #Print elements of species dict in data list for key in aedon_json['data'][0].keys(): print(key,':',aedon_json['data'][0][key]) Explanation: Because only a single result was returned, count and total entries will be one, and there will be no urls for the previous or next batch of entries. It is important to note that the data element is still treated as a list; it is simply a list with only one element. End of explanation
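A sketch added here (not from the original tutorial) of how the 'next' URL described above could be used to page through a larger result set; the exact sentinel returned after the last page may differ, so treat this as an outline:
url = SERVER + 'taxonomy/?taxonTypeCode=FISH&limit=50'
names = []
while url:
    page = requests.get(url).json()
    names.extend(entry['dwc:scientificName'] for entry in page['data'])
    url = page.get('next')        # falsy once there is no further batch
    if len(names) >= 200:         # stop early to keep the example small
        break
print(len(names))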
6,494
Given the following text description, write Python code to implement the functionality described below step by step Description: Handwritten Digit Recognition using a Convolutional Neural Network This tutorial shows you how to design a deep convolutional neural network for a classic computer vision application Step1: Let's Explore the Dataset The original MNIST dataset consists of 60000 training and 10000 test images. In our implementation, we use 10000 out of the 60000 training images as our validation set. So let's plot some sample images. Step2: Building the Computational Graph Aurora uses a dataflow graph approach to define neural networks. So the following cell defines the simple convolutional neural network that we are going to train in the next cell. Step3: Training Our Model OK, now we have defined our model using Aurora. Now it's time to train our model. For training, we use the Adam optimizer. Step4: Reporting Testing Accuracy and Plotting Training/Validation Errors
Python Code: import numpy as np import timeit import aurora as au # import Aurora import aurora.autodiff as ad # importing Aurora's automatic differentiation framework import matplotlib.pyplot as plt import seaborn as sbn sbn.set() BATCH_SIZE = 64 LR = 1e-4 USE_GPU = False NUM_ITERS = 20 # utility functions def display_images(image_dataset, n_rows, n_cols, graph_title='Sample Training Images'): ''' Simple utility function for displaying images. ''' plt.figure(figsize=(7, 5)) image_number = 1 for row in range(n_rows): for col in range(n_cols): plt.subplot(n_rows, n_cols, image_number) plt.imshow(image_dataset[image_number-1, :], cmap='Greys_r') plt.axis('off') image_number += 1 plt.suptitle(graph_title) plt.show() def measure_accuracy(activation, X_val, y_val, batch_size=32, use_gpu=USE_GPU): executor = ad.Executor([activation], use_gpu=use_gpu) max_val = len(X_val) - len(X_val) % batch_size y_val = y_val[0:max_val] prediction = np.zeros(max_val) for i in range(0, max_val, batch_size): start = i end = i + batch_size X_batch, y_batch = X_val[start:end], y_val[start:end] prob_val, = executor.run(feed_shapes={images: X_batch}) if use_gpu: prob_val = prob_val.asnumpy() prediction[start:end] = np.argmax(prob_val, axis=1) correct = np.sum(np.equal(y_val, prediction)) percentage = (correct / len(prediction)) * 100.00 return percentage Explanation: Handwritten Digit Recognition using a Convolutional Neural Network This tutorial shows you how to design a deep convolutional neural network for a classic computer vision application: identify hand-written digits. We are going to use MNIST dataset for training, validation and our convolutional neural network model. End of explanation data = au.datasets.MNIST(batch_size=BATCH_SIZE) batch_generator = data.train_batch_generator() batch = next(batch_generator) sample = batch[0][0:15, :] display_images(sample.reshape(-1, 28, 28), 3, 5) Explanation: Let's Explore the Dataset The original MNIST dataset consists of 60000 training and 10000 validation images. In our implementation, we use 10000 out of 60000 images as our validation set. So let's plot some sample images. 
End of explanation def build_network(image, y, batch_size=32): rand = np.random.RandomState(seed=1024) reshaped_images = ad.reshape(image, newshape=(batch_size, 1, 28, 28)) # weight in (number_kernels, color_depth, kernel_height, kernel_width) W1 = ad.Parameter(name='W1', init=rand.normal(scale=0.1, size=(10, 1, 5, 5))) b1 = ad.Parameter(name='b1', init=rand.normal(scale=0.1, size=10)) conv1 = au.nn.conv2d(input=reshaped_images, filter=W1, bias=b1) activation1 = au.nn.relu(conv1) # size of activation1: batch_size x 10 x 24 x 24 # weight in (number_kernels, number_kernels of previous layer, kernel_height, kernel_width) W2 = ad.Parameter(name='W2', init=rand.normal(scale=0.1, size=(5, 10, 5, 5))) b2 = ad.Parameter(name='b2', init=rand.normal(scale=0.1, size=5)) conv2 = au.nn.conv2d(input=activation1, filter=W2, bias=b2) activation2 = au.nn.relu(conv2) # size of activation2: batch_size x 5 x 20 x 20 = batch_size x 2000 flatten = ad.reshape(activation2, newshape=(batch_size, 2000)) W3 = ad.Parameter(name='W3', init=rand.normal(scale=0.1, size=(2000, 500))) b3 = ad.Parameter(name='b3', init=rand.normal(scale=0.1, size=500)) Z3 = ad.matmul(flatten, W3) Z3 = Z3 + ad.broadcast_to(b3, Z3) activation3 = au.nn.relu(Z3) W4 = ad.Parameter(name='W4', init=rand.normal(scale=0.1, size=(500, 10))) b4 = ad.Parameter(name='b4', init=rand.normal(scale=0.1, size=10)) logits = ad.matmul(activation3, W4) logits = logits + ad.broadcast_to(b4, logits) loss = au.nn.cross_entropy_with_logits(logits, y) return loss, W1, b1, W2, b2, W3, b3, W4, b4, logits Explanation: Building the Computational Graph Aurora uses dataflow graph approach to define neural networks. So following cell defines our simple convolutional neural network which we going to train in the next cell. End of explanation n_iter = NUM_ITERS start = timeit.default_timer() data = au.datasets.MNIST(batch_size=BATCH_SIZE) batch_generator = data.train_batch_generator() # images in (batch_size, color_depth, height, width) images = ad.Variable(name='images') labels = ad.Variable(name='y') loss, W1, b1, W2, b2, W3, b3, W4, b4, logits = build_network(images, labels, batch_size=64) opt_params = [W1, b1, W2, b2, W3, b3, W4, b4] optimizer = au.optim.Adam(loss, params=opt_params, lr=1e-4, use_gpu=USE_GPU) training_errors = [] validation_erros = [] for i in range(n_iter): X_batch, y_batch = next(batch_generator) loss_now = optimizer.step(feed_dict={images: X_batch, labels: y_batch}) if i <= 10 or (i <= 100 and i % 10 == 0) or (i <= 1000 and i % 100 == 0) or (i <= 10000 and i % 500 == 0): fmt_str = 'iter: {0:>5d} cost: {1:>8.5f}' print(fmt_str.format(i, loss_now[0])) if i % 10 == 0: train_acc = measure_accuracy(logits, X_batch, np.argmax(y_batch, axis=1), batch_size=BATCH_SIZE, use_gpu=USE_GPU) training_errors.append((100.0 - train_acc)) X_valid, y_valid = data.validation() valid_acc = measure_accuracy(logits, X_valid[0:BATCH_SIZE], y_valid[0:BATCH_SIZE], batch_size=BATCH_SIZE, use_gpu=USE_GPU) validation_erros.append((100.0 - valid_acc)) Explanation: Training Our Model OK, now we defined our model using Aurora. Now its time to train our model. For training, we use Adam optimizer. 
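One small addition here: the training cell above records a start time with timeit.default_timer() but never reports it, so the elapsed time could be printed like this once training finishes.
elapsed = timeit.default_timer() - start
print('Training took {:.1f} seconds for {} iterations'.format(elapsed, n_iter))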
End of explanation X_valid, y_valid = data.validation() val_acc = measure_accuracy(logits, X_valid, y_valid, batch_size=BATCH_SIZE, use_gpu=USE_GPU) print('Validation accuracy: {:>.2f}'.format(val_acc)) X_test, y_test = data.testing() test_acc = measure_accuracy(logits, X_test, y_test, batch_size=BATCH_SIZE, use_gpu=USE_GPU) print('Testing accuracy: {:>.2f}'.format(test_acc)) plt.plot(validation_erros, color='r', label='validation error') plt.plot(training_errors, color='b', label='training error') plt.legend() plt.show() Explanation: Reporting Testing Accuracy and Plotting Training/Validation Errors End of explanation
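As an added sanity check (not part of the original tutorial), the feature-map sizes quoted in the comments above follow from the standard "valid" convolution formula:
def conv_out(size, kernel, stride=1):
    return (size - kernel) // stride + 1

print(conv_out(28, 5))   # 24 -> matches batch_size x 10 x 24 x 24 after conv1
print(conv_out(24, 5))   # 20 -> matches batch_size x 5 x 20 x 20 after conv2
print(5 * 20 * 20)       # 2000 -> the flatten size expected by W3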
6,495
Given the following text description, write Python code to implement the functionality described below step by step Description: Assignment 2 Step1: Problem 1 Step2: Problem 1(b) Calculate the median salary for each player and create a pandas DataFrame called medianSalaries with four columns Step3: Problem 1(c) Now, consider only team/season combinations in which the teams played 162 Games. Exclude all data from before 1947. Compute the per plate appearance rates for singles, doubles, triples, HR, and BB. Create a new pandas DataFrame called stats that has the teamID, yearID, wins and these rates. Hint Step4: Problem 1(d) Is there a noticeable time trend in the rates computed in Problem 1(c)? Step5: Problem 1(e) Using the stats DataFrame from Problem 1(c), adjust the singles per PA rates so that the average across teams for each year is 0. Do the same for the doubles, triples, HR, and BB rates. Step6: Problem 1(f) Build a simple linear regression model to predict the number of wins from the average adjusted singles, doubles, triples, HR, and BB rates. To decide which of these terms to include, fit the model to data from 2002 and compute the average squared residuals from predictions to years past 2002. Use the fitted model to define a new sabermetric summary Step7: Your answer here Step8: Show the head of the playerstats DataFrame. Step9: Problem 1(h) Using the playerstats DataFrame created in Problem 1(g), create a new DataFrame called playerLS containing the player's lifetime stats. This DataFrame should contain the playerID, the year the player's career started, the year the player's career ended and the player's lifetime average for each of the quantities (singles, doubles, triples, HR, BB). For simplicity we will simply compute the average of the rates by year (a more correct way is to go back to the totals). Step10: Show the head of the playerLS DataFrame. Step11: Problem 1(i) Compute the OPW for each player based on the average rates in the playerLS DataFrame. You can interpret this summary statistic as the predicted wins for a team with 9 batters exactly like the player in question. Add this column to the playerLS DataFrame. Call this column OPW. Step12: Problem 1(j) Add four columns to the playerLS DataFrame that contain the player's position (C, 1B, 2B, 3B, SS, LF, CF, RF, or OF), first name, last name and median salary. Step13: Show the head of the playerLS DataFrame. Step14: Problem 1(k) Subset the playerLS DataFrame for players active in 2002 and 2003 who played at least three years. Plot and describe the relationship between the median salary (in millions) and the predicted number of wins. Step15: Problem 1(L) Pick one player for each of these 10 positions: C, 1B, 2B, 3B, SS, LF, CF, RF, DH, or OF, keeping the total median salary of all 10 players below 20 million. Report their averaged predicted wins and total salary. Step16: Problem 1(m) What do these players outperform in? Singles, doubles, triples, HR or BB? Step17: Your answer here Step18: Discussion for Problem 1 The combination of grid search and 10-fold cross validation helped us pick a more accurate value of k for our KNN classifier. This helped us create an accurate KNN classifier for the iris dataset. Problem 2 Step19: Problem 2(a) Split the data into a train and a test set. Use a random selection of 33% of the samples as test data. Sklearn provides the train_test_split function for this purpose. Print the dimensions of all the train and test data sets you have created. 
Step20: Problem 2(b) Use ten-fold cross validation to estimate the optimal value for $k$ for the iris data set. Note Step21: Problem 2(c) Visualize the result by plotting the score results versus values for $k$. Step22: Verify that the grid search has indeed chosen the right parameter value for $k$. Step23: Problem 2(d) Test the performance of our tuned KNN classifier on the test set. Step24: Discussion for Problem 2 Write a brief discussion of your conclusions to the questions and tasks above in 100 words or less. Problem 3 Step25: Find the most important features Step26: Using 10-fold cross validation, separate the test and training data sets Step27: Start with a linear model and evaluate how well it can predict the price variable Step28: Try using Ridge regression and evaluate the result of the 10-fold cross-validation Step29: Train the Regression Tree and evaluate using 10-fold cross validation; specify the parameters used and how you changed them to increase the accuracy;
Python Code: # prepare the notebook for matplotlib %matplotlib inline import requests import StringIO import zipfile import numpy as np import pandas as pd # pandas import matplotlib.pyplot as plt # module for plotting # If this module is not already installed, you may need to install it. # You can do this by typing 'pip install seaborn' in the command line import seaborn as sns import sklearn import sklearn.datasets import sklearn.cross_validation import sklearn.decomposition import sklearn.grid_search import sklearn.neighbors import sklearn.metrics Explanation: Assignnement 2: Prediction and Classification Due: Thursday, April 30, 2015 11:59 PM Introduction Problem 3 is optional - for extra credit! Problems 1 and 2 will be graded for the Lab 2. In this assignment you will be using regression and classification to explore different data sets. First: You will use data from before 2002 in the Sean Lahman's Baseball Database to create a metric for picking baseball players using linear regression. This database contains the "complete batting and pitching statistics from 1871 to 2013, plus fielding statistics, standings, team stats, managerial records, post-season data, and more". Documentation provided here. http://saberseminar.com/wp-content/uploads/2012/01/saber-web.jpg Second: You will use the famous iris data set to perform a $k$-neareast neighbor classification using cross validation. While it was introduced in 1936, it is still one of the most popular example data sets in the machine learning community. Wikipedia describes the data set as follows: "The data set consists of 50 samples from each of three species of Iris (Iris setosa, Iris virginica and Iris versicolor). Four features were measured from each sample: the length and the width of the sepals and petals, in centimetres." Here is an illustration what the four features measure: http://sebastianraschka.com/Images/2014_python_lda/iris_petal_sepal.png Third: You will investigate the influence of higher dimensional spaces on the classification using another standard data set in machine learning called the The cars data set. Load Python modules End of explanation teams = pd.read_csv("data/Teams.csv") players = pd.read_csv("data/Batting.csv") salaries = pd.read_csv("data/Salaries.csv") fielding = pd.read_csv("data/Fielding.csv") master = pd.read_csv("data/Master.csv") Explanation: Problem 1: Sabermetrics Using data preceding the 2002 season pick 10 offensive players keeping the payroll under $20 million (assign each player the median salary). Predict how many games this team would win in a 162 game season. In this problem we will be returning to the Sean Lahman's Baseball Database. From this database, we will be extract five data sets containing information such as yearly stats and standing, batting statistics, fielding statistics, player names, player salaries and biographical information. You will explore the data in this database from before 2002 and create a metric for picking players. Problem 1(a) Load in these CSV files from the Sean Lahman's Baseball Database. For this assignment, we will use the 'Teams.csv', 'Batting.csv', 'Salaries.csv', 'Fielding.csv', 'Master.csv' tables. Read these tables into separate pandas DataFrames with the following names. 
CSV file name | Name of pandas DataFrame :---: | :---: Teams.csv | teams Batting.csv | players Salaries.csv | salaries Fielding.csv | fielding Master.csv | master End of explanation medians = salaries.groupby(["playerID"]).median() merged = pd.merge(medians, master, left_index=True, right_on="playerID") medianSalaries=merged[["playerID", "nameFirst", "nameLast", "salary"]] medianSalaries.reset_index(inplace=True, drop=True) medianSalaries.head() Explanation: Problem 1(b) Calculate the median salary for each player and create a pandas DataFrame called medianSalaries with four columns: (1) the player ID, (2) the first name of the player, (3) the last name of the player and (4) the median salary of the player. Show the head of the medianSalaries DataFrame. End of explanation g_filter = teams["G"] == 162 year_filter = teams["yearID"] > 1947 filtered_teams = teams[g_filter & year_filter] stats = filtered_teams[["teamID", "yearID", "W", "2B", "3B", "HR", "BB", "H", "AB"]].copy() stats["S"] = stats["H"] - (stats["2B"] + stats["3B"] + stats["HR"]) ## Calculate PA = AB + BB stats['PA'] = stats['AB'] + stats['BB'] ## For each of singles, doubles, triples, HR and BB, calculate plate appearance rate for i in ['S','2B','3B','HR','BB']: stats[i] = stats[i]/stats['PA'] stats.drop('H', axis=1, inplace=True) stats.drop('PA', axis=1, inplace=True) stats.drop('AB', axis=1, inplace=True) stats.sort("yearID", inplace=True) stats.head() #the yearly mean for each metric across teams -- in the next step I make these values 0 stats.groupby('yearID')["S","2B","3B","HR","BB"].mean().head() Explanation: Problem 1(c) Now, consider only team/season combinations in which the teams played 162 Games. Exclude all data from before 1947. Compute the per plate appearance rates for singles, doubles, triples, HR, and BB. Create a new pandas DataFrame called stats that has the teamID, yearID, wins and these rates. Hint: Singles are hits that are not doubles, triples, nor HR. Plate appearances are base on balls plus at bats. End of explanation for col in ["S","2B","3B","HR","BB"]: plt.scatter(stats.yearID, stats[col], alpha=0.5, marker='x') plt.title(col) plt.xlabel('Year') plt.ylabel('Rate') plt.show() Explanation: Problem 1(d) Is there a noticeable time trend in the rates computed computed in Problem 1(c)? End of explanation years = stats['yearID'].unique() year_stats_collection = [] for year in years: year_selector = stats["yearID"] == year year_stats = stats[year_selector] year_stats[['S', '2B', '3B', 'HR', 'BB']] = stats[year_selector][['S', '2B', '3B', 'HR', 'BB']].apply(lambda df: (df - df.mean()) / df.std()) year_stats_collection.append(year_stats) stats_adj = pd.concat(year_stats_collection) #yearly averages should eq 0 stats_adj.groupby('yearID')["S","2B","3B","HR","BB"].mean().head() Explanation: Problem 1(e) Using the stats DataFrame from Problem 1(c), adjust the singles per PA rates so that the average across teams for each year is 0. Do the same for the doubles, triples, HR, and BB rates. 
End of explanation from sklearn import linear_model from sklearn.cross_validation import train_test_split # labels=stats_adj[['S', '2B', '3B', 'HR', 'BB']].values training_selector = stats_adj['yearID'] < 2002 test_selector = stats_adj['yearID'] >= 2002 labels=["S", "2B", "3B", "HR", "BB"] training_data = stats_adj[training_selector] test_data = stats_adj[test_selector] X_train = training_data[labels].values y_train = training_data[['W']].values X_test = test_data[labels].values y_test = test_data[['W']].values # (stats_adj.shape, X_train.shape, y_train.shape, X_test.shape, y_test.shape) labels=["S", "2B", "3B", "HR", "BB"] data = stats_adj[labels].values target = stats_adj[['W']].values X_train, X_test, y_train, y_test = train_test_split(data, target, test_size=0.4, random_state=9) (X_test.shape, y_test.shape, X_train.shape) regr = linear_model.LinearRegression() regr.fit(X_train, y_train) # The coefficients print('Coefficients: \n', regr.coef_) # The mean square error print("Residual sum of squares: %.2f" % np.mean((regr.predict(X_test) - y_test) ** 2)) # Explained variance score: 1 is perfect prediction print('Variance score: %.2f' % regr.score(X_test, y_test)) # Plot outputs # plt.scatter(X_test, y_test, color='black') # plt.plot(X_test, regr.predict(X_test), color='blue',linewidth=3) # plt.xticks(()) # plt.yticks(()) # plt.show() pd.DataFrame(regr.coef_) plt.scatter(stats_adj[["HR"]].values, stats_adj[["W"]].values, color='black', marker="x") plt.scatter(stats_adj[["3B"]].values, stats_adj[["W"]].values, color='red', marker="x") plt.scatter(stats_adj[["2B"]].values, stats_adj[["W"]].values, color='blue', marker="x") plt.scatter(stats_adj[["S"]].values, stats_adj[["W"]].values, color='green', marker="x") plt.scatter(stats_adj[["BB"]].values, stats_adj[["W"]].values, color='purple', marker="x") Explanation: Problem 1(f) Build a simple linear regression model to predict the number of wins from the average adjusted singles, double, triples, HR, and BB rates. To decide which of these terms to include fit the model to data from 2002 and compute the average squared residuals from predictions to years past 2002. Use the fitted model to define a new sabermetric summary: offensive predicted wins (OPW). Hint: the new summary should be a linear combination of one to five of the five rates. 
End of explanation players["PA"] = players["AB"] + players["BB"] appearances_selector = players["PA"] >= 500 #assuming appearances = AB year_selector = players["yearID"] > 1947 games_selector = players["G"] == 162 filtered_players = players[appearances_selector & year_selector & games_selector] playerstats = filtered_players[["playerID", "yearID", "2B", "3B", "HR", "BB", "H", "AB"]].copy() playerstats["S"] = playerstats["H"] - (playerstats["2B"] + playerstats["3B"] + playerstats["HR"]) ## Calculate PA = AB + BB playerstats['PA'] = playerstats['AB'] + playerstats['BB'] ## For each of singles, doubles, triples, HR and BB, calculate plate appearance rate for i in ['S','2B','3B','HR','BB']: playerstats[i] = playerstats[i]/playerstats['PA'] playerstats.drop('AB', axis=1, inplace=True) playerstats.drop('PA', axis=1, inplace=True) playerstats.drop('H', axis=1, inplace=True) playerstats.index playerstats.sort("yearID", inplace=True) playerstats.tail() years = playerstats['yearID'].unique() year_stats_collection = [] for year in years: year_selector = playerstats["yearID"]==year year_stats = playerstats[year_selector] year_stats[['S', '2B', '3B', 'HR', 'BB']] = playerstats[year_selector][['S', '2B', '3B', 'HR', 'BB']].apply(lambda df: (df - df.mean()) / df.std()) year_stats_collection.append(year_stats) playerstats_adj = pd.concat(year_stats_collection) year_sel = playerstats_adj['yearID'] == 2002 playerstats_adj[year_sel].mean(0) Explanation: Your answer here: Problem 1(g) Now we will create a similar database for individual players. Consider only player/year combinations in which the player had at least 500 plate appearances. Consider only the years we considered for the calculations above (after 1947 and seasons with 162 games). For each player/year compute singles, doubles, triples, HR, BB per plate appearance rates. Create a new pandas DataFrame called playerstats that has the playerID, yearID and the rates of these stats. Remove the average for each year as for these rates as done in Problem 1(e). End of explanation playerstats_adj.head() Explanation: Show the head of the playerstats DataFrame. End of explanation left = playerstats_adj.groupby(["playerID"]).mean().reset_index() playerLS = master[["playerID","debut","finalGame"]].merge(left, how='inner', on="playerID") playerLS["debut"] = playerLS.debut.apply(lambda x: int(x[0:4])) playerLS["finalGame"] = playerLS.finalGame.apply(lambda x: int(x[0:4])) playerLS.drop('yearID', axis=1, inplace=True) Explanation: Problem 1(h) Using the playerstats DataFrame created in Problem 1(g), create a new DataFrame called playerLS containing the player's lifetime stats. This DataFrame should contain the playerID, the year the player's career started, the year the player's career ended and the player's lifetime average for each of the quantities (singles, doubles, triples, HR, BB). For simplicity we will simply compute the avaerage of the rates by year (a more correct way is to go back to the totals). End of explanation playerLS.head() Explanation: Show the head of the playerLS DataFrame. End of explanation playerLS["OPW"] = regr.predict(playerLS[["S", "2B", "3B", "HR", "BB"]].values) playerLS.head() Explanation: Problem 1(i) Compute the OPW for each player based on the average rates in the playerLS DataFrame. You can interpret this summary statistic as the predicted wins for a team with 9 batters exactly like the player in question. Add this column to the playerLS DataFrame. Call this colum OPW. 
End of explanation pos_df = fielding.groupby(["playerID"])["POS"].agg(lambda x:x.value_counts().index[0]) pos_df = pd.DataFrame(pos_df).reset_index() playerLS = pos_df.merge(playerLS, how='inner', on="playerID") playerLS_merged = playerLS.merge(medianSalaries, how='inner', on=['playerID']) Explanation: Problem 1(j) Add four columns to the playerLS DataFrame that contains the player's position (C, 1B, 2B, 3B, SS, LF, CF, RF, or OF), first name, last name and median salary. End of explanation playerLS_merged.head() Explanation: Show the head of the playerLS DataFrame. End of explanation _3_year_selector = playerLS_merged["finalGame"] - playerLS_merged["debut"] >= 3 start_selector = playerLS_merged["debut"] <= 2002 end_selector = playerLS_merged["finalGame"] >= 2003 active = playerLS_merged[_3_year_selector & start_selector & end_selector] fig = plt.figure() ax = fig.gca() ax.scatter(active.salary, active.OPW) ax.set_xlabel('Salary ') ax.set_ylabel('OPW') ax.set_title('Relationship between Salary and Predicted Number of Wins') plt.show() Explanation: Problem 1(k) Subset the playerLS DataFrame for players active in 2002 and 2003 and played at least three years. Plot and describe the relationship bewteen the median salary (in millions) and the predicted number of wins. End of explanation positions = list(set(playerLS_merged['POS'].values)) team = playerLS_merged.groupby('POS').min().reset_index() team team["salary"].sum() team["OPW"].mean() Explanation: Problem 1(L) Pick one players from one of each of these 10 position C, 1B, 2B, 3B, SS, LF, CF, RF, DH, or OF keeping the total median salary of all 10 players below 20 million. Report their averaged predicted wins and total salary. End of explanation attr = ['POS','2B', '3B', 'HR', 'BB', 'S'] team.sort_values(by='OPW', ascending=False)[attr].drop_duplicates().sum() Explanation: Problem 1(m) What do these players outperform in? Singles, doubles, triples HR or BB? End of explanation from sklearn.cross_validation import train_test_split from sklearn.ensemble import RandomForestClassifier #Use AllstarFull.csv to determine which players are Allstars allstar = pd.read_csv('Data/AllstarFull.csv') allstar = allstar[['playerID','yearID','GP']].drop_duplicates() players_allstar = pd.merge(players,allstar,on=['playerID'],how ='inner') players_allstar['GP'] = players_allstar['GP'].fillna(0) x = players_allstar.drop('GP',axis=1)._get_numeric_data().fillna(0) y = players_allstar['GP'] x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.20) rfc = RandomForestClassifier() rfc.fit(x_train, y_train) z = pd.DataFrame(zip(rfc.predict(x_test),y_test),columns = ['predicted','actual']) z['new'] = z['actual']-z['predicted'] print 'Score:','%.2f'%(100 - z[z['new']!=0]['new'].count()*100.0/z['new'].count()),'%' Explanation: Your answer here: Our team excells in 2B Use one of the classification methods to predict wheather a player will be an Allstar? End of explanation #load the iris data set from sklearn.datasets import load_iris iris = load_iris() Explanation: Discussion for Problem 1 The combination of grid search and 10 fold cross validation had had helped us pick a more accurate value of k for our KNN classifier. This helped us create an accurate KNN classifier for the iris dataset. Problem 2: $k$-Nearest Neighbors and Cross Validation What is the optimal $k$ for predicting species using $k$-nearest neighbor classification on the four features provided by the iris dataset. 
In this problem you will get to know the famous iris data set, and use cross validation to select the optimal $k$ for a $k$-nearest neighbor classification. This problem set makes heavy use of the sklearn library. In addition to Pandas, it is one of the most useful libraries for data scientists. For the Iris data set sklearn provides an extra function to load it - since it is one of the very commonly used data sets. End of explanation from sklearn.cross_validation import train_test_split X_train, X_test, Y_train, Y_test = train_test_split(iris.data, iris.target, test_size=0.33, random_state=9) Explanation: Problem 2(a) Split the data into a train and a test set. Use a random selection of 33% of the samples as test data. Sklearn provides the train_test_split function for this purpose. Print the dimensions of all the train and test data sets you have created. End of explanation # use cross validation to find the optimal value for k k = np.arange(20)+1 parameters = {'n_neighbors': k} knn = sklearn.neighbors.KNeighborsClassifier() clf = sklearn.grid_search.GridSearchCV(knn, parameters, cv=10) clf.fit(X_train, Y_train) Explanation: Problem 2(b) Use ten fold cross validation to estimate the optimal value for $k$ for the iris data set. Note: For your convenience sklearn does not only include the KNN classifier, but also a grid search function. The function is called grid search, because if you have to optimize more than one parameter, it is common practice to define a range of possible values for each parameter. An exhaustive search then runs over the complete grid defined by all the possible parameter combinations. This can get very computation heavy, but luckily our KNN classifier only requires tuning of a single parameter for this problem set. End of explanation a = clf.grid_scores_ scores = [b.cv_validation_scores for b in a] score_means = np.mean(scores, axis=1) sns.boxplot(scores) plt.scatter(k,score_means, c='k', zorder=2) plt.ylim(0.8, 1.1) plt.title('Accuracy as a function of $k$') plt.ylabel('Accuracy') plt.xlabel('Choice of k') plt.show() Explanation: Problem 2(c) Visualize the result by plotting the score results versus values for $k$. End of explanation clf.best_params_ Explanation: Verify that the grid search has indeed chosen the right parameter value for $k$. End of explanation from sklearn.metrics import classification_report for params, mean_score, scores in clf.grid_scores_: print("%0.3f (+/-%0.03f) for %r" % (mean_score, scores.std() * 2, params)) print("Detailed classification report:") print() print("The model is trained on the full development set.") print("The scores are computed on the full evaluation set.") print() y_true, y_pred = Y_test, clf.predict(X_test) print(classification_report(y_true, y_pred)) clf.best_score_ Explanation: Problem 2(d) Test the performance of our tuned KNN classifier on the test set. End of explanation header = ['symboling','normalized-losses','make','fuel-type','aspiration', 'num-of-doors', 'body-style','drive-wheels','engine-location', 'wheel-base','length','width','height','curb-weight','engine-type', 'num-of-cylinders','engine-size','fuel-system','bore','stroke', 'compression-ratio','horsepower','peak-rpm','city-mpg','highway-mpg', 'price'] auto = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/autos/imports-85.data', names=header) auto.head() Explanation: Discussion for Problem 2 Write a brief discussion of your conclusions to the questions and tasks above in 100 words or less. 
Problem 3: Supervised Learning - Estimating Boston house pricing using Linear Regression and Regression Trees Import the Boston House Pricing Dataset; it comes with the scikit or you can dowload it FROM UCI ML cars dataset. (https://archive.ics.uci.edu/ml/datasets/Automobile) End of explanation #Replace '?' with NaN auto = auto.replace(to_replace='?', value=np.nan) #Convert relevant numeric data from object to float change_num = ['bore','stroke','horsepower','peak-rpm','price'] auto[change_num]=auto[change_num].astype(float, inplace=True) #Select columns with numbers num_auto = auto._get_numeric_data().dropna() num_auto.head() import math from math import isnan def corr(list1, list2): '''Find correlation between 2 datasets. Removes NaNs''' #Remove Nan: new_list1 = [] new_list2 = [] for i1,i2 in zip(list1,list2): if (isnan(i1)==False) & (isnan(i2)==False): new_list1.append(float(i1)) new_list2.append(float(i2)) cov = np.cov([new_list1,new_list2])[0][1] std1 = np.std(new_list1) std2 = np.std(new_list2) corr = cov/(std1*std2) return corr #Taking features that has an abolute correlation of 0.5+ against price feature_list = num_auto.columns[0:-1] price = num_auto['price'] for feature in feature_list: attr = num_auto[feature] c = corr(attr,price) if np.sqrt(c**2) > 0.7: print feature,':\t\t%.2f'%c Explanation: Find the mostimportant features End of explanation #Attributes with highest correlation attr = ['width', 'curb-weight', 'engine-size', 'horsepower', 'highway-mpg','price','city-mpg'] #Picking important features new_auto = auto[attr].dropna() X = new_auto[new_auto.columns - ['price']] y = new_auto['price'] k_fold = sklearn.cross_validation.KFold(len(X), n_folds=10, shuffle=True, random_state=44) #Normalize the data X = (X - X.mean()) / (X.max() - X.min()) X.head() Explanation: Using 10-fold cross validation separate the test and training data sets End of explanation scores = [] for train_index, test_index in k_fold: X_train, X_test = X.values[train_index], X.values[test_index] y_train, y_test = y.values[train_index], y.values[test_index] lm = sklearn.linear_model.LinearRegression() lm.fit(X_train, y_train) scores.append(lm.score(X_test, y_test)) print 'Score:\t', '%.2f'%(np.mean(scores)*100),'%' Explanation: Start with a lineal model and evaluate how well it can predict the price variable End of explanation ### Your code here ### scores = [] for train_index, test_index in k_fold: X_train, X_test = X.values[train_index], X.values[test_index] y_train, y_test = y.values[train_index], y.values[test_index] rm = sklearn.linear_model.Ridge() rm.fit(X_train, y_train) predict = rm.predict(X_test) score = float(np.sum([i==j for i,j in zip(predict,y_test)])/len(y_test)) scores.append(rm.score(X_test, y_test)) print 'Score:\t', '%.2f'%(np.mean(scores)*100),'%' Explanation: Try using Ridge regression and evaluate the result of the 10-fold cross-validation End of explanation ### Your code here ### dtr = sklearn.tree.DecisionTreeRegressor() grid_search = sklearn.grid_search.GridSearchCV(dtr, {'min_samples_split':list(range(1,10)), 'min_samples_leaf':list(range(1,10)), 'min_weight_fraction_leaf':np.linspace(0,0.5,10)}, cv=10) grid_search.fit(X_train, y_train) print 'Best Parameters:' grid_search.best_params_ score = grid_search.score(X_test, y_test) print 'Score:\t','%.2f'%(score*100),'%' Explanation: Train the Regression Tree and evaluate using 10-fold cross validation; Specify the parapmeters used and how you changed them to increase the accuracy; End of explanation
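An extra check, not required by the assignment: assuming the objects defined above (X, y, k_fold and grid_search) are still in memory, the tuned tree can be scored with the same 10-fold split object using the old sklearn.cross_validation API used throughout this notebook.
best_tree = grid_search.best_estimator_
cv_scores = sklearn.cross_validation.cross_val_score(best_tree, X.values, y.values, cv=k_fold)
print 'Mean 10-fold R^2:','%.3f'%(np.mean(cv_scores))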
6,496
Given the following text description, write Python code to implement the functionality described below step by step Description: 20141230_2DPlotsonPythonP2.ipynb Two-dimensional plots on Python [Part II] Support material for the blog post "Two-dimensional plots on Python [Part II]", on Programming Science. Author Step1: Custom plot line Step2: A custom 2D plot, based on our first example.
Python Code: from pylab import * t = arange(0.0, 2.0,0.01) y = sin(2*pi*t) plot(t, y) xlabel('Time (s)') ylabel('Voltage (mV)') title('The simplest one, buddies') grid(True) show() Explanation: 20141230_2DPlotsonPythonP2.ipynb Two-dimensional plots on Python [Part II] Support material for the blog post "Two-dimensional plots on Python [Part II]", on Programming Science. Author: Alexandre 'Jaguar' Fioravante de Siqueira Contact: http://programmingscience.org/?page_id=26 Support material: http://www.github.com/programmingscience/code In order to cite this material, please use the reference below (this is a Chicago-like style): de Siqueira, Alexandre Fioravante. "Two-dimensional plots on Python [Part II]". Programming Science. 2014, Dec 30. Available at http://www.programmingscience.org/?p=33. Access date: (please put your access date here). Copyright (C) Alexandre Fioravante de Siqueira This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see http://www.gnu.org/licenses/. Custom 2D plots. Generating a simple 2D plot. End of explanation from pylab import * t = arange(0.0, 2.0,0.01) y = sin(2*pi*t) plot(t, y, color='red') xlabel('Time (s)') ylabel('Voltage (mV)') title('The simplest one, buddies') grid(True) show() Explanation: Custom plot line: color='red'. End of explanation from pylab import * t = arange(0.0, 2.0,0.01) y = sin(2*pi*t) plot(t, y, color='green', linestyle='-.', linewidth=3) xlabel('Time (s)', fontweight='bold', fontsize=14) ylabel('Voltage (mV)', fontweight='bold', fontsize=14) title('The simplest one, buddies') grid(True) show() Explanation: A custom 2D plot, based on our first example. End of explanation
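The three pylab-style examples above all drive the implicit global figure. As a hedged alternative (my sketch, not part of the original post), the same custom plot can be written against the explicit Figure/Axes interface, which scales better once several plots live in one script:

# Sketch: the same custom sine plot with the explicit pyplot/Axes interface
import numpy as np
import matplotlib.pyplot as plt

t = np.arange(0.0, 2.0, 0.01)
y = np.sin(2 * np.pi * t)

fig, ax = plt.subplots()
ax.plot(t, y, color='green', linestyle='-.', linewidth=3)
ax.set_xlabel('Time (s)', fontweight='bold', fontsize=14)
ax.set_ylabel('Voltage (mV)', fontweight='bold', fontsize=14)
ax.set_title('The simplest one, buddies')
ax.grid(True)
plt.show()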
6,497
Given the following text description, write Python code to implement the functionality described below step by step Description: <h1 align="center">First of all -- Checking Questions</h1> Вопрос 1 Step1: Mah Neural Network Step2: Training You first have to implement a batch generator Than the network will get trained the usual way Step3: Main loop We recommend you to periodically evaluate the network using the next "apply trained model" block its safe to interrupt training, run a few examples and start training again Step4: apply trained model Step5: Generate caption
Python Code: %%time # Read Dataset import numpy as np import pickle img_codes = np.load("data/image_codes.npy") captions = pickle.load(open('data/caption_tokens.pcl', 'rb')) print "each image code is a 1000-unit vector:", img_codes.shape print img_codes[0,:10] print '\n\n' print "for each image there are 5-7 descriptions, e.g.:\n" print '\n'.join(captions[0]) #split descriptions into tokens for img_i in range(len(captions)): for caption_i in range(len(captions[img_i])): sentence = captions[img_i][caption_i] captions[img_i][caption_i] = ["#START#"]+sentence.split(' ')+["#END#"] # Build a Vocabulary ############# TO CODE IT BY YOURSELF ################## from collections import Counter word_counts = Counter(word for pic in captions for line in pic for word in line[1:-1]) # <here should be dict word:number of entrances> # not to double START and END vocab = ['#UNK#', '#START#', '#END#'] vocab += [k for k, v in word_counts.items() if v >= 5] n_tokens = len(vocab) assert 10000 <= n_tokens <= 10500 word_to_index = {w: i for i, w in enumerate(vocab)} PAD_ix = -1 UNK_ix = vocab.index('#UNK#') def as_matrix(sequences,max_len=None): max_len = max_len or max(map(len,sequences)) matrix = np.zeros((len(sequences),max_len),dtype='int32')+PAD_ix for i,seq in enumerate(sequences): row_ix = [word_to_index.get(word,UNK_ix) for word in seq[:max_len]] matrix[i,:len(row_ix)] = row_ix return matrix #try it out on several descriptions of a random image as_matrix(captions[1337]) Explanation: <h1 align="center">First of all -- Checking Questions</h1> Вопрос 1: Можно ли использовать сверточные сети для классификации текстов? Если нет обоснуйте :D, если да то как? как решить проблему с произвольной длинной входа? <Ответ> Да, есть 2 подхода. 1. Через кодирование букв. Пусть алфавит размера $m$, тогда закодируем все буквы векторами длины m. Теперь возьмем первые $l$ символов текста, причем $l$ достаточно большим, чтобы было достаточно информацию. Теперь получили матрицу $m \times l$, дадим ее в сверточную сеть (как бы одномерный фильтр с $m$ каналов). 2. Каждому слово сопостовляем вектор фиксированной длины (one-hot, W2V и т.д.). Таким образом тексту сопоставляется матрица, которую мы как картинку отдаем сверточной сети. Для произвольной длины можно дополнять матрицы текстов, если они короче чем нужно, пока они влезают в память. Если не влезает, то обрабатывать по кускам. Вопрос 2: Чем LSTM лучше/хуже чем обычная RNN? <Ответ> В обычной RNN градиент может "взрываться" или наоборот "затухать", что очевидно плохо сказывается на обучении. В LSTM мы навешиваем контроль памяти, этим контролируя затухание градиента, например с помощью forget_gate (обнуления градиентов от некоторых предыдущих состояний) и другая работа со скрытым состоянием. При этом остается проблема "взрыва" градиента (которую можно частично решать grad_clipping-ом) Вопрос 3: Выпишите производную $\frac{d c_{n+1}}{d c_{k}}$ для LSTM http://colah.github.io/posts/2015-08-Understanding-LSTMs/, объясните формулу, когда производная затухает, когда взрывается? <Ответ> $\frac{dc_{n+1}}{dc_k} = \frac{dc_{n+1}}{dc_n} \cdot \frac{dc_n}{dc_k} = diag(\prod_{i=k+1}^{n+1} f_i)$, diag(вектора) - матрица с координатами вектора как диагональные элементы матрицы. Если $\frac{dc_{n+1}}{dc_k} > 1$ (имеется в виду диагональный элемент), то градиент "взрывается" и это плохо, если $<1$, то затухает, но это мы контролируем через forget_gate в LSTM. Вопрос 4: Зачем нужен TBPTT почему BPTT плох? 
<Ответ> BPTT плох тем, что на большом удалении градиент сильно затухает и практически не влияет ни на что. Поэтому, чтобы не тратить время на подсчет там, где это бесполезно, можно "обрезать" распространение градиента, что и делает TBPTT. Вопрос 5: Как комбинировать рекуррентные и сверточные сети, а главное зачем? Приведите несколько примеров реальных задач. <Ответ> Выход сверточных сетей, если убрать последний слой (подсчет вероятностей в softmax), дает векторное описание картинок, которое мы используем как начальное скрытое состояние для рекурентной сети. Соответственно применяем комбинацию этих сетей для image captioning и похожих задач, например, генерация речи, подпись видео. Вопрос 6: Объясните интуицию выбора размера эмбединг слоя? почему это опасное место? <Ответ> Если размер Embedding-а меньше нужного, то мы не сможем все закодировать (т.е. потеряем часть информации), что, очевидно, плохо и приводит к недообучению. С другой стороны избыточный размер приводит к тому, что мы тратим лишнюю память, а кроме того добавляем неинформативные фичи(так как все уже закодировано) и приводит к переобучению. Arseniy Ashuha, you can text me [email protected], Александр Панин <h1 align="center"> Image Captioning </h1> In this seminar you'll be going through the image captioning pipeline. It can help u https://ars-ashuha.ru/slides/2016.11.11_ImageCaptioning/image_captionong.pdf To begin with, let us download the dataset of image features from a pre-trained GoogleNet. !wget https://www.dropbox.com/s/3hj16b0fj6yw7cc/data.tar.gz?dl=1 -O data.tar.gz !tar -xvzf data.tar.gz Data preprocessing End of explanation # network shapes. CNN_FEATURE_SIZE = img_codes.shape[1] EMBED_SIZE = 128 * 2 #pls change me if u want LSTM_UNITS = 700 #pls change me if u want import theano import lasagne import theano.tensor as T from lasagne.layers import * # Input Variable sentences = T.imatrix()# [batch_size x time] of word ids image_vectors = T.matrix() # [batch size x unit] of CNN image features sentence_mask = T.neq(sentences, PAD_ix) #network inputs l_words = InputLayer((None, None), sentences) l_mask = InputLayer((None, None), sentence_mask) #embeddings for words ############# TO CODE IT BY YOURSELF ################## l_word_embeddings = EmbeddingLayer(l_words, n_tokens, EMBED_SIZE) # input layer for image features l_image_features = InputLayer((None, CNN_FEATURE_SIZE), image_vectors) ############# TO CODE IT BY YOURSELF ################## #convert 1000 image features from googlenet to whatever LSTM_UNITS you have set #it's also a good idea to add some dropout here and there l_image_features_small = DropoutLayer(l_image_features) l_image_features_small = DenseLayer(l_image_features_small, LSTM_UNITS) assert l_image_features_small.output_shape == (None, LSTM_UNITS) ############# TO CODE IT BY YOURSELF ################## # Concatinate image features and word embedings in one sequence decoder = LSTMLayer(l_word_embeddings, num_units=LSTM_UNITS, cell_init=l_image_features_small, mask_input=l_mask, grad_clipping=1e30) # Decoding of rnn hiden states from broadcast import BroadcastLayer,UnbroadcastLayer #apply whatever comes next to each tick of each example in a batch. 
Equivalent to 2 reshapes broadcast_decoder_ticks = BroadcastLayer(decoder, (0, 1)) print "broadcasted decoder shape = ",broadcast_decoder_ticks.output_shape predicted_probabilities_each_tick = DenseLayer( broadcast_decoder_ticks,n_tokens, nonlinearity=lasagne.nonlinearities.softmax) #un-broadcast back into (batch,tick,probabilities) predicted_probabilities = UnbroadcastLayer( predicted_probabilities_each_tick, broadcast_layer=broadcast_decoder_ticks) print "output shape = ", predicted_probabilities.output_shape #remove if you know what you're doing (e.g. 1d convolutions or fixed shape) assert predicted_probabilities.output_shape == (None, None, 10371) next_word_probas = get_output(predicted_probabilities) reference_answers = sentences[:,1:] output_mask = sentence_mask[:,1:] #write symbolic loss function to train NN for loss = lasagne.objectives.categorical_crossentropy( next_word_probas[:, :-1].reshape((-1, n_tokens)), reference_answers.reshape((-1,)) ).reshape(reference_answers.shape) ############# TO CODE IT BY YOURSELF ################## loss = (loss * output_mask).sum() / output_mask.sum() #trainable NN weights ############# TO CODE IT BY YOURSELF ################## weights = get_all_params(predicted_probabilities) updates = lasagne.updates.adam(loss, weights) #compile a function that takes input sentence and image mask, outputs loss and updates weights #please not that your functions must accept image features as FIRST param and sentences as second one ############# TO CODE IT BY YOURSELF ################## train_step = theano.function([image_vectors, sentences], loss, updates=updates) val_step = theano.function([image_vectors, sentences], loss) Explanation: Mah Neural Network End of explanation captions = np.array(captions) from random import choice def generate_batch(images,captions,batch_size,max_caption_len=None): #sample random numbers for image/caption indicies random_image_ix = np.random.randint(0, len(images), size=batch_size) #get images batch_images = images[random_image_ix] #5-7 captions for each image captions_for_batch_images = captions[random_image_ix] #pick 1 from 5-7 captions for each image batch_captions = map(choice, captions_for_batch_images) #convert to matrix batch_captions_ix = as_matrix(batch_captions,max_len=max_caption_len) return batch_images, batch_captions_ix generate_batch(img_codes,captions, 3) Explanation: Training You first have to implement a batch generator Than the network will get trained the usual way End of explanation batch_size = 200 #50 #adjust me n_epochs = 100 #adjust me n_batches_per_epoch = 50 #adjust me n_validation_batches = 5 #how many batches are used for validation after each epoch from tqdm import tqdm for epoch in range(n_epochs): train_loss=0 for _ in tqdm(range(n_batches_per_epoch)): train_loss += train_step(*generate_batch(img_codes,captions,batch_size)) train_loss /= n_batches_per_epoch val_loss=0 for _ in range(n_validation_batches): val_loss += val_step(*generate_batch(img_codes,captions,batch_size)) val_loss /= n_validation_batches print('\nEpoch: {}, train loss: {}, val loss: {}'.format(epoch, train_loss, val_loss)) print("Finish :)") from tqdm import tqdm for epoch in range(n_epochs): train_loss=0 for _ in tqdm(range(n_batches_per_epoch)): train_loss += train_step(*generate_batch(img_codes,captions,batch_size)) train_loss /= n_batches_per_epoch val_loss=0 for _ in range(n_validation_batches): val_loss += val_step(*generate_batch(img_codes,captions,batch_size)) val_loss /= n_validation_batches print('\nEpoch: {}, train loss: 
{}, val loss: {}'.format(epoch, train_loss, val_loss)) print("Finish :)") Explanation: Main loop We recommend you to periodically evaluate the network using the next "apply trained model" block its safe to interrupt training, run a few examples and start training again End of explanation #the same kind you did last week, but a bit smaller from pretrained_lenet import build_model,preprocess,MEAN_VALUES # build googlenet lenet = build_model() #load weights lenet_weights = pickle.load(open('data/blvc_googlenet.pkl'))['param values'] set_all_param_values(lenet["prob"], lenet_weights) #compile get_features cnn_input_var = lenet['input'].input_var cnn_feature_layer = lenet['loss3/classifier'] get_cnn_features = theano.function([cnn_input_var], lasagne.layers.get_output(cnn_feature_layer)) from matplotlib import pyplot as plt %matplotlib inline #sample image img = plt.imread('data/Dog-and-Cat.jpg') img = preprocess(img) #deprocess and show, one line :) from pretrained_lenet import MEAN_VALUES plt.imshow(np.transpose((img[0] + MEAN_VALUES)[::-1],[1,2,0]).astype('uint8')) Explanation: apply trained model End of explanation last_word_probas_det = get_output(predicted_probabilities,deterministic=False)[:,-1] get_probs = theano.function([image_vectors,sentences], last_word_probas_det) #this is exactly the generation function from week5 classwork, #except now we condition on image features instead of words def generate_caption(image,caption_prefix = ("START",),t=1,sample=True,max_len=100): image_features = get_cnn_features(image) caption = list(caption_prefix) for _ in range(max_len): next_word_probs = get_probs(image_features,as_matrix([caption]) ).ravel() #apply temperature next_word_probs = next_word_probs**t / np.sum(next_word_probs**t) if sample: next_word = np.random.choice(vocab,p=next_word_probs) else: next_word = vocab[np.argmax(next_word_probs)] caption.append(next_word) if next_word=="#END#": break return caption for i in range(10): print ' '.join(generate_caption(img,t=1.)[1:-1]) Explanation: Generate caption End of explanation
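Temperature-sampled captions like the ones printed above can vary a lot between runs. A small optional helper — my own sketch built only on the functions already defined here (get_cnn_features, get_probs, generate_caption, as_matrix, word_to_index), not part of the original seminar — draws several samples and keeps the one the model itself scores highest:

# Hedged sketch: rerank sampled captions by their total log-probability under the model.
import numpy as np

def caption_log_prob(image_features, caption):
    # Sum of log p(word_t | image, words_<t) over the generated words (skips the prefix word)
    log_p = 0.0
    for t in range(1, len(caption)):
        probs = get_probs(image_features, as_matrix([caption[:t]])).ravel()
        log_p += np.log(probs[word_to_index[caption[t]]] + 1e-30)
    return log_p

def best_of_n_captions(image, n=10, t=1.0):
    # Sample n captions, then keep the one with the highest model log-likelihood
    image_features = get_cnn_features(image)
    candidates = [generate_caption(image, t=t) for _ in range(n)]
    return max(candidates, key=lambda c: caption_log_prob(image_features, c))

print(' '.join(best_of_n_captions(img)[1:-1]))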
6,498
Given the following text description, write Python code to implement the functionality described below step by step Description: Extinction Step1: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details. Step2: Adopt system parameters from Rebassa-Mansergas+ 2019. Step3: Now we'll create datasets for LSST u,g,r, and i bands. Step4: And set options for the atmospheres and limb-darkening. Step5: We'll set the inclination to 90 degrees and set some compute options. Step6: For comparison, we'll first compute a model with zero extinction. Step7: And then a second model with extinction. Step8: Finally we'll convert the output fluxes to magnitudes and format the figure.
Python Code: !pip install -I "phoebe>=2.2,<2.3" Explanation: Extinction: White Dwarf - Subdwarf Binary In this example, we'll reproduce Figure 4 in the extinction release paper (Jones et al. 2020). "SDSS J2355 is a short-period post-CE binary comprising a relatively cool white dwarf (Teff∼13,250 K) and a low-mass, metal-poor, sub-dwarf star (spectral type ∼sdK7). As before, calculating synthetic light curves for the system with no extinction and then with extinction consistent with the Galactic bulge, we now see significant deviations between the two models in u, g and r bands" (Jones et al. 2020) <img src="jones+20_fig4.png" alt="Figure 4" width="600px"/> Setup Let's first make sure we have the latest version of PHOEBE 2.2 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release). End of explanation import matplotlib matplotlib.rcParams['text.usetex'] = True matplotlib.rcParams['pdf.fonttype'] = 42 matplotlib.rcParams['ps.fonttype'] = 42 matplotlib.rcParams['mathtext.fontset'] = 'stix' matplotlib.rcParams['font.family'] = 'STIXGeneral' from matplotlib import gridspec %matplotlib inline import phoebe from phoebe import u # units import numpy as np import matplotlib.pyplot as plt logger = phoebe.logger('error') b = phoebe.default_binary() Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details. End of explanation b.set_value('period', component='binary', value=0.0897780065*u.d) b.set_value('teff', component='primary', value=13247*u.K) b.set_value('teff', component='secondary', value=3650*u.K) b.set_value('requiv', component='primary', value=0.0160*u.solRad) b.set_value('requiv', component='secondary', value=0.1669*u.solRad) b.flip_constraint('mass@primary', solve_for='sma@binary') b.set_value('mass', component='primary', value=0.4477*u.solMass) b.flip_constraint('mass@secondary', solve_for='q') b.set_value('mass', component='secondary', value=0.1501*u.solMass) Explanation: Adopt system parameters from Rebassa-Mansergas+ 2019. End of explanation period = b.get_value('period', component='binary') times=phoebe.linspace(-0.1*period, 0.6*period, 501) b.add_dataset('lc', times=times, dataset='u', passband="LSST:u") b.add_dataset('lc', times=times, dataset='g', passband="LSST:g") b.add_dataset('lc', times=times, dataset='r', passband="LSST:r") b.add_dataset('lc', times=times, dataset='i', passband="LSST:i") Explanation: Now we'll create datasets for LSST u,g,r, and i bands. End of explanation b.set_value_all('atm', component='primary', value='blackbody') b.set_value_all('ld_mode', component='primary', value='manual') b.set_value_all('ld_func', component='primary', value='quadratic') b.set_value('ld_coeffs', component='primary', dataset='u', value=[0.2665,0.2544]) b.set_value('ld_coeffs', component='primary', dataset='g', value=[0.1421,0.3693]) b.set_value('ld_coeffs', component='primary', dataset='r', value=[0.1225,0.3086]) b.set_value('ld_coeffs', component='primary', dataset='i', value=[0.1063,0.2584]) b.set_value_all('ld_mode_bol@primary','manual') b.set_value_all('ld_func_bol@primary','quadratic') b.set_value('ld_coeffs_bol', component='primary', value=[0.1421,0.3693]) b.set_value_all('atm', component='secondary', value='phoenix') b.set_value('abun', component='secondary', value=-1.55) Explanation: And set options for the atmospheres and limb-darkening. 
End of explanation b.set_value('incl', component='binary', value=90.0*u.deg) b.set_value_all('ntriangles', value=10000) b.set_value_all('intens_weighting', value='photon') b.set_value_all('Rv', value=2.5) Explanation: We'll set the inclination to 90 degrees and set some compute options. End of explanation b.set_value_all('Av', value=0.0) b.run_compute(model='noext',overwrite=True) Explanation: For comparison, we'll first compute a model with zero extinction. End of explanation b.set_value_all('Av',2.0) b.run_compute(model='ext',overwrite=True) Explanation: And then a second model with extinction. End of explanation uextmags=-2.5*np.log10(b['value@fluxes@u@ext@model']) unoextmags=-2.5*np.log10(b['value@fluxes@u@noext@model']) uextmags_norm=uextmags-uextmags.min()+1 unoextmags_norm=unoextmags-unoextmags.min()+1 uresid=uextmags_norm-unoextmags_norm gextmags=-2.5*np.log10(b['value@fluxes@g@ext@model']) gnoextmags=-2.5*np.log10(b['value@fluxes@g@noext@model']) gextmags_norm=gextmags-gextmags.min()+1 gnoextmags_norm=gnoextmags-gnoextmags.min()+1 gresid=gextmags_norm-gnoextmags_norm rextmags=-2.5*np.log10(b['value@fluxes@r@ext@model']) rnoextmags=-2.5*np.log10(b['value@fluxes@r@noext@model']) rextmags_norm=rextmags-rextmags.min()+1 rnoextmags_norm=rnoextmags-rnoextmags.min()+1 rresid=rextmags_norm-rnoextmags_norm iextmags=-2.5*np.log10(b['value@fluxes@i@ext@model']) inoextmags=-2.5*np.log10(b['value@fluxes@i@noext@model']) iextmags_norm=iextmags-iextmags.min()+1 inoextmags_norm=inoextmags-inoextmags.min()+1 iresid=iextmags_norm-inoextmags_norm fig=plt.figure(figsize=(12,12)) gs=gridspec.GridSpec(4,2,height_ratios=[4,1,4,1],width_ratios=[1,1]) ax=plt.subplot(gs[0,0]) ax.plot(b['value@times@u@noext@model']/7.,unoextmags_norm,color='k',linestyle="--") ax.plot(b['value@times@u@ext@model']/7.,uextmags_norm,color='k',linestyle="-") ax.set_ylabel('Magnitude') ax.set_xticklabels([]) ax.set_ylim([6.2,0.95]) ax.set_title('(a) LSST u') ax2=plt.subplot(gs[0,1]) ax2.plot(b['value@times@g@noext@model']/b['period@orbit'].quantity,gnoextmags_norm,color='k',linestyle="--") ax2.plot(b['value@times@g@ext@model']/b['period@orbit'].quantity,gextmags_norm,color='k',linestyle="-") ax2.set_ylabel('Magnitude') ax2.set_xticklabels([]) ax2.set_ylim([3.2,0.95]) ax2.set_title('(b) LSST g') ax_1=plt.subplot(gs[1,0]) ax_1.plot(b['value@times@u@noext@model']/b['period@orbit'].quantity,uresid,color='k',linestyle='-') ax_1.set_ylabel(r'$\Delta m$') ax_1.set_xlabel('Phase') ax_1.set_ylim([0.05,-0.3]) ax_1.axhline(y=0., linestyle='dashed',color='k',linewidth=0.5) ax2_1=plt.subplot(gs[1,1]) ax2_1.plot(b['value@times@g@noext@model']/b['period@orbit'].quantity,gresid,color='k',linestyle='-') ax2_1.set_ylabel(r'$\Delta m$') ax2_1.set_xlabel('Phase') ax2_1.set_ylim([0.05,-0.3]) ax2_1.axhline(y=0., linestyle='dashed',color='k',linewidth=0.5) ax3=plt.subplot(gs[2,0]) ax3.plot(b['value@times@r@noext@model']/b['period@orbit'].quantity,rnoextmags_norm,color='k',linestyle="--") ax3.plot(b['value@times@r@ext@model']/b['period@orbit'].quantity,rextmags_norm,color='k',linestyle="-") ax3.set_ylabel('Magnitude') ax3.set_xticklabels([]) ax3.set_ylim([2.0,0.95]) ax3.set_title('(c) LSST r') ax4=plt.subplot(gs[2,1]) ax4.plot(b['value@times@i@noext@model']/b['period@orbit'].quantity,inoextmags_norm,color='k',linestyle="--") ax4.plot(b['value@times@i@ext@model']/b['period@orbit'].quantity,iextmags_norm,color='k',linestyle="-") ax4.set_ylabel('Magnitude') ax4.set_xticklabels([]) ax4.set_ylim([1.6,0.95]) ax4.set_title('(d) LSST i') 
ax3_1=plt.subplot(gs[3,0]) ax3_1.plot(b['value@times@r@noext@model']/b['period@orbit'].quantity,rresid,color='k',linestyle='-') ax3_1.set_ylabel(r'$\Delta m$') ax3_1.set_xlabel('Phase') ax3_1.set_ylim([0.01,-0.03]) ax3_1.axhline(y=0., linestyle='dashed',color='k',linewidth=0.5) ax4_1=plt.subplot(gs[3,1]) ax4_1.plot(b['value@times@i@noext@model']/b['period@orbit'].quantity,iresid,color='k',linestyle='-') ax4_1.set_ylabel(r'$\Delta m$') ax4_1.set_xlabel('Phase') ax4_1.set_ylim([0.01,-0.03]) ax4_1.axhline(y=0., linestyle='dashed',color='k',linewidth=0.5) ax_1.axhspan(-0.0075,0.0075,color='lightgray') ax2_1.axhspan(-0.005,0.005,color='lightgray') ax3_1.axhspan(-0.005,0.005,color='lightgray') ax4_1.axhspan(-0.005,0.005,color='lightgray') plt.tight_layout() fig.canvas.draw() Explanation: Finally we'll convert the output fluxes to magnitudes and format the figure. End of explanation
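The flux-to-magnitude block above repeats the same three steps for each of the four bands. As an optional tidy-up — my sketch, not part of the published figure script — the conversion can be factored into a helper that pulls fluxes through the same b['value@fluxes@...'] twigs used above:

# Sketch: one helper for the repeated flux -> normalized-magnitude conversion.
# Assumes the PHOEBE bundle b with the 'ext' and 'noext' models computed above.
import numpy as np

def normalized_mags(b, band, model):
    # Convert model fluxes to magnitudes, shifted so the brightest point sits at mag 1
    fluxes = b['value@fluxes@{}@{}@model'.format(band, model)]
    mags = -2.5 * np.log10(fluxes)
    return mags - mags.min() + 1

residuals = {}
for band in ['u', 'g', 'r', 'i']:
    ext = normalized_mags(b, band, 'ext')
    noext = normalized_mags(b, band, 'noext')
    residuals[band] = ext - noext

The band-by-band arrays can then be plotted exactly as in the figure code above.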
6,499
Given the following text description, write Python code to implement the functionality described below step by step Description: 함수 정의하기 및 실행하기 함수 정의하기 Step2: 함수정의의 문서화 프로그래밍 코드를 저장한 파일에는 코드 이외에 코드와 관련된 주석을 적절하게 포함하고 있어야 한다. 이를 "문서화"라 한다. 문서화는 코드 이상으로 중요하다. 문서화가 제대로 되어있지 않은 프로그램 코드 파일은 코드 개발 및 관리를 매우 어렵게 만든다. 문서화의 기본은 함수에 주석을 다는 것이다. 함수정의에 사용되는 주석을 "docstring(문서화 문자열)"이라 부른다. 함수에 주석을 달아주면 help 함수를 이용하여 해당 함수의 역할 및 사용법을 확인할 수 있다. 앞서 help("abs") 명령을 실행하여 abs 함수가 절대값을 return하는 함수임을 알 수 있었음에 주의하라. Step3: 함수 관련 용어 함수를 호출(실행)하기 위해서는 함수이름(인자1, 인자2, ...) 형태로 사용한다. 예를 들어 mysum(2,3) 처럼 mysum 함수를 호출한다. Step4: mysum(x,y)에서 x와 y는 mysum 함수를 호출할 때 사용되는 "인자(argument)" 들이다. z는 mysum 함수를 두 개의 인자 x, y에 대해 호출하였을 때 "되돌려주는 값(리턴값, return value)"이다. 함수를 호출할 때 사용되는 인자의 개수는 0개 이상이다. 리턴값이 없는 것처럼 보이는 함수도 존재한다. 대표적으로 print 함수가 그렇다. 즉, print 함수의 정의를 확인하면 return 값이 정의되어 있지 않다. 하지만 return 값이 명시되어 있지 않아도 파이썬은 None이라는 특별한 값을 리턴한다고 간주한다. 리턴값이 있는/없는 함수 Step5: 주의 Step6: 모듈(Module) 모듈은 동일한 분야에서 사용되는 함수 및 코드들을 한데로 모아놓은 파이썬 파일(확장자가 .py인 파일)이다. 예를 들어 operator 모듈은 파이썬에서 기본적으로 제공하는 계산 관련 함수들의 정의들을 갖고 있는 파일이다. 덧셈, 뺄셈, 나눗셈 등 기초 연산자들이 아닌 다른 함수들, 예를 들어, sin, cos, log 등 고등수학 관련 함수들은 math 라는 모듈에 정의되어 있다. operator 모듈 +, -, *, /, //, ** 등은 파이썬이 기본적으로 제공하는 함수들을 대신하는 기호들이다. 각 기호들에 해당하는 진짜 함수들은 operator 모듈에 정의되어 있다. 예를 들어, + 기호는 operator 모듈의 add 함수를 나타낸다. 기타 기호들의 함수들에 대해서는 아래 링크를 참조하면 된다. https Step7: dir 함수를 이용하면 operator 모듈에 정의되어 있는 함수들의 목록을 확인할 수 있다. Step8: 모듈 사용하는 법 모듈을 사용하려면 해당 모듈을 import 해야 한다. 예를 들어, sin(10) 값을 구하기 위해 math 모듈을 임포트해야 하는데 임포트 방식은 크게 세 가지로 나뉜다. import math 명령을 실행한 후에 math.sin(10) 등 처럼 실행하면 된다. 모듈 이름을 임의로 고쳐서 임포트 할 수 있다 Step9: math 모듈에 들어있는 함수들을 종종 확인할 필요가 있다. 강의에서 가장 많이 사용하는 모듈 중에 하나이다.
Python Code: def mysum(a, b): return a + b Explanation: 함수 정의하기 및 실행하기 함수 정의하기: def 키워드를 이용 함수를 정의하려면 def 키워드를 이용한다. def 함수이름(인자, ...): 형태로 정의한다. 콜론(:)을 항상 사용해야 함에 주의할 것. 함수의 본체(body)는 들여쓰기를 해야한다. 들여쓰기는 선택이 아닌 의무사항이다. End of explanation def mysum(a, b): 내가 정의한 덧셈이다. 인자 a와 b에 각각 두 숫자를 입력받아 합을 되돌려준다. return a + b help(mysum) Explanation: 함수정의의 문서화 프로그래밍 코드를 저장한 파일에는 코드 이외에 코드와 관련된 주석을 적절하게 포함하고 있어야 한다. 이를 "문서화"라 한다. 문서화는 코드 이상으로 중요하다. 문서화가 제대로 되어있지 않은 프로그램 코드 파일은 코드 개발 및 관리를 매우 어렵게 만든다. 문서화의 기본은 함수에 주석을 다는 것이다. 함수정의에 사용되는 주석을 "docstring(문서화 문자열)"이라 부른다. 함수에 주석을 달아주면 help 함수를 이용하여 해당 함수의 역할 및 사용법을 확인할 수 있다. 앞서 help("abs") 명령을 실행하여 abs 함수가 절대값을 return하는 함수임을 알 수 있었음에 주의하라. End of explanation x = 2 y = 3 z = mysum(x,y) Explanation: 함수 관련 용어 함수를 호출(실행)하기 위해서는 함수이름(인자1, 인자2, ...) 형태로 사용한다. 예를 들어 mysum(2,3) 처럼 mysum 함수를 호출한다. End of explanation def print42(): print(42) def return42(): return 42 b = return42() b a = print42() print(a) Explanation: mysum(x,y)에서 x와 y는 mysum 함수를 호출할 때 사용되는 "인자(argument)" 들이다. z는 mysum 함수를 두 개의 인자 x, y에 대해 호출하였을 때 "되돌려주는 값(리턴값, return value)"이다. 함수를 호출할 때 사용되는 인자의 개수는 0개 이상이다. 리턴값이 없는 것처럼 보이는 함수도 존재한다. 대표적으로 print 함수가 그렇다. 즉, print 함수의 정의를 확인하면 return 값이 정의되어 있지 않다. 하지만 return 값이 명시되어 있지 않아도 파이썬은 None이라는 특별한 값을 리턴한다고 간주한다. 리턴값이 있는/없는 함수 End of explanation return42() print42() Explanation: 주의: IPython의 경우 리턴값이 있는 함수의 경우에만 out[] 문장이 함께 사용된다. End of explanation import operator help(operator.add) Explanation: 모듈(Module) 모듈은 동일한 분야에서 사용되는 함수 및 코드들을 한데로 모아놓은 파이썬 파일(확장자가 .py인 파일)이다. 예를 들어 operator 모듈은 파이썬에서 기본적으로 제공하는 계산 관련 함수들의 정의들을 갖고 있는 파일이다. 덧셈, 뺄셈, 나눗셈 등 기초 연산자들이 아닌 다른 함수들, 예를 들어, sin, cos, log 등 고등수학 관련 함수들은 math 라는 모듈에 정의되어 있다. operator 모듈 +, -, *, /, //, ** 등은 파이썬이 기본적으로 제공하는 함수들을 대신하는 기호들이다. 각 기호들에 해당하는 진짜 함수들은 operator 모듈에 정의되어 있다. 예를 들어, + 기호는 operator 모듈의 add 함수를 나타낸다. 기타 기호들의 함수들에 대해서는 아래 링크를 참조하면 된다. https://docs.python.org/2/library/operator.html operator 모듈에 있는 함수들의 이름을 직접 사용하려면 import operator 명령을 실행한 후에 operator.add(1,2) 형태로 실행하면 된다. dir(operator) 를 실행하면 operator 모듈에 어떤 함수들이 정의되어 있는지 확인할 수 있다 End of explanation dir(operator) Explanation: dir 함수를 이용하면 operator 모듈에 정의되어 있는 함수들의 목록을 확인할 수 있다. End of explanation import math math.sin(10) import math as m m.pi from math import sin, pi sin(10) pi math.cos(10) from math import * exp(1) Explanation: 모듈 사용하는 법 모듈을 사용하려면 해당 모듈을 import 해야 한다. 예를 들어, sin(10) 값을 구하기 위해 math 모듈을 임포트해야 하는데 임포트 방식은 크게 세 가지로 나뉜다. import math 명령을 실행한 후에 math.sin(10) 등 처럼 실행하면 된다. 모듈 이름을 임의로 고쳐서 임포트 할 수 있다: import math as m from math import sin 명령을 실행한 후에 sin(10)을 실행하면 된다. from math import * 명령을 실행한 후에 sin(10)을 실행하면 된다. 여기서 * 는 모든 함수를 의미한다. End of explanation dir(math) help(math.sqrt) Explanation: math 모듈에 들어있는 함수들을 종종 확인할 필요가 있다. 강의에서 가장 많이 사용하는 모듈 중에 하나이다. End of explanation
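To close the loop on the ideas above — docstrings, return values versus None, and module imports — here is one short combined example (my addition, written in English, not part of the original Korean lecture notes):

# Sketch: a documented function that uses the math module
import math

def circle_area(radius):
    """Return the area of a circle with the given radius, using math.pi."""
    return math.pi * radius ** 2

help(circle_area)      # prints the docstring, just like help(mysum) above
print(circle_area(2))  # 12.566..., the returned value (not None)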