Dataset columns: repo_name (string, 6-77 chars), path (string, 8-215 chars), license (string, 15 classes), cells (sequence), types (sequence).
statsmodels/statsmodels
examples/notebooks/quasibinomial.ipynb
bsd-3-clause
[ "Quasi-binomial regression\nThis notebook demonstrates using custom variance functions and non-binary data\nwith the quasi-binomial GLM family to perform a regression analysis using\na dependent variable that is a proportion.\nThe notebook uses the barley leaf blotch data that has been discussed in\nseveral textbooks. See below for one reference:\nhttps://support.sas.com/documentation/cdl/en/statug/63033/HTML/default/viewer.htm#statug_glimmix_sect016.htm", "import statsmodels.api as sm\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom io import StringIO", "The raw data, expressed as percentages. We will divide by 100\nto obtain proportions.", "raw = StringIO(\n \"\"\"0.05,0.00,1.25,2.50,5.50,1.00,5.00,5.00,17.50\n0.00,0.05,1.25,0.50,1.00,5.00,0.10,10.00,25.00\n0.00,0.05,2.50,0.01,6.00,5.00,5.00,5.00,42.50\n0.10,0.30,16.60,3.00,1.10,5.00,5.00,5.00,50.00\n0.25,0.75,2.50,2.50,2.50,5.00,50.00,25.00,37.50\n0.05,0.30,2.50,0.01,8.00,5.00,10.00,75.00,95.00\n0.50,3.00,0.00,25.00,16.50,10.00,50.00,50.00,62.50\n1.30,7.50,20.00,55.00,29.50,5.00,25.00,75.00,95.00\n1.50,1.00,37.50,5.00,20.00,50.00,50.00,75.00,95.00\n1.50,12.70,26.25,40.00,43.50,75.00,75.00,75.00,95.00\"\"\"\n)", "The regression model is a two-way additive model with\nsite and variety effects. The data are a full unreplicated\ndesign with 10 rows (sites) and 9 columns (varieties).", "df = pd.read_csv(raw, header=None)\ndf = df.melt()\ndf[\"site\"] = 1 + np.floor(df.index / 10).astype(int)\ndf[\"variety\"] = 1 + (df.index % 10)\ndf = df.rename(columns={\"value\": \"blotch\"})\ndf = df.drop(\"variable\", axis=1)\ndf[\"blotch\"] /= 100", "Fit the quasi-binomial regression with the standard variance\nfunction.", "model1 = sm.GLM.from_formula(\n \"blotch ~ 0 + C(variety) + C(site)\", family=sm.families.Binomial(), data=df\n)\nresult1 = model1.fit(scale=\"X2\")\nprint(result1.summary())", "The plot below shows that the default variance function is\nnot capturing the variance structure very well. Also note\nthat the scale parameter estimate is quite small.", "plt.clf()\nplt.grid(True)\nplt.plot(result1.predict(linear=True), result1.resid_pearson, \"o\")\nplt.xlabel(\"Linear predictor\")\nplt.ylabel(\"Residual\")", "An alternative variance function is mu^2 * (1 - mu)^2.", "class vf(sm.families.varfuncs.VarianceFunction):\n def __call__(self, mu):\n return mu ** 2 * (1 - mu) ** 2\n\n def deriv(self, mu):\n return 2 * mu - 6 * mu ** 2 + 4 * mu ** 3", "Fit the quasi-binomial regression with the alternative variance\nfunction.", "bin = sm.families.Binomial()\nbin.variance = vf()\nmodel2 = sm.GLM.from_formula(\"blotch ~ 0 + C(variety) + C(site)\", family=bin, data=df)\nresult2 = model2.fit(scale=\"X2\")\nprint(result2.summary())", "With the alternative variance function, the mean/variance relationship\nseems to capture the data well, and the estimated scale parameter is\nclose to 1.", "plt.clf()\nplt.grid(True)\nplt.plot(result2.predict(linear=True), result2.resid_pearson, \"o\")\nplt.xlabel(\"Linear predictor\")\nplt.ylabel(\"Residual\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
machinelearningnanodegree/stanford-cs231
solutions/pranay/assignment1/features.ipynb
mit
[ "Image features exercise\nComplete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.\nWe have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.\nAll of your work for this exercise will be done in this notebook.", "import random\nimport numpy as np\nfrom cs231n.data_utils import load_CIFAR10\nimport matplotlib.pyplot as plt\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading extenrnal modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2", "Load data\nSimilar to previous exercises, we will load CIFAR-10 data from disk.", "from cs231n.features import color_histogram_hsv, hog_feature\n\ndef get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):\n # Load the raw CIFAR-10 data\n cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'\n X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)\n \n # Subsample the data\n mask = range(num_training, num_training + num_validation)\n X_val = X_train[mask]\n y_val = y_train[mask]\n mask = range(num_training)\n X_train = X_train[mask]\n y_train = y_train[mask]\n mask = range(num_test)\n X_test = X_test[mask]\n y_test = y_test[mask]\n\n return X_train, y_train, X_val, y_val, X_test, y_test\n\nX_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()", "Extract Features\nFor each image we will compute a Histogram of Oriented\nGradients (HOG) as well as a color histogram using the hue channel in HSV\ncolor space. We form our final feature vector for each image by concatenating\nthe HOG and color histogram feature vectors.\nRoughly speaking, HOG should capture the texture of the image while ignoring\ncolor information, and the color histogram represents the color of the input\nimage while ignoring texture. As a result, we expect that using both together\nought to work better than using either alone. Verifying this assumption would\nbe a good thing to try for the bonus section.\nThe hog_feature and color_histogram_hsv functions both operate on a single\nimage and return a feature vector for that image. The extract_features\nfunction takes a set of images and a list of feature functions and evaluates\neach feature function on each image, storing the results in a matrix where\neach column is the concatenation of all feature vectors for a single image.", "from cs231n.features import *\n\nnum_color_bins = 10 # Number of bins in the color histogram\nfeature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]\nX_train_feats = extract_features(X_train, feature_fns, verbose=True)\nX_val_feats = extract_features(X_val, feature_fns)\nX_test_feats = extract_features(X_test, feature_fns)\n\n# Preprocessing: Subtract the mean feature\nmean_feat = np.mean(X_train_feats, axis=0, keepdims=True)\nX_train_feats -= mean_feat\nX_val_feats -= mean_feat\nX_test_feats -= mean_feat\n\n# Preprocessing: Divide by standard deviation. 
This ensures that each feature\n# has roughly the same scale.\nstd_feat = np.std(X_train_feats, axis=0, keepdims=True)\nX_train_feats /= std_feat\nX_val_feats /= std_feat\nX_test_feats /= std_feat\n\n# Preprocessing: Add a bias dimension\nX_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])\nX_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])\nX_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])", "Train SVM on features\nUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.", "# Use the validation set to tune the learning rate and regularization strength\n\nfrom cs231n.classifiers.linear_classifier import LinearSVM\n\nlearning_rates = [1e-9, 1e-8, 1e-7]\nregularization_strengths = [1e5, 1e6, 1e7]\n\nresults = {}\nbest_val = -1\nbest_svm = None\n\npass\n################################################################################\n# TODO: #\n# Use the validation set to set the learning rate and regularization strength. #\n# This should be identical to the validation that you did for the SVM; save #\n# the best trained classifer in best_svm. You might also want to play #\n# with different numbers of bins in the color histogram. If you are careful #\n# you should be able to get accuracy of near 0.44 on the validation set. #\n################################################################################\npass\n################################################################################\n# END OF YOUR CODE #\n################################################################################\n\n# Print out results.\nfor lr, reg in sorted(results):\n train_accuracy, val_accuracy = results[(lr, reg)]\n print 'lr %e reg %e train accuracy: %f val accuracy: %f' % (\n lr, reg, train_accuracy, val_accuracy)\n \nprint 'best validation accuracy achieved during cross-validation: %f' % best_val\n\n# Evaluate your trained SVM on the test set\ny_test_pred = best_svm.predict(X_test_feats)\ntest_accuracy = np.mean(y_test == y_test_pred)\nprint test_accuracy\n\n# An important way to gain intuition about how an algorithm works is to\n# visualize the mistakes that it makes. In this visualization, we show examples\n# of images that are misclassified by our current system. The first column\n# shows images that our system labeled as \"plane\" but whose true label is\n# something other than \"plane\".\n\nexamples_per_class = 8\nclasses = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']\nfor cls, cls_name in enumerate(classes):\n idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]\n idxs = np.random.choice(idxs, examples_per_class, replace=False)\n for i, idx in enumerate(idxs):\n plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)\n plt.imshow(X_test[idx].astype('uint8'))\n plt.axis('off')\n if i == 0:\n plt.title(cls_name)\nplt.show()", "Inline question 1:\nDescribe the misclassification results that you see. Do they make sense?\nNeural Network on image features\nEarlier in this assigment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. \nFor completeness, we should also try training a neural network on image features. 
This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.", "print X_train_feats.shape\n\nfrom cs231n.classifiers.neural_net import TwoLayerNet\n\ninput_dim = X_train_feats.shape[1]\nhidden_dim = 500\nnum_classes = 10\n\nnet = TwoLayerNet(input_dim, hidden_dim, num_classes)\nbest_net = None\n\n################################################################################\n# TODO: Train a two-layer neural network on image features. You may want to #\n# cross-validate various parameters as in previous sections. Store your best #\n# model in the best_net variable. #\n################################################################################\npass\n################################################################################\n# END OF YOUR CODE #\n################################################################################\n\n# Run your neural net classifier on the test set. You should be able to\n# get more than 55% accuracy.\n\ntest_acc = (net.predict(X_test_feats) == y_test).mean()\nprint test_acc", "Bonus: Design your own features!\nYou have seen that simple image features can improve classification performance. So far we have tried HOG and color histograms, but other types of features may be able to achieve even better classification performance.\nFor bonus points, design and implement a new type of feature and use it for image classification on CIFAR-10. Explain how your feature works and why you expect it to be useful for image classification. Implement it in this notebook, cross-validate any hyperparameters, and compare its performance to the HOG + Color histogram baseline.\nBonus: Do something extra!\nUse the material and code we have presented in this assignment to do something interesting. Was there another question we should have asked? Did any cool ideas pop into your head as you were working on the assignment? This is your chance to show off!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jeffzhengye/pylearn
tensorflow_learning/tf2/notebooks/.ipynb_checkpoints/transfer_learning-中文-checkpoint-checkpoint.ipynb
unlicense
[ "迁移学习与微调,Transfer learning & fine-tuning\nAuthor: fchollet<br>\nDate created: 2020/04/15<br>\nLast modified: 2020/05/12<br>\nDescription: Complete guide to transfer learning & fine-tuning in Keras. <br>\n翻译: 叶正\n设置,Setup", "import numpy as np\nimport tensorflow as tf\nfrom tensorflow import keras", "介绍,Introduction\nTransfer learning consists of taking features learned on one problem, and\nleveraging them on a new, similar problem. For instance, features from a model that has\nlearned to identify racoons may be useful to kick-start a model meant to identify\n tanukis.\n迁移学习 就是把从一个问题中学习到的特征使用新的、类似的问题。例如,一个识别浣熊的模型中学到的特征,可能用来初始化识别狸猫的模型中是有用的。\nTransfer learning is usually done for tasks where your dataset has too little data to\n train a full-scale model from scratch.\n迁移学习通常在当你只有较少的数据难以从头开始训练一个模型的时候使用。\nThe most common incarnation of transfer learning in the context of deep learning is the\n following workflow:\n深度学习中最典型的迁移学习工作流如下:\n\n从已经训练好的模型中取出部分层\n固定这些层的参数,在后面的训练中不破坏之前学到的信息\n在这些固定的层上面,添加一些新的、可以训练的层。这些层将学习如何把之前学到的老的特征变成在新的数据集上的预测。\n\n在新的数据集上,开始训练新的层。\n\n\nTake layers from a previously trained model.\n\nFreeze them, so as to avoid destroying any of the information they contain during\n future training rounds.\nAdd some new, trainable layers on top of the frozen layers. They will learn to turn\n the old features into predictions on a new dataset.\nTrain the new layers on your dataset.\n\nA last, optional step, is fine-tuning, which consists of unfreezing the entire\nmodel you obtained above (or part of it), and re-training it on the new data with a\nvery low learning rate. This can potentially achieve meaningful improvements, by\n incrementally adapting the pretrained features to the new data.\n最后,也是可选的步骤,就是所谓的微调fine-tuning。这里包括unfreezing整个模型(或者部分模型),并且使用非常小的学习率重新训练。通过逐步地在新的数据上调节预训练好的特征,有潜力获得可观的提高。\nFirst, we will go over the Keras trainable API in detail, which underlies most\n transfer learning & fine-tuning workflows.\n首先,我们将复习keras trainable API,这套api是大多数迁移学习和微调工作量的基础。\nThen, we'll demonstrate the typical workflow by taking a model pretrained on the\nImageNet dataset, and retraining it on the Kaggle \"cats vs dogs\" classification\n dataset.\n然后,展示典型的工作量:选取一个从ImageNet 数据集上训练好的模型,然后在Kaggle的\"cats vs dogs\"分类数据集上重新训练\n该实例从下列例子中修改的来。\nDeep Learning with Python\n and the 2016 blog post\n\"building powerful image classification models using very little\n data\".\nFreezing layers: understanding the trainable attribute\n冻住一些层:理解trainable属性\nLayers & models have three weight attributes:\n层和模型有三个权重属性:\n\nweights :层所有权重变量的列表 list\ntrainable_weights : 训练过程中,以上中可以更新(通过梯度下降)来最小化损失函数。\n\nnon_trainable_weights: 以上中不能被训练的固定参数。\n通常,这些变量在前向传播过程中被更新。\n\n\nweights is the list of all weights variables of the layer.\n\ntrainable_weights is the list of those that are meant to be updated (via gradient\n descent) to minimize the loss during training.\nnon_trainable_weights is the list of those that aren't meant to be trained.\n Typically they are updated by the model during the forward pass.\n\nExample: the Dense layer has 2 trainable weights (kernel & bias)\n例子:Dense层有2个可训练的权重(kernel & bias)", "layer = keras.layers.Dense(3)\nlayer.build((None, 4)) # Create the weights\n\nprint(\"weights:\", len(layer.weights), layer.weights)\nprint(\"trainable_weights:\", len(layer.trainable_weights),layer.trainable_weights)\nprint(\"non_trainable_weights:\", len(layer.non_trainable_weights))", 
"通常,所有的权重都是可以训练的权重。keras自动的layer中只有BatchNormalization有不可训练的权重。BatchNormalization使用不可训练的权重来跟踪训练过程中输入的mean和variance。\n学习在自定义layers,如何使用不可训练权重,请看\nguide to writing new layers from scratch.\nIn general, all weights are trainable weights. The only built-in layer that has\nnon-trainable weights is the BatchNormalization layer. It uses non-trainable weights\n to keep track of the mean and variance of its inputs during training.\nTo learn how to use non-trainable weights in your own custom layers, see the\nguide to writing new layers from scratch.\nExample: the BatchNormalization layer has 2 trainable weights and 2 non-trainable\n weights\n例子:BatchNormalization层有3个可训练权重和2个不可训练权重", "layer = keras.layers.BatchNormalization()\nlayer.build((None, 4)) # Create the weights\n\nprint(\"weights:\", len(layer.weights))\nprint(\"trainable_weights:\", len(layer.trainable_weights))\nprint(\"non_trainable_weights:\", len(layer.non_trainable_weights))", "Layers & models also feature a boolean attribute trainable. Its value can be changed.\nSetting layer.trainable to False moves all the layer's weights from trainable to\nnon-trainable. This is called \"freezing\" the layer: the state of a frozen layer won't\nbe updated during training (either when training with fit() or when training with\n any custom loop that relies on trainable_weights to apply gradient updates).\nExample: setting trainable to False\n层和模型都有一个布尔型的属性trainable,其值可以改变。把layer.trainable的值设置为False就可以把该层所有权重都变成不可训练。该过程我们称为冷冻\"freezing\"该层:该层的状态在训练过程中不会改变。\n例子:设置trainable to `False", "layer = keras.layers.Dense(3)\nlayer.build((None, 4)) # Create the weights\nlayer.trainable = False # Freeze the layer\n\nprint(\"weights:\", len(layer.weights))\nprint(\"trainable_weights:\", len(layer.trainable_weights))\nprint(\"non_trainable_weights:\", len(layer.non_trainable_weights))", "When a trainable weight becomes non-trainable, its value is no longer updated during\n training.\n当可训练的权重被设置成不可训练non-trainable,其值在训练过程中不再可以更新。", "# Make a model with 2 layers\nlayer1 = keras.layers.Dense(3, activation=\"relu\")\nlayer2 = keras.layers.Dense(3, activation=\"sigmoid\")\nmodel = keras.Sequential([keras.Input(shape=(3,)), layer1, layer2])\n\n# Freeze the first layer\nlayer1.trainable = False\n\n# Keep a copy of the weights of layer1 for later reference\ninitial_layer1_weights_values = layer1.get_weights()\n\n# Train the model\nmodel.compile(optimizer=\"adam\", loss=\"mse\")\nmodel.fit(np.random.random((2, 3)), np.random.random((2, 3)))\n\n# Check that the weights of layer1 have not changed during training\nfinal_layer1_weights_values = layer1.get_weights()\nnp.testing.assert_allclose(\n initial_layer1_weights_values[0], final_layer1_weights_values[0]\n)\nnp.testing.assert_allclose(\n initial_layer1_weights_values[1], final_layer1_weights_values[1]\n)", "不要混淆layer.trainable属性和layer.__call__()中参数training ,后者是控制该层在推理或者训练模式下是否运行前向过程。\nDo not confuse the layer.trainable attribute with the argument training in\nlayer.__call__() (which controls whether the layer should run its forward pass in\n inference mode or training mode). 
For more information, see the\nKeras FAQ.\nRecursive setting of the trainable attribute 递归地设置 trainable\nIf you set trainable = False on a model or on any layer that has sublayers,\nall children layers become non-trainable as well.\n如果我们再一个模型或者任何一个layer上设置trainable = False,则其所有子layers 也会变成non-trainable\nExample:", "inner_model = keras.Sequential(\n [\n keras.Input(shape=(3,)),\n keras.layers.Dense(3, activation=\"relu\"),\n keras.layers.Dense(3, activation=\"relu\"),\n ]\n)\n\nmodel = keras.Sequential(\n [keras.Input(shape=(3,)), inner_model, keras.layers.Dense(3, activation=\"sigmoid\"),]\n)\n\nmodel.trainable = False # Freeze the outer model\n\nassert inner_model.trainable == False # All layers in `model` are now frozen\nassert inner_model.layers[0].trainable == False # `trainable` is propagated recursively", "典型的迁移学习流程, The typical transfer-learning workflow\nThis leads us to how a typical transfer learning workflow can be implemented in Keras:\n\nInstantiate a base model and load pre-trained weights into it.\nFreeze all layers in the base model by setting trainable = False.\nCreate a new model on top of the output of one (or several) layers from the base\n model.\nTrain your new model on your new dataset.\n\nkeras中典型迁移学习流程实现如下:\n\n生成一个基础模型并加载预训练好的权重\n冷冻该模型中所有参数:trainable = False\n在该基础模型的输出层(或者几个不同的输出层)基础上创建新的模型\n在新的数据集上训练新的模型。\n\nNote that an alternative, more lightweight workflow could also be:\n\nInstantiate a base model and load pre-trained weights into it.\nRun your new dataset through it and record the output of one (or several) layers\n from the base model. This is called feature extraction.\nUse that output as input data for a new, smaller model.\n\n注意一个可替换且更轻量级的流程如下:\n\n生成一个基础模型并加载预训练好的权重\n把新数据放入该模型,然后记录该模型的输出(或者几个不同的输出层的输出)。这个过程称作为 特征抽取\n使用以上输出为一个新的且更小的模型的输入\n\nA key advantage of that second workflow is that you only run the base model once on\n your data, rather than once per epoch of training. So it's a lot faster & cheaper.\n该流程的一个关键好处是:你只需要运行base模型一次,而不需要每个epoch的训练中都运行。所以将会更快和低成本。\nAn issue with that second workflow, though, is that it doesn't allow you to dynamically\nmodify the input data of your new model during training, which is required when doing\ndata augmentation, for instance. Transfer learning is typically used for tasks when\nyour new dataset has too little data to train a full-scale model from scratch, and in\nsuch scenarios data augmentation is very important. So in what follows, we will focus\n on the first workflow.\n但是第二个流程的问题是,在训练过程中我们无法动态的修改模型的输入,也就是说我们无法做数据的增强等操作。\n 迁移学习一般用在只有很少数据可用于训练的任务中,因为数据太少我们无法从头开始训练一个好的模型,而且该条件下旺旺数据增强是非常重要的。接下来,我们主要关注第一个流程。\nHere's what the first workflow looks like in Keras:\nFirst, instantiate a base model with pre-trained weights.\n以下是第一个流程在Keras中使用的样例:\n首先,使用预训练好的权重实例化一个基础模型:\npython\nbase_model = keras.applications.Xception(\n weights='imagenet', # Load weights pre-trained on ImageNet.\n input_shape=(150, 150, 3),\n include_top=False) # Do not include the ImageNet classifier at the top.\nThen, freeze the base model.\n然后,冷冻该模型。\npython\nbase_model.trainable = False\nCreate a new model on top.\n在旧模型上,创建一个新的模型:\n```python\ninputs = keras.Input(shape=(150, 150, 3))\nWe make sure that the base_model is running in inference mode here,\nby passing training=False. 
This is important for fine-tuning, as you will\nlearn in a few paragraphs.\nx = base_model(inputs, training=False)\nConvert features of shape base_model.output_shape[1:] to vectors\nx = keras.layers.GlobalAveragePooling2D()(x)\nA Dense classifier with a single unit (binary classification)\noutputs = keras.layers.Dense(1)(x)\nmodel = keras.Model(inputs, outputs)\n```\nTrain the model on new data.\n在新数据上训练该模型:\npython\nmodel.compile(optimizer=keras.optimizers.Adam(),\n loss=keras.losses.BinaryCrossentropy(from_logits=True),\n metrics=[keras.metrics.BinaryAccuracy()])\nmodel.fit(new_dataset, epochs=20, callbacks=..., validation_data=...)\n微调,Fine-tuning\nOnce your model has converged on the new data, you can try to unfreeze all or part of\n the base model and retrain the whole model end-to-end with a very low learning rate.\n一旦模型在新的数据上收敛了,我们就可以解冻所有或者部分base模型,并使用非常小的学习率端到端地重新训练整个模型。\nThis is an optional last step that can potentially give you incremental improvements.\n It could also potentially lead to quick overfitting -- keep that in mind.\n最后一步是可选的,但有潜力进一步提高模型的性能,也可以导致overfitting过学习。\nIt is critical to only do this step after the model with frozen layers has been\ntrained to convergence. If you mix randomly-initialized trainable layers with\ntrainable layers that hold pre-trained features, the randomly-initialized layers will\ncause very large gradient updates during training, which will destroy your pre-trained\n features.\n重要的是仅当带冷冻层的模型训练到收敛再进行这一步。\n如果把随机初始化的可训练的层与保存预训练特征的可训练层会在一起的话,随机初始化的层会导致非常大的梯度更新,\n从而破坏预训练的特征。\nIt's also critical to use a very low learning rate at this stage, because\nyou are training a much larger model than in the first round of training, on a dataset\n that is typically very small.\nAs a result, you are at risk of overfitting very quickly if you apply large weight\n updates. Here, you only want to readapt the pretrained weights in an incremental way.\n另外一个关键点是,微调阶段使用非常小的学习率,因为该阶段要训练的模型比第一阶段大不少而且数据集较小。\n如果使用较大的权重更新,过学习的风险很大。这里,我们只应该逐步的改造预训练好的权重。\nThis is how to implement fine-tuning of the whole base model:\n以下是如何实现对整个基础模型的微调\n```python\nUnfreeze the base model,解冻基础模型\nbase_model.trainable = True\nIt's important to recompile your model after you make any changes\nto the trainable attribute of any inner layer, so that your changes\nare take into account\n重新编译模型,使得更改生效\nmodel.compile(optimizer=keras.optimizers.Adam(1e-5), # Very low learning rate\n loss=keras.losses.BinaryCrossentropy(from_logits=True),\n metrics=[keras.metrics.BinaryAccuracy()])\nTrain end-to-end. Be careful to stop before you overfit!\n端到端训练,overfit前停止训练\nmodel.fit(new_dataset, epochs=10, callbacks=..., validation_data=...)\n```\nImportant note about compile() and trainable\n重要注意:compile() 和 trainable\nCalling compile() on a model is meant to \"freeze\" the behavior of that model. This\n implies that the trainable\nattribute values at the time the model is compiled should be preserved throughout the\n lifetime of that model,\nuntil compile is called again. Hence, if you change any trainable value, make sure\n to call compile() again on your\nmodel for your changes to be taken into account.\n调用compile()意味着其对应的模型的行为就固定了。这意味着trainable属性在模型编译后就不能改了,除非compile被重新调用。因此,修改trainable 后必须重新编译。\nImportant notes about BatchNormalization layer\nBatchNormalization层的重要注意事项\nMany image models contain BatchNormalization layers. That layer is a special case on\n every imaginable count. 
Here are a few things to keep in mind.\n很多图像模型包含BatchNormalization层。\n 以下几点需要注意。\n\nBatchNormalization 包含2个在训练过程中需要更新得non-trainable的权重。这些事跟踪输入的均值和方差。\n\n当bn_layer.trainable = False时, BatchNormalization层将运行推理模式,并且不会更新mean & variance。\n 这跟其它层基本都不一样,因为权重可训练性与推理/训练模型是两个正交的概念。\nweight trainability & inference/training modes are two orthogonal concepts.\n但是这两个概念在BatchNormalization中纠缠在一起\n\n\n当我们解冻一个包含BatchNormalization层的模型来做微调时,需要记住在推理模式时设置training=False。\n否则学习到的模型整个都会被破坏。\n\n\n在该指南的的最后面,将在一个端到端的例子中实践展示这种模式。\n\nBatchNormalization contains 2 non-trainable weights that get updated during\ntraining. These are the variables tracking the mean and variance of the inputs.\nWhen you set bn_layer.trainable = False, the BatchNormalization layer will\nrun in inference mode, and will not update its mean & variance statistics. This is not\nthe case for other layers in general, as\nweight trainability & inference/training modes are two orthogonal concepts.\nBut the two are tied in the case of the BatchNormalization layer.\nWhen you unfreeze a model that contains BatchNormalization layers in order to do\nfine-tuning, you should keep the BatchNormalization layers in inference mode by\n passing training=False when calling the base model.\nOtherwise the updates applied to the non-trainable weights will suddenly destroy\nwhat the model has learned.\n\nYou'll see this pattern in action in the end-to-end example at the end of this guide.\nTransfer learning & fine-tuning with a custom training loop\n迁移学习&微调中自定义训练loop\nIf instead of fit(), you are using your own low-level training loop, the workflow\nstays essentially the same. You should be careful to only take into account the list\n model.trainable_weights when applying gradient updates:\n如果不使用fit(),需要使用自定义的底层的训练循环,该流程本质上还是一样。\n你需要非常小心,在应用梯度更新时,仅对 model.trainable_weights列表中的更新。\n```python\nCreate base model\nbase_model = keras.applications.Xception(\n weights='imagenet',\n input_shape=(150, 150, 3),\n include_top=False)\nFreeze base model\nbase_model.trainable = False\nCreate new model on top.\ninputs = keras.Input(shape=(150, 150, 3))\nx = base_model(inputs, training=False)\nx = keras.layers.GlobalAveragePooling2D()(x)\noutputs = keras.layers.Dense(1)(x)\nmodel = keras.Model(inputs, outputs)\nloss_fn = keras.losses.BinaryCrossentropy(from_logits=True)\noptimizer = keras.optimizers.Adam()\nIterate over the batches of a dataset.\nfor inputs, targets in new_dataset:\n # Open a GradientTape.\n with tf.GradientTape() as tape:\n # Forward pass.\n predictions = model(inputs)\n # Compute the loss value for this batch.\n loss_value = loss_fn(targets, predictions)\n# Get gradients of loss wrt the *trainable* weights.\ngradients = tape.gradient(loss_value, model.trainable_weights)\n# Update the weights of the model.\noptimizer.apply_gradients(zip(gradients, model.trainable_weights))\n\n```\nLikewise for fine-tuning.\n微调也类似\nAn end-to-end example: fine-tuning an image classification model on a cats vs. dogs\n端到端例子:在cats vs. dogs数据集上微调图像分类模型\ndataset 数据集\nTo solidify these concepts, let's walk you through a concrete end-to-end transfer\nlearning & fine-tuning example. We will load the Xception model, pre-trained on\n ImageNet, and use it on the Kaggle \"cats vs. dogs\" classification dataset.\n为了巩固这些概念,我们一起过一个具体的端到端的迁移学习和微调的例子。本例子中使用Xception模型,在ImageNet上训练的,并且使用它来解决kaggle的\"cats vs. dogs\"的数据集分类问题。\nGetting the data\n获取数据\nFirst, let's fetch the cats vs. dogs dataset using TFDS. 
If you have your own dataset,\nyou'll probably want to use the utility\ntf.keras.preprocessing.image_dataset_from_directory to generate similar labeled\n dataset objects from a set of images on disk filed into class-specific folders.\n首先,使用TFDS获取该数据集。如果你已经有了该数据集,可以使用tf.keras.preprocessing.image_dataset_from_directory来从硬盘上图像集中生产类似的带标签的数据集对象。\nTransfer learning is most useful when working with very small datasets. To keep our\ndataset small, we will use 40% of the original training data (25,000 images) for\n training, 10% for validation, and 10% for testing.\n迁移学习在数据集非常小的时候是最有用的。为了保持我们的数据集较小,将使用40%的原始数据 (25,000 images)来训练,10% 为验证集,10%为测试集。", "import tensorflow_datasets as tfds\n\ntfds.disable_progress_bar()\n\ntrain_ds, validation_ds, test_ds = tfds.load(\n \"cats_vs_dogs\",\n # Reserve 10% for validation and 10% for test\n split=[\"train[:40%]\", \"train[40%:50%]\", \"train[50%:60%]\"],\n as_supervised=True, # Include labels\n)\n\nprint(\"Number of training samples: %d\" % tf.data.experimental.cardinality(train_ds))\nprint(\n \"Number of validation samples: %d\" % tf.data.experimental.cardinality(validation_ds)\n)\nprint(\"Number of test samples: %d\" % tf.data.experimental.cardinality(test_ds))", "These are the first 9 images in the training dataset -- as you can see, they're all\n different sizes.\n以下展示训练集中前9张图片,都不一样。", "import matplotlib.pyplot as plt\n\nplt.figure(figsize=(10, 10))\nfor i, (image, label) in enumerate(train_ds.take(9)):\n ax = plt.subplot(3, 3, i + 1)\n plt.imshow(image)\n plt.title(int(label))\n plt.axis(\"off\")", "We can also see that label 1 is \"dog\" and label 0 is \"cat\".\n标签为1的是 \"dog\",0是\"cat\".\nStandardizing the data\n数据标准化\nOur raw images have a variety of sizes. In addition, each pixel consists of 3 integer\nvalues between 0 and 255 (RGB level values). This isn't a great fit for feeding a\n neural network. We need to do 2 things:\n\nStandardize to a fixed image size. We pick 150x150.\nNormalize pixel values between -1 and 1. We'll do this using a Normalization layer as\n part of the model itself.\n\n原始图片有着不同的大小。此外,每个像素点包含3个0-255的整型数值(RGB)。不便于直接放入神经网络训练,我们需要做一下两件事:\n\n标准化每张图片到固定大小。本例子选择150*150\n规范化像素值到-1到1之间,我们使用Normalization层来实现这个,该层也作为模型的一层。\n\nIn general, it's a good practice to develop models that take raw data as input, as\nopposed to models that take already-preprocessed data. The reason being that, if your\nmodel expects preprocessed data, any time you export your model to use it elsewhere\n(in a web browser, in a mobile app), you'll need to reimplement the exact same\npreprocessing pipeline. This gets very tricky very quickly. 
So we should do the least\n possible amount of preprocessing before hitting the model.\n通常来说,相对使用已经处理好的数据作为输入,使用原始数据作为输入input来开发模型是很好的做法。\n原因是在不同应用环境(app,浏览器),我们不必重新实现预处理过程。\nHere, we'll do image resizing in the data pipeline (because a deep neural network can\nonly process contiguous batches of data), and we'll do the input value scaling as part\n of the model, when we create it.\n这里,我们将在数据pipeline中实现图像resizing大小调整(因为深度神经网络只能处理了连续小批的数据),\n并且将把输入数据缩放作为模型的一部分。\nLet's resize images to 150x150:\n调整图像大小为150x150:", "size = (150, 150)\n\ntrain_ds = train_ds.map(lambda x, y: (tf.image.resize(x, size), y))\nvalidation_ds = validation_ds.map(lambda x, y: (tf.image.resize(x, size), y))\ntest_ds = test_ds.map(lambda x, y: (tf.image.resize(x, size), y))", "Besides, let's batch the data and use caching & prefetching to optimize loading speed.\n此外,把数据准备成batch小批,并使用缓冲和预读取来优化读取速度。", "batch_size = 32\n\ntrain_ds = train_ds.cache().batch(batch_size).prefetch(buffer_size=10)\nvalidation_ds = validation_ds.cache().batch(batch_size).prefetch(buffer_size=10)\ntest_ds = test_ds.cache().batch(batch_size).prefetch(buffer_size=10)", "Using random data augmentation\n使用随机数据增强\nWhen you don't have a large image dataset, it's a good practice to artificially\n introduce sample diversity by applying random yet realistic transformations to\nthe training images, such as random horizontal flipping or small random rotations. This\nhelps expose the model to different aspects of the training data while slowing down\n overfitting.\n当没有大的数据集时,人工地通过使用随机但现实的变换原始训练数据,从而引入样本的多样性是一个好的办法。\n例如随机水平翻转或小的随机旋转。这有助于把训练数据的不同方面暴露给模型,从而减慢过学习。", "from tensorflow import keras\nfrom tensorflow.keras import layers\n\ndata_augmentation = keras.Sequential(\n [\n layers.experimental.preprocessing.RandomFlip(\"horizontal\"),\n layers.experimental.preprocessing.RandomRotation(0.1),\n ]\n)", "Let's visualize what the first image of the first batch looks like after various random\n transformations:\n下面可视化第一个batch中的第一张图片随机变换后的样子:", "import numpy as np\n\nfor images, labels in train_ds.take(1):\n plt.figure(figsize=(10, 10))\n first_image = images[0]\n for i in range(9):\n ax = plt.subplot(3, 3, i + 1)\n augmented_image = data_augmentation(\n tf.expand_dims(first_image, 0), training=True\n )\n plt.imshow(augmented_image[0].numpy().astype(\"int32\"))\n plt.title(int(labels[i]))\n plt.axis(\"off\")", "Build a model\n构建模型\nNow let's built a model that follows the blueprint we've explained earlier.\n现在开始构建模型,如前面所述。\nNote that:\n\nWe add a Normalization layer to scale input values (initially in the [0, 255]\n range) to the [-1, 1] range.\nWe add a Dropout layer before the classification layer, for regularization.\nWe make sure to pass training=False when calling the base model, so that\nit runs in inference mode, so that batchnorm statistics don't get updated\neven after we unfreeze the base model for fine-tuning.\n\n注意:\n\n我们需要增加一个Normalization层把输入数值缩放到[-1, 1]\n在分类层中增加Dropout层进行规范化\n调用基础模型时确保传入training=False,因为此刻是推理模式批量。这样规范化统计量不会被更新,即使当我们解冻基础模型进行微调时也一样。", "base_model = keras.applications.Xception(\n weights=\"imagenet\", # Load weights pre-trained on ImageNet.\n input_shape=(150, 150, 3),\n include_top=False,\n) # Do not include the ImageNet classifier at the top.\n\n# Freeze the base_model\nbase_model.trainable = False\n\n# Create new model on top\ninputs = keras.Input(shape=(150, 150, 3))\nx = data_augmentation(inputs) # Apply random data augmentation\n\n# Pre-trained Xception weights requires that input be normalized\n# from (0, 255) to a range (-1., +1.), 
the normalization layer\n# does the following, outputs = (inputs - mean) / sqrt(var)\nnorm_layer = keras.layers.experimental.preprocessing.Normalization()\nmean = np.array([127.5] * 3)\nvar = mean ** 2\n# Scale inputs to [-1, +1]\nx = norm_layer(x)\nnorm_layer.set_weights([mean, var])\n\n# The base model contains batchnorm layers. We want to keep them in inference mode\n# when we unfreeze the base model for fine-tuning, so we make sure that the\n# base_model is running in inference mode here.\nx = base_model(x, training=False)\nx = keras.layers.GlobalAveragePooling2D()(x)\nx = keras.layers.Dropout(0.2)(x) # Regularize with dropout\noutputs = keras.layers.Dense(1)(x)\nmodel = keras.Model(inputs, outputs)\n\nmodel.summary()", "Train the top layer\n训练最上面的层", "model.compile(\n optimizer=keras.optimizers.Adam(),\n loss=keras.losses.BinaryCrossentropy(from_logits=True),\n metrics=[keras.metrics.BinaryAccuracy()],\n)\n\nepochs = 20\nmodel.fit(train_ds, epochs=epochs, validation_data=validation_ds)", "Do a round of fine-tuning of the entire model\n微调整个模型一轮\nFinally, let's unfreeze the base model and train the entire model end-to-end with a low\n learning rate.\n最后,解冻基础模型并且使用较小的学习率端到端训练整个模型\nImportantly, although the base model becomes trainable, it is still running in\ninference mode since we passed training=False when calling it when we built the\nmodel. This means that the batch normalization layers inside won't update their batch\nstatistics. If they did, they would wreck havoc on the representations learned by the\n model so far.\n重要的是,尽管基础模型变得可以训练了,它任然是运行在推理模式,因为构建模型时传入了 training=False 。\n内部的batch normalization层将不会更新batch统计量。如果更新了,将会破坏模型到目前为止所学习到的表示。", "# Unfreeze the base_model. Note that it keeps running in inference mode\n# since we passed `training=False` when calling it. This means that\n# the batchnorm layers will not update their batch statistics.\n# This prevents the batchnorm layers from undoing all the training\n# we've done so far.\nbase_model.trainable = True\nmodel.summary()\n\nmodel.compile(\n optimizer=keras.optimizers.Adam(1e-5), # Low learning rate\n loss=keras.losses.BinaryCrossentropy(from_logits=True),\n metrics=[keras.metrics.BinaryAccuracy()],\n)\n\nepochs = 10\nmodel.fit(train_ds, epochs=epochs, validation_data=validation_ds)", "After 10 epochs, fine-tuning gains us a nice improvement here.\n训练10个epochs后,微调获得非常好的提高。" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
linsalrob/PyFBA
iPythonNotebooks/PATRIC to FBA.ipynb
mit
[ "How to create and run a gap-filled FBA from PATRIC\nThe PATRIC (the Pathosystems Resource Integration Center) contains the best collection of well annotated genomes. They also happen to have been annotated by RAST, and so we should be able to use those integrations directly.\nHere we'll walk through taking a genome from PATRIC, building a model, and running it. PATRIC also has model reconstruction built in, but when I tried it (05/24/16) it was not working.\nAs usual, we'll start by loading some modules that we'll need for our analysis.", "import sys\nimport os\nimport copy\nimport PyFBA\nimport re\n\nimport inspect\ninspect.getfile(PyFBA)", "Find a genome and download the annotations\nYou need to find your genome in PATRIC and download the annotations.\nOnce you have identified the genome you would like to build the model for, choose Feature Table from the menu bar:\n<img src=\"img/patric_ft.png\">\nNext, choose Download and save as a text file (.txt). \n<img src=\"img/patric_dl.png\">\nThat will save a file called FeatureTable.txt to your Downloads location. That file has the following columns:\n| Genome | Genome ID | Accession | PATRIC ID | RefSeq Locus Tag | Alt Locus Tag | Feature ID | \n| Annotation | Feature Type | Start | End | Length | Strand | FIGfam ID |\n| PATRIC genus-specific families (PLfams) | PATRIC cross-genus families (PGfams) | Protein ID | AA Length | Gene Symbol | Product | GO\nThe key columns are PATRIC ID (Column 3) and Product (Column 19) [Column numbers are 0 based!]\nNow that we know that, we need to convert these feature names into functional roles. The key here is to split on adjoiners, such as ' / ', ' # ', and ' @ '.", "assigned_functions = {}\nwith open(os.path.join('workspace/Citrobacter_sedlakii_genome_features.txt'), 'r') as f:\n for l in f:\n p=l.strip().split(\"\\t\")\n assigned_functions[p[3]]=PyFBA.parse.roles_of_function(p[19])\nroles = set([i[0] for i in [list(j) for j in assigned_functions.values()]])\nprint(\"There are {} unique roles in this genome\".format(len(roles)))", "Next, we convert those roles to reactions. We start with a dict of roles and reactions, but we only need a list of unique reactions, so we convert the keys to a set.", "roles_to_reactions = PyFBA.filters.roles_to_reactions(roles, organism_type=\"Gram_Negative\", verbose=False)", "If you toggle verbose=True, you will see that there are a lot of roles that we skip, even though we have an EC number for them: for whatever reason, the annotation is not quite right. We can check for those too, because our model seed parsed data has EC numbers with reactions.", "# ecr2r = PyFBA.filters.roles_to_ec_reactions(roles, organism_type=\"Gram_Negative\", verbose=False)\necr2r = set()", "We combine roles_to_reactions and ecr2r and figure out what the unique set of reactions is for our genome.", "roles_to_reactions.update(ecr2r)\nreactions_to_run = set()\nfor role in roles_to_reactions:\n reactions_to_run.update(roles_to_reactions[role])\nprint(\"There are {}\".format(len(reactions_to_run)) +\n \" unique reactions associated with this genome\".format(len(reactions_to_run)))", "Read all the reactions and compounds in our database\nWe read all the reactions, compounds, and enzymes in the ModelSEEDDatabase into three data structures. 
Note, the first time you call this it is a bit slow as it has to parse the files, but if we've parsed them once, we don't need to do it again!\nWe modify the reactions specifically for Gram negative models (there are also options for Gram positive models, Mycobacterial models, general microbial models, and plant models).", "compounds, reactions, enzymes = \\\n PyFBA.parse.model_seed.compounds_reactions_enzymes('gramnegative')\nprint(f\"There are {len(compounds):,} compounds, {len(reactions):,} reactions, and {len(enzymes):,} enzymes in total\")\n\nfor r in reactions:\n for c in reactions[r].all_compounds():\n if c.uptake_secretion:\n print(f\"US: {c}\")", "Update reactions to run, making sure that all reactions are in the list!\nThere are some reactions that come from functional roles that do not appear in the reactions list. We're working on tracking these down, but for now we just check that all reaction IDs in reactions_to_run are in reactions, too.", "tempset = set()\nfor r in reactions_to_run:\n if r in reactions:\n tempset.add(r)\n else:\n sys.stderr.write(\"Reaction ID {} is not in our reactions list. Skipped\\n\".format(r))\nreactions_to_run = tempset", "Test whether these reactions grow on ArgonneLB media\nWe can test whether this set of reactions grows on ArgonneLB media. The media is the same one we used above, and you can download the ArgonneLB.txt and text file and put it in the same directory as this iPython notebook to run it.\n(Note: we don't need to convert the media components, because the media and compounds come from the same source.)", "media = PyFBA.parse.read_media_file(\"/home/redwards/test_media/ArgonneLB.txt\")\nprint(\"Our media has {} components\".format(len(media)))", "Define a biomass equation\nThe biomass equation is the part that says whether the model will grow! This is a metabolism.reaction.Reaction object.", "biomass_equation = PyFBA.metabolism.biomass_equation()\n\nbiomass_equation.equation\n\nwith open('rbad.txt', 'w') as out:\n for r in reactions_to_run:\n out.write(f\"{r}\\n\")", "Run the FBA\nWith the reactions, compounds, reactions_to_run, media, and biomass model, we can test whether the model grows on this media.", "print(f\"Before running FBA there are {len(reactions)} reactions\")\nstatus, value, growth = PyFBA.fba.run_fba(compounds, reactions, reactions_to_run,\n media, biomass_equation)\nprint(f\"After running FBA there are {len(reactions)} reactions\")\nprint(\"Initial run has a biomass flux value of {} --> Growth: {}\".format(value, growth))\n\nprint(f\"There are {len(reactions_to_run)} reactions to run\")\n\nupsr = 0\nfor r in reactions_to_run:\n if r.startswith('upsr'):\n upsr += 1\nprint(f\"There are {upsr} uptake secretion reactions in reactions_to_run\")\nupsr = 0\nfor r in reactions:\n if r.startswith('upsr'):\n upsr += 1\nprint(f\"There are {upsr} uptake secretion reactions in reactions\")", "Will gap filling work?\nThese are the reactions from the C. 
sedlakii SBML file, and so if we add these, we should get growth!", "sbml_addnl = {'rxn00868', 'rxn01923', 'rxn02268', 'rxn10215', 'rxn10219', 'rxn08089', 'rxn10212', 'rxn08083', 'rxn10214', 'rxn10211', 'rxn10218', 'rxn08086', 'rxn10217', 'rxn08087', 'rxn08088', 'rxn08085', 'rxn10216', 'rxn08084', 'rxn10213', 'rxn05572', 'rxn05565', 'rxn00541', 'rxn10155', 'rxn10157', 'rxn05536', 'rxn05544', 'rxn12848', 'rxn12851', 'rxn05539', 'rxn05541', 'rxn05537', 'rxn05543', 'rxn12849', 'rxn05533', 'rxn05540', 'rxn05534', 'rxn05547', 'rxn05546', 'rxn05542', 'rxn05535', 'rxn12850', 'rxn05545', 'rxn05538', 'rxn05168', 'rxn05179', 'rxn05161', 'rxn03061', 'rxn09313', 'rxn08354', 'rxn08356', 'rxn09315', 'rxn05549', 'rxn05160', 'rxn05644', 'rxn05330', 'rxn05335', 'rxn05334', 'rxn05329', 'rxn05333', 'rxn05332', 'rxn05331', 'rxn05415', 'rxn05381', 'rxn05386', 'rxn05427', 'rxn05431', 'rxn05373', 'rxn05377', 'rxn05398', 'rxn05419', 'rxn05402', 'rxn05369', 'rxn05361', 'rxn05394', 'rxn05406', 'rxn05365', 'rxn05390', 'rxn05423', 'rxn05462', 'rxn05411', 'rxn03492', 'rxn04050', 'rxn08258', 'rxn04713', 'rxn00990', 'rxn00875', 'rxn08471', 'rxn05737', 'rxn08467', 'rxn10067', 'rxn08468', 'rxn08469', 'rxn08470', 'rxn02160', 'rxn05422', 'rxn05372', 'rxn05341', 'rxn05376', 'rxn05342', 'rxn05337', 'rxn05385', 'rxn05397', 'rxn05340', 'rxn05461', 'rxn05368', 'rxn05418', 'rxn05393', 'rxn05336', 'rxn05426', 'rxn05364', 'rxn05430', 'rxn05410', 'rxn05339', 'rxn05401', 'rxn05338', 'rxn05360', 'rxn05414', 'rxn05405', 'rxn05389', 'rxn05380', 'rxn03164', 'rxn05229', 'rxn07586', 'rxn05054', 'rxn04384', 'rxn00503', 'rxn00183', 'rxn05187', 'rxn05515', 'rxn02056', 'rxn09134', 'rxn09125', 'rxn09157', 'rxn09128', 'rxn09142', 'rxn09161', 'rxn09147', 'rxn09164', 'rxn09152', 'rxn09124', 'rxn09131', 'rxn09133', 'rxn09138', 'rxn09143', 'rxn09153', 'rxn09160', 'rxn09158', 'rxn09148', 'rxn09144', 'rxn09150', 'rxn09130', 'rxn09149', 'rxn09163', 'rxn09159', 'rxn09132', 'rxn09127', 'rxn09140', 'rxn09145', 'rxn09137', 'rxn09154', 'rxn09151', 'rxn09146', 'rxn09123', 'rxn09139', 'rxn09126', 'rxn09141', 'rxn09135', 'rxn09136', 'rxn09155', 'rxn09162', 'rxn09129', 'rxn09156', 'rxn02949', 'rxn03241', 'rxn03245', 'rxn02911', 'rxn02167', 'rxn03250', 'rxn02934', 'rxn03240', 'rxn03247', 'rxn05316', 'rxn09687', 'rxn05198', 'rxn09688', 'rxn05199', 'rxn05200', 'rxn09685', 'rxn05318', 'rxn05205', 'rxn05621', 'rxn05656', 'rxn05585', 'rxn05172', 'rxn05594', 'rxn05552', 'rxn05599', 'rxn05512', 'rxn05620', 'rxn01277', 'rxn05518', 'rxn05145', 'rxn05460', 'rxn05396', 'rxn05363', 'rxn05359', 'rxn05367', 'rxn05417', 'rxn05421', 'rxn05392', 'rxn05413', 'rxn05349', 'rxn05388', 'rxn05429', 'rxn05371', 'rxn05400', 'rxn05425', 'rxn05409', 'rxn05404', 'rxn05375', 'rxn05379', 'rxn05384', 'rxn04139', 'rxn00640', 'rxn05507', 'rxn05506', 'rxn01893', 'rxn00671', 'rxn00501', 'rxn10340', 'rxn10334', 'rxn10337', 'rxn10338', 'rxn10341', 'rxn10335', 'rxn10342', 'rxn10339', 'rxn10336', 'rxn00160', 'rxn01285', 'rxn04143', 'rxn01847', 'rxn01103', 'rxn00227', 'rxn05175', 'rxn05163', 'rxn05958', 'rxn05683', 'rxn05484', 'rxn02933', 'rxn04750', 'rxn03244', 'rxn01451', 'rxn03239', 'rxn03246', 'rxn03242', 'rxn03249', 'rxn06777', 'rxn05500', 'rxn01637', 'rxn01122', 'rxn04602', 'rxn02416', 'rxn04601', 'rxn04928', 'rxn05596', 'rxn02775', 'rxn04046', 'rxn07589', 'rxn03491', 'rxn10117', 'rxn10119', 'rxn08333', 'rxn04673', 'rxn10308', 'rxn10311', 'rxn10315', 'rxn10309', 'rxn10307', 'rxn10312', 'rxn10310', 'rxn10314', 'rxn08040', 'rxn10313', 'rxn12147', 'rxn03931', 'rxn03916', 'rxn04674', 'rxn03397', 
'rxn10094', 'rxn02286', 'rxn00555', 'rxn08709', 'rxn04052', 'rxn03512', 'rxn04045', 'rxn12224', 'rxn09188', 'rxn02359', 'rxn02008', 'rxn03643', 'rxn09177', 'rxn12512', 'rxn07587', 'rxn02507', 'rxn05202', 'rxn08291', 'rxn06865', 'rxn00303', 'rxn00222', 'rxn09978', 'rxn09979', 'rxn07588', 'rxn03919', 'rxn03435', 'rxn02187', 'rxn02186', 'rxn03436', 'rxn03068', 'rxn05317', 'rxn01219', 'rxn00364', 'rxn03514', 'rxn04048', 'rxn02792', 'rxn00350', 'rxn02791', 'rxn00171', 'rxn01000', 'rxn00675', 'rxn00175', 'rxn00986', 'rxn03932', 'rxn08712', 'rxn04113', 'rxn04996', 'rxn08756', 'rxn08352', 'rxn06023', 'rxn03136', 'rxn00800', 'rxn05165', 'rxn05181', 'rxn08194', 'rxn09180', 'rxn00670', 'rxn00173', 'rxn03644', 'rxn08619', 'rxn09289', 'rxn00776', 'rxn01360', 'rxn08335', 'rxn08336', 'rxn12500', 'rxn02287', 'rxn02774', 'rxn09167', 'rxn08708', 'rxn05156', 'rxn05151', 'rxn01629', 'rxn12146', 'rxn01123', 'rxn05147', 'rxn05173', 'rxn08707', 'rxn00927', 'rxn01299', 'rxn01226', 'rxn01545', 'rxn02476', 'rxn02011', 'rxn05201', 'rxn01895', 'rxn04604', 'rxn00830', 'rxn01403', 'rxn00179', 'rxn03991', 'rxn03990', 'rxn03975', 'rxn03974', 'rxn00818', 'rxn03838', 'rxn00817', 'rxn02596', 'rxn05555', 'rxn00056', 'rxn00212', 'rxn06979', 'rxn11544', 'rxn03918', 'rxn05559', 'rxn08345', 'rxn00509', 'rxn00006', 'rxn00834', 'rxn05293', 'rxn00634', 'rxn08618', 'rxn06848', 'rxn09997', 'rxn05938', 'rxn04783', 'rxn05206', 'rxn00102', 'rxn05937', 'rxn01644', 'rxn02938', 'rxn00792', 'rxn08711', 'rxn03513', 'rxn04047', 'rxn01265', 'rxn03394', 'rxn00777', 'rxn01106', 'rxn07492', 'rxn03538', 'rxn01480', 'rxn00119', 'rxn01517', 'rxn01966', 'rxn01132', 'rxn05162', 'rxn02277', 'rxn08257', 'rxn01352', 'rxn03540', 'rxn00789', 'rxn00508', 'rxn04386', 'rxn10481', 'rxn05528', 'rxn06077', 'rxn01671', 'rxn02929', 'rxn03917', 'rxn03135', 'rxn00469', 'rxn00791', 'rxn00756', 'rxn03087', 'rxn01329', 'rxn01917', 'rxn01879', 'rxn02285', 'rxn08710', 'rxn07438', 'rxn02321', 'rxn00787', 'rxn01289', 'rxn00851', 'rxn05297', 'rxn00062', 'rxn04132', 'rxn04133', 'rxn05319', 'rxn05467', 'rxn05468', 'rxn02374', 'rxn03012', 'rxn05064', 'rxn02666', 'rxn04457', 'rxn04456', 'rxn01664', 'rxn02916', 'rxn05667', 'rxn10571', 'rxn05195', 'rxn05645', 'rxn05144', 'rxn02988', 'rxn01256', 'rxn12604', 'rxn05039', 'rxn10904', 'rxn05499', 'rxn01152', 'rxn05691', 'rxn12893', 'rxn11116', 'rxn00880', 'rxn05593', 'rxn05469', 'rxn00186', 'rxn05694', 'rxn05491', 'rxn05682', 'rxn01748', 'rxn00327', 'rxn01746', 'rxn09656'}\n\nr2r_plussbml = copy.copy(reactions_to_run)\nprint(f\"Before adding sbml reactions there were {len(r2r_plussbml)}\")\nr2r_plussbml.update(sbml_addnl)\nprint(f\"After adding sbml reactions there were {len(r2r_plussbml)}\")\n\nprint(f\"Before running FBA there are {len(reactions)} reactions\")\nstatus, value, growth = PyFBA.fba.run_fba(compounds, reactions, r2r_plussbml,\n media, biomass_equation, verbose=True)\nprint(f\"After running FBA there are {len(reactions)} reactions\")\nprint(\"Initial run has a biomass flux value of {} --> Growth: {}\".format(value, growth))\n\nprint(f\"Before adding upsr reactions there were {len(r2r_plussbml)} reactions\")\nfor r in reactions:\n if r.startswith('upsr'):\n r2r_plussbml.update({r})\nprint(f\"After adding upsr reactions there were {len(r2r_plussbml)} reactions\")\n\nprint(f\"Before running FBA there are {len(reactions)} reactions\")\nstatus, value, growth = PyFBA.fba.run_fba(compounds, reactions, r2r_plussbml,\n media, biomass_equation, verbose=True)\nprint(f\"After running FBA there are {len(reactions)} 
reactions\")\nprint(\"Initial run has a biomass flux value of {} --> Growth: {}\".format(value, growth))\n\n# seems like we need EX_cpd00034\n\nupsr = 0\nfor r in reactions_to_run:\n if r.startswith('EX'):\n upsr += 1\nprint(f\"There are {upsr} EX reactions in reactions_to_run\")\nupsr = 0\nfor r in reactions:\n if r.startswith('EX'):\n upsr += 1\nprint(f\"There are {upsr} EX reactions in reactions\")\n\nbiomass_equation = PyFBA.metabolism.biomass_equation('standard')\nbiomass_equation.equation\n\nprint(f\"Before running FBA there are {len(reactions)} reactions\")\nstatus, value, growth = PyFBA.fba.run_fba(compounds, reactions, r2r_plussbml,\n media, biomass_equation, verbose=True)\nprint(f\"After running FBA there are {len(reactions)} reactions\")\nprint(\"Initial run has a biomass flux value of {} --> Growth: {}\".format(value, growth))\n\nuptake_secretion_reactions\n\nall_compounds = compounds\n# Filter for compounds that are boundary compounds\nfiltered_compounds = set()\nfor c in all_compounds:\n if not compounds[c].uptake_secretion:\n filtered_compounds.add(c)\nprint(f\"There are {len(all_compounds)} total compounds and {len(filtered_compounds)} filtered compounds\")\n\nwithout_ex = set()\nwith open('rwex.txt', 'r') as fin:\n for l in fin:\n l = l.strip()\n without_ex.add(l)\nwithout_ex\n\nprint(f\"Before running FBA there are {len(reactions)} reactions\")\nstatus, value, growth = PyFBA.fba.run_fba(compounds, reactions, without_ex,\n media, biomass_equation, verbose=True)\nprint(f\"After running FBA there are {len(reactions)} reactions\")\nprint(\"Initial run has a biomass flux value of {} --> Growth: {}\".format(value, growth))\n\nlen(without_ex)\n\nlen(reactions_to_run)", "it is the biomass model that is the problem\nLets take the biomass model from the SBML and see if this work.", "sbml_equation = '(0.00778132482043096) cpd00063: Ca2 (location: c) + (0.352889948968272) cpd00156: L_Valine (location: e) + (0.00778132482043096) cpd00030: Mn2 (location: e) + (0.00778132482043096) cpd00205: K (location: c) + (0.428732289454499) cpd00035: L_Alanine (location: e) + (0.128039715997337) cpd00060: L_Methionine (location: e) + (0.15480760087483) cpd00066: L_Phenylalanine (location: c) + (0.00778132482043096) cpd00017: S_Adenosyl_L_methionine (location: c) + (0.00778132482043096) cpd00010: CoA (location: c) + (0.0609084652443221) cpd15665: Peptidoglycan_polymer_n_subunits (location: c) + (0.0841036156544863) cpd00052: CTP (location: c) + (0.00778132482043096) cpd10516: fe3 (location: e) + (0.01468498342018) cpd00357: TTP (location: c) + (0.00778132482043096) cpd00099: Cl_ (location: e) + (0.01468498342018) cpd00356: dCTP (location: c) + (0.00778132482043096) cpd10515: Fe2 (location: e) + (0.00778132482043096) cpd00254: Mg (location: c) + (0.242249358141304) cpd00322: L_Isoleucine (location: e) + (0.00778132482043096) cpd00058: Cu2 (location: e) + (0.00778132482043096) cpd00149: Co2 (location: c) + (0.201205267995816) cpd00041: L_Aspartate (location: e) + (1) cpd17043: RNA_transcription (location: c) + (0.219496655995436) cpd00023: L_Glutamate (location: e) + (0.219496655995436) cpd00053: L_Glutamine (location: e) + (0.376088782528765) cpd00107: L_Leucine (location: e) + (0.00778132482043096) cpd00220: Riboflavin (location: e) + (0.179790960093822) cpd00054: L_Serine (location: e) + (0.0472899299502361) cpd00065: L_Tryptophan (location: e) + (0.0609084652443221) cpd02229: Bactoprenyl_diphosphate (location: c) + (0.00778132482043096) cpd11493: ACP (location: c) + (1) cpd17041: 
Protein_biosynthesis (location: c) + (0.184698405654696) cpd00129: L_Proline (location: e) + (0.135406821203723) cpd00038: GTP (location: c) + (0.01468498342018) cpd00241: dGTP (location: c) + (1) cpd17042: DNA_replication (location: c) + (0.211466290532188) cpd00161: L_Threonine (location: e) + (40.1101757365074) cpd00002: ATP (location: c) + (0.00778132482043096) cpd00016: Pyridoxal_phosphate (location: c) + (0.00778132482043096) cpd00048: Sulfate (location: e) + (0.00778132482043096) cpd00003: NAD (location: c) + (0.01468498342018) cpd00115: dATP (location: c) + (0.115101904973216) cpd00069: L_Tyrosine (location: e) + (0.00778132482043096) cpd00015: FAD (location: c) + (0.201205267995816) cpd00132: L_Asparagine (location: e) + (0.00778132482043096) cpd00006: NADP (location: c) + (35.5386858537513) cpd00001: H2O (location: e) + (0.0762884719008526) cpd00084: L_Cysteine (location: c) + (0.0794113918032267) cpd00119: L_Histidine (location: e) + (0.285970236774541) cpd00039: L_Lysine (location: e) + (0.0908319049068452) cpd00062: UTP (location: c) + (0.00778132482043096) cpd00034: Zn2 (location: e) + (0.247156803702178) cpd00051: L_Arginine (location: e) + (0.510820469745475) cpd00033: Glycine (location: e) > (40) cpd00008: ADP (location: c) + (39.9922186751796) cpd00009: Phosphate (location: e) + (0.00778132482043096) cpd12370: apo_ACP (location: c) + (1) cpd11416: Biomass (location: c) + (40) cpd00067: H (location: e) + (0.0609084652443221) cpd15666: Peptidoglycan_polymer_n_1_subunits (location: c) + (0.405833094852252) cpd00012: PPi (location: e)'\n\nsbml_left_compounds = {'cpd00066: L_Phenylalanine (location: c)' : 0.15480760087483, 'cpd00016: Pyridoxal_phosphate (location: c)' : 0.00778132482043096, 'cpd00132: L_Asparagine (location: e)' : 0.201205267995816, 'cpd00156: L_Valine (location: e)' : 0.352889948968272, 'cpd00099: Cl_ (location: e)' : 0.00778132482043096, 'cpd00038: GTP (location: c)' : 0.135406821203723, 'cpd00003: NAD (location: c)' : 0.00778132482043096, 'cpd17041: Protein_biosynthesis (location: c)' : 1.0, 'cpd00033: Glycine (location: e)' : 0.510820469745475, 'cpd00322: L_Isoleucine (location: e)' : 0.242249358141304, 'cpd00254: Mg (location: c)' : 0.00778132482043096, 'cpd17043: RNA_transcription (location: c)' : 1.0, 'cpd00048: Sulfate (location: e)' : 0.00778132482043096, 'cpd10515: Fe2 (location: e)' : 0.00778132482043096, 'cpd02229: Bactoprenyl_diphosphate (location: c)' : 0.0609084652443221, 'cpd11493: ACP (location: c)' : 0.00778132482043096, 'cpd00161: L_Threonine (location: e)' : 0.211466290532188, 'cpd00006: NADP (location: c)' : 0.00778132482043096, 'cpd00060: L_Methionine (location: e)' : 0.128039715997337, 'cpd00119: L_Histidine (location: e)' : 0.0794113918032267, 'cpd00052: CTP (location: c)' : 0.0841036156544863, 'cpd00051: L_Arginine (location: e)' : 0.247156803702178, 'cpd15665: Peptidoglycan_polymer_n_subunits (location: c)' : 0.0609084652443221, 'cpd00017: S_Adenosyl_L_methionine (location: c)' : 0.00778132482043096, 'cpd00030: Mn2 (location: e)' : 0.00778132482043096, 'cpd10516: fe3 (location: e)' : 0.00778132482043096, 'cpd00065: L_Tryptophan (location: e)' : 0.0472899299502361, 'cpd00084: L_Cysteine (location: c)' : 0.0762884719008526, 'cpd00023: L_Glutamate (location: e)' : 0.219496655995436, 'cpd17042: DNA_replication (location: c)' : 1.0, 'cpd00356: dCTP (location: c)' : 0.01468498342018, 'cpd00035: L_Alanine (location: e)' : 0.428732289454499, 'cpd00069: L_Tyrosine (location: e)' : 0.115101904973216, 'cpd00220: Riboflavin (location: e)' : 
0.00778132482043096, 'cpd00129: L_Proline (location: e)' : 0.184698405654696, 'cpd00357: TTP (location: c)' : 0.01468498342018, 'cpd00205: K (location: c)' : 0.00778132482043096, 'cpd00149: Co2 (location: c)' : 0.00778132482043096, 'cpd00063: Ca2 (location: c)' : 0.00778132482043096, 'cpd00054: L_Serine (location: e)' : 0.179790960093822, 'cpd00001: H2O (location: e)' : 35.5386858537513, 'cpd00010: CoA (location: c)' : 0.00778132482043096, 'cpd00015: FAD (location: c)' : 0.00778132482043096, 'cpd00062: UTP (location: c)' : 0.0908319049068452, 'cpd00107: L_Leucine (location: e)' : 0.376088782528765, 'cpd00241: dGTP (location: c)' : 0.01468498342018, 'cpd00053: L_Glutamine (location: e)' : 0.219496655995436, 'cpd00039: L_Lysine (location: e)' : 0.285970236774541, 'cpd00034: Zn2 (location: e)' : 0.00778132482043096, 'cpd00058: Cu2 (location: e)' : 0.00778132482043096, 'cpd00002: ATP (location: c)' : 40.1101757365074, 'cpd00041: L_Aspartate (location: e)' : 0.201205267995816, 'cpd00115: dATP (location: c)' : 0.01468498342018}\n\nsbml_right_compounds = {'cpd00067: H (location: e)' : 40.0, 'cpd00012: PPi (location: e)' : 0.405833094852252, 'cpd00008: ADP (location: c)' : 40.0, 'cpd11416: Biomass (location: c)' : 1.0, 'cpd12370: apo_ACP (location: c)' : 0.00778132482043096, 'cpd00009: Phosphate (location: e)' : 39.9922186751796, 'cpd15666: Peptidoglycan_polymer_n_1_subunits (location: c)' : 0.0609084652443221}\n\nsbml_biomass = PyFBA.metabolism.Reaction('sbml_biomass', 'sbml_biomass')\nsbml_biomass.equation = sbml_equation\nparsecomp = re.compile('^(cpd\\\\d+): (.*?) \\(location: (.)\\)')\nfor c in sbml_left_compounds:\n m = parsecomp.match(c)\n if not m:\n sys.stderr.write(f\"Can't parse {c}\\n\")\n if m.group(1) in compounds:\n if False and compounds[m.group(1)] != m.group(2):\n sys.stderr.write(f\"We had |{compounds[m.group(1)]}| for {m.group(1)} in the SBML, but now have |{m.group(2)}|\\n\")\n newcomp = PyFBA.metabolism.CompoundWithLocation.from_compound(compounds[m.group(1)], m.group(3))\n sbml_biomass.add_left_compounds({newcomp})\n sbml_biomass.set_left_compound_abundance(newcomp, sbml_left_compounds[c])\n else:\n print(f\"{m.group(1)} not found\")\n\nfor c in sbml_right_compounds:\n m = parsecomp.match(c)\n if not m:\n sys.stderr.write(f\"Can't parse {c}\\n\")\n if m.group(1) in compounds:\n if True and compounds[m.group(1)] != m.group(2):\n sys.stderr.write(f\"We had |{compounds[m.group(1)]}| for {m.group(1)} in the SBML, but now have |{m.group(2)}|\\n\")\n newcomp = PyFBA.metabolism.CompoundWithLocation.from_compound(compounds[m.group(1)], m.group(3))\n sbml_biomass.add_right_compounds({newcomp})\n sbml_biomass.set_right_compound_abundance(newcomp, sbml_right_compounds[c])\n else:\n print(f\"{m.group(1)} not found\")\n\nprint(f\"Before running FBA there are {len(reactions)} reactions\")\nstatus, value, growth = PyFBA.fba.run_fba(compounds, reactions, reactions_to_run,\n media, sbml_biomass, verbose=True)\nprint(f\"After running FBA there are {len(reactions)} reactions\")\nprint(\"Initial run has a biomass flux value of {} --> Growth: {}\".format(value, growth))", "Add the missing reactions", "all_reactions = {'rxn00868', 'rxn01923', 'rxn02268', 'rxn10215', 'rxn10219', 'rxn08089', 'rxn10212', 'rxn08083', 'rxn10214', 'rxn10211', 'rxn10218', 'rxn08086', 'rxn10217', 'rxn08087', 'rxn08088', 'rxn08085', 'rxn10216', 'rxn08084', 'rxn10213', 'rxn05572', 'rxn05565', 'rxn00541', 'rxn10155', 'rxn10157', 'rxn05536', 'rxn05544', 'rxn12848', 'rxn12851', 'rxn05539', 'rxn05541', 'rxn05537', 'rxn05543', 
'rxn12849', 'rxn05533', 'rxn05540', 'rxn05534', 'rxn05547', 'rxn05546', 'rxn05542', 'rxn05535', 'rxn12850', 'rxn05545', 'rxn05538', 'rxn05168', 'rxn05179', 'rxn05161', 'rxn09313', 'rxn08354', 'rxn08356', 'rxn09315', 'rxn05549', 'rxn05160', 'rxn05644', 'rxn05330', 'rxn05335', 'rxn05334', 'rxn05329', 'rxn05333', 'rxn05332', 'rxn05331', 'rxn05415', 'rxn05381', 'rxn05386', 'rxn05427', 'rxn05431', 'rxn05373', 'rxn05377', 'rxn05398', 'rxn05419', 'rxn05402', 'rxn05369', 'rxn05361', 'rxn05394', 'rxn05406', 'rxn05365', 'rxn05390', 'rxn05423', 'rxn05462', 'rxn05411', 'rxn03492', 'rxn04050', 'rxn08258', 'rxn04713', 'rxn00990', 'rxn00875', 'rxn08471', 'rxn05737', 'rxn08467', 'rxn10067', 'rxn08468', 'rxn08469', 'rxn08470', 'rxn01302', 'rxn01301', 'rxn05422', 'rxn05372', 'rxn05341', 'rxn05376', 'rxn05342', 'rxn05337', 'rxn05385', 'rxn05397', 'rxn05340', 'rxn05461', 'rxn05368', 'rxn05418', 'rxn05393', 'rxn05336', 'rxn05426', 'rxn05364', 'rxn05430', 'rxn05410', 'rxn05339', 'rxn05401', 'rxn05338', 'rxn05360', 'rxn05414', 'rxn05405', 'rxn05389', 'rxn05380', 'rxn03164', 'rxn05229', 'rxn07586', 'rxn05054', 'rxn04384', 'rxn00503', 'rxn00183', 'rxn05187', 'rxn05515', 'rxn02056', 'rxn09134', 'rxn09125', 'rxn09157', 'rxn09128', 'rxn09142', 'rxn09161', 'rxn09147', 'rxn09164', 'rxn09152', 'rxn09124', 'rxn09131', 'rxn09133', 'rxn09138', 'rxn09143', 'rxn09153', 'rxn09160', 'rxn09158', 'rxn09148', 'rxn09144', 'rxn09150', 'rxn09130', 'rxn09149', 'rxn09163', 'rxn09159', 'rxn09132', 'rxn09127', 'rxn09140', 'rxn09145', 'rxn09137', 'rxn09154', 'rxn09151', 'rxn09146', 'rxn09123', 'rxn09139', 'rxn09126', 'rxn09141', 'rxn09135', 'rxn09136', 'rxn09155', 'rxn09162', 'rxn09129', 'rxn09156', 'rxn02949', 'rxn03241', 'rxn03245', 'rxn02911', 'rxn02167', 'rxn03250', 'rxn02934', 'rxn03240', 'rxn03247', 'rxn05316', 'rxn09687', 'rxn05198', 'rxn09688', 'rxn05199', 'rxn05200', 'rxn09685', 'rxn05318', 'rxn05205', 'rxn05621', 'rxn05656', 'rxn05585', 'rxn05172', 'rxn05594', 'rxn05552', 'rxn05599', 'rxn05512', 'rxn05620', 'rxn01277', 'rxn05518', 'rxn05145', 'rxn05460', 'rxn05396', 'rxn05363', 'rxn05359', 'rxn05367', 'rxn05417', 'rxn05421', 'rxn05392', 'rxn05413', 'rxn05349', 'rxn05388', 'rxn05429', 'rxn05371', 'rxn05400', 'rxn05425', 'rxn05409', 'rxn05404', 'rxn05375', 'rxn05379', 'rxn05384', 'rxn04139', 'rxn00640', 'rxn05507', 'rxn05506', 'rxn01893', 'rxn00671', 'rxn00501', 'rxn10340', 'rxn10334', 'rxn10337', 'rxn10338', 'rxn10341', 'rxn10335', 'rxn10342', 'rxn10339', 'rxn10336', 'rxn00160', 'rxn01285', 'rxn04143', 'rxn01847', 'rxn01103', 'rxn00227', 'rxn05175', 'rxn05163', 'rxn05683', 'rxn05484', 'rxn02933', 'rxn04750', 'rxn03244', 'rxn01451', 'rxn03239', 'rxn03246', 'rxn03242', 'rxn03249', 'rxn06777', 'rxn05500', 'rxn01637', 'rxn01122', 'rxn04602', 'rxn02416', 'rxn04601', 'rxn04928', 'rxn05596', 'rxn02762', 'rxn02521', 'rxn02522', 'rxn03483', 'rxn02775', 'rxn04046', 'rxn07589', 'rxn03491', 'rxn10117', 'rxn10119', 'rxn08333', 'rxn04673', 'rxn10308', 'rxn10311', 'rxn10315', 'rxn10309', 'rxn10307', 'rxn10312', 'rxn10310', 'rxn10314', 'rxn08040', 'rxn10313', 'rxn12147', 'rxn03931', 'rxn03916', 'rxn04674', 'rxn03397', 'rxn10094', 'rxn02286', 'rxn02474', 'rxn00555', 'rxn08709', 'rxn04052', 'rxn03512', 'rxn12224', 'rxn09188', 'rxn02359', 'rxn02008', 'rxn08179', 'rxn08178', 'rxn03643', 'rxn09177', 'rxn12512', 'rxn07587', 'rxn02507', 'rxn08291', 'rxn06865', 'rxn00303', 'rxn00222', 'rxn09978', 'rxn09979', 'rxn07588', 'rxn04413', 'rxn03537', 'rxn03536', 'rxn03919', 'rxn03435', 'rxn02187', 'rxn02186', 'rxn03436', 'rxn03068', 'rxn05317', 'rxn01219', 
'rxn00364', 'rxn03514', 'rxn04048', 'rxn00544', 'rxn02792', 'rxn00350', 'rxn02791', 'rxn05221', 'rxn00675', 'rxn00175', 'rxn00986', 'rxn01507', 'rxn02400', 'rxn01670', 'rxn00363', 'rxn00708', 'rxn01218', 'rxn01521', 'rxn01445', 'rxn00913', 'rxn01145', 'rxn00132', 'rxn01961', 'rxn00831', 'rxn08712', 'rxn04113', 'rxn04996', 'rxn08756', 'rxn08352', 'rxn06023', 'rxn02449', 'rxn05165', 'rxn05181', 'rxn08194', 'rxn01093', 'rxn09180', 'rxn03644', 'rxn08619', 'rxn09289', 'rxn00776', 'rxn01360', 'rxn08335', 'rxn08336', 'rxn12500', 'rxn02287', 'rxn02774', 'rxn09167', 'rxn08708', 'rxn05156', 'rxn05151', 'rxn01629', 'rxn12146', 'rxn01123', 'rxn05147', 'rxn05173', 'rxn08707', 'rxn00927', 'rxn01299', 'rxn01226', 'rxn01545', 'rxn02476', 'rxn02011', 'rxn05201', 'rxn01895', 'rxn04604', 'rxn00830', 'rxn00179', 'rxn03991', 'rxn03990', 'rxn03975', 'rxn03974', 'rxn00818', 'rxn03838', 'rxn00817', 'rxn02596', 'rxn05555', 'rxn00056', 'rxn06979', 'rxn11544', 'rxn03918', 'rxn05559', 'rxn08345', 'rxn00509', 'rxn00205', 'rxn00006', 'rxn02473', 'rxn00834', 'rxn05293', 'rxn00105', 'rxn00634', 'rxn08618', 'rxn06848', 'rxn09997', 'rxn05938', 'rxn04783', 'rxn05206', 'rxn00102', 'rxn01644', 'rxn02938', 'rxn00792', 'rxn08711', 'rxn03513', 'rxn04047', 'rxn01265', 'rxn01404', 'rxn03394', 'rxn00777', 'rxn01106', 'rxn07492', 'rxn03538', 'rxn01480', 'rxn00119', 'rxn01517', 'rxn01966', 'rxn01132', 'rxn05162', 'rxn02277', 'rxn08257', 'rxn05197', 'rxn01352', 'rxn03540', 'rxn00789', 'rxn00508', 'rxn04386', 'rxn10481', 'rxn05528', 'rxn06077', 'rxn01671', 'rxn02929', 'rxn03917', 'rxn03135', 'rxn00469', 'rxn00756', 'rxn03087', 'rxn01329', 'rxn01917', 'rxn01879', 'rxn01538', 'rxn02285', 'rxn08710', 'rxn07438', 'rxn02321', 'rxn00787', 'rxn01289', 'rxn00851', 'rxn05297', 'rxn00062', 'rxn04132', 'rxn04133', 'rxn05319', 'rxn05467', 'rxn05468', 'rxn02374', 'rxn03012', 'rxn05064', 'rxn02666', 'rxn04457', 'rxn04456', 'rxn01664', 'rxn02916', 'rxn05667', 'rxn10571', 'rxn05195', 'rxn05645', 'rxn05144', 'rxn02988', 'rxn01256', 'rxn12604', 'rxn05039', 'rxn10904', 'rxn05499', 'rxn01152', 'rxn05691', 'rxn12893', 'rxn11116', 'rxn00880', 'rxn05593', 'rxn05469', 'rxn00186', 'rxn05694', 'rxn05491', 'rxn05682', 'rxn01748', 'rxn00327', 'rxn01746', 'rxn09656'}\n\nprint(f\"Before updating there are {len(reactions_to_run)} reactions\")\nr2ra = copy.copy(reactions_to_run)\nr2ra.update(all_reactions)\nprint(f\"After updating there are {len(r2ra)} reactions\")\n\nprint(f\"Before running FBA there are {len(reactions)} reactions\")\nstatus, value, growth = PyFBA.fba.run_fba(compounds, reactions, reactions_to_run,\n media, sbml_biomass, verbose=True)\nprint(f\"After running FBA there are {len(reactions)} reactions\")\nprint(\"Initial run has a biomass flux value of {} --> Growth: {}\".format(value, growth))\n\nnew_reactions = PyFBA.gapfill.suggest_from_media(compounds, reactions,\n reactions_to_run, media, verbose=False)\n\nprint(f\"There are {len(new_reactions)} new reactions to add\")\n\ntransrct = set()\nfor r in new_reactions:\n if reactions[r].is_transport:\n transrct.add(r)\nprint(f\"There are {len(transrct)} new transport reactions\")\n\nreactions_to_run.update(transrct)\n\nprint(f\"Before running FBA there are {len(reactions)} reactions\")\nstatus, value, growth = PyFBA.fba.run_fba(compounds, reactions, reactions_to_run,\n media, biomass_equation)\nprint(f\"After running FBA there are {len(reactions)} reactions\")\nprint(\"Initial run has a biomass flux value of {} --> Growth: {}\".format(value, growth))\n\nprint(f\"There are {len(reactions_to_run)} 
reactions to run\")", "Gap-fill the model\nSince the model does not grow on ArgonneLB we need to gap-fill it to ensure growth. There are several ways that we can gap-fill, and we will work through them until we get growth.\nAs you will see, we update the reactions_to_run list each time, and keep the media and everything else consistent. Then we just need to run the FBA like we have done above and see if we get growth.\nWe also keep a copy of the original reactions_to_run, and a list with all the reactions that we are adding, so once we are done we can go back and bisect the reactions that are added.", "added_reactions = []\noriginal_reactions_to_run = copy.copy(reactions_to_run)", "Media import reactions\nWe need to make sure that the cell can import everything that is in the media... otherwise it won't be able to grow. Be sure to only do this step if you are certain that the cell can grow on the media you are testing.", "update_type = 'media'\nnew_reactions = PyFBA.gapfill.suggest_from_media(compounds, reactions,\n reactions_to_run, media, verbose=True)\nadded_reactions.append((update_type, new_reactions))\n\nprint(f\"Before adding {update_type} reactions, we had {len(reactions_to_run)} reactions.\")\nreactions_to_run.update(new_reactions)\nprint(f\"After adding {update_type} reactions, we had {len(reactions_to_run)} reactions.\")\n\nfor r in reactions:\n if reactions[r].is_transport:\n print(r)\n\nfor r in reactions:\n for c in reactions[r].left_compounds:\n if c.location == 'e':\n if not reactions[r].is_transport:\n print(f\"Check {r}\")\n\nstatus, value, growth = PyFBA.fba.run_fba(compounds, reactions, reactions_to_run,\n media, biomass_equation)\nprint(\"Run has a biomass flux value of {} --> Growth: {}\".format(value, growth))", "Essential reactions\nThere are ~100 reactions that are in every model we have tested, and we construe these to be essential for all models, so we typically add these next!", "update_type = 'essential'\nnew_reactions = PyFBA.gapfill.suggest_essential_reactions()\nadded_reactions.append((update_type, new_reactions))\nprint(f\"Before adding {update_type} reactions, we had {len(reactions_to_run)} reactions.\")\nreactions_to_run.update(new_reactions)\nprint(f\"After adding {update_type} reactions, we had {len(reactions_to_run)} reactions.\")\n\nstatus, value, growth = PyFBA.fba.run_fba(compounds, reactions, reactions_to_run,\n media, biomass_equation)\nprint(\"Run has a biomass flux value of {} --> Growth: {}\".format(value, growth))", "Subsystems\nThe reactions connect us to subsystems (see Overbeek et al. 2014), and this test ensures that all the subsystems are complete. 
We add reactions required to complete the subsystem.", "update_type = 'subsystems'\nnew_reactions = \\\n PyFBA.gapfill.suggest_reactions_from_subsystems(reactions,\n reactions_to_run,\n threshold=0.5)\nadded_reactions.append((update_type, new_reactions))\nprint(f\"Before adding {update_type} reactions, we had {len(reactions_to_run)} reactions.\")\nreactions_to_run.update(new_reactions)\nprint(f\"After adding {update_type} reactions, we had {len(reactions_to_run)} reactions.\")\n\nstatus, value, growth = PyFBA.fba.run_fba(compounds, reactions, reactions_to_run,\n media, biomass_equation)\nprint(\"Run has a biomass flux value of {} --> Growth: {}\".format(value, growth))\n\npre_orphan=copy.copy(reactions_to_run)\npre_o_added=copy.copy(added_reactions)\nprint(\"Pre orphan has {} reactions\".format(len(pre_orphan)))", "Orphan compounds\nOrphan compounds are those compounds which are only associated with one reaction. They are either produced, or trying to be consumed. We need to add reaction(s) that complete the network of those compounds.\nYou can change the maximum number of reactions that a compound is in to be considered an orphan (try increasing it to 2 or 3).", "update_type = 'orphan compounds'\nnew_reactions = PyFBA.gapfill.suggest_by_compound(compounds, reactions,\n reactions_to_run,\n max_reactions=1)\nadded_reactions.append((update_type, new_reactions))\nprint(f\"Before adding {update_type} reactions, we had {len(reactions_to_run)} reactions.\")\nreactions_to_run.update(new_reactions)\nprint(f\"After adding {update_type} reactions, we had {len(reactions_to_run)} reactions.\")\n\nstatus, value, growth = PyFBA.fba.run_fba(compounds, reactions, reactions_to_run,\n media, biomass_equation)\nprint(\"Run has a biomass flux value of {} --> Growth: {}\".format(value, growth))", "Trimming the model\nNow that the model has been shown to grow on ArgonneLB media after several gap-fill iterations, we should trim down the reactions to only the required reactions necessary to observe growth.", "reqd_additional = set()\n\n# Begin loop through all gap-filled reactions\nwhile added_reactions:\n ori = copy.copy(original_reactions_to_run)\n ori.update(reqd_additional)\n # Test next set of gap-filled reactions\n # Each set is based on a method described above\n how, new = added_reactions.pop()\n sys.stderr.write(\"Testing reactions from {}\\n\".format(how))\n \n # Get all the other gap-filled reactions we need to add\n for tple in added_reactions:\n ori.update(tple[1])\n \n # Use minimization function to determine the minimal\n # set of gap-filled reactions from the current method\n new_essential = PyFBA.gapfill.minimize_additional_reactions(ori, new, compounds,\n reactions, media,\n biomass_equation)\n sys.stderr.write(\"Saved {} reactions from {}\\n\".format(len(new_essential), how))\n for r in new_essential:\n sys.stderr.write(r + \"\\n\")\n # Record the method used to determine\n # how the reaction was gap-filled\n for new_r in new_essential:\n reactions[new_r].is_gapfilled = True\n reactions[new_r].gapfill_method = how\n reqd_additional.update(new_essential)\n\n# Combine old and new reactions\nall_reactions = original_reactions_to_run.union(reqd_additional)\n\nstatus, value, growth = PyFBA.fba.run_fba(compounds, reactions, all_reactions,\n media, biomass_equation)\nprint(\"The biomass reaction has a flux of {} --> Growth: {}\".format(value, growth))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
rpmunoz/topicos_ingenieria_1
clase_1/02 - Lectura de datos con Pandas.ipynb
gpl-3.0
[ "Lectura y manipulación de datos con Pandas\nAutor: Roberto Muñoz <br />\nE-mail: &#114;&#109;&#117;&#110;&#111;&#122;&#64;&#117;&#99;&#46;&#99;&#108;\nThis notebook shows how to create Series and Dataframes with Pandas. Also, how to read CSV files and creaate pivot tables. The first part is based on the chapter 3 of the <a href=\" http://nbviewer.jupyter.org/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/03.01-Introducing-Pandas-Objects.ipynb\">Python Data Science Handbook</a>.", "import numpy as np\n\nfrom __future__ import print_function \n\nimport pandas as pd\npd.__version__", "1. The Pandas Series Object\nA Pandas Series is a one-dimensional array of indexed data. It can be created from a list or array as follows:", "data = pd.Series([0.25, 0.5, 0.75, 1.0])\ndata", "As we see in the output, the Series wraps both a sequence of values and a sequence of indices, which we can access with the values and index attributes. The values are simply a familiar NumPy array:", "data.values", "The index is an array-like object of type pd.Index, which we'll discuss in more detail momentarily.", "data.index", "Like with a NumPy array, data can be accessed by the associated index via the familiar Python square-bracket notation:", "data[1]", "Series as generalized NumPy array\nFrom what we've seen so far, it may look like the Series object is basically interchangeable with a one-dimensional NumPy array. The essential difference is the presence of the index: while the Numpy Array has an implicitly defined integer index used to access the values, the Pandas Series has an explicitly defined index associated with the values.", "data = pd.Series([0.25, 0.5, 0.75, 1.0],\n index=['a', 'b', 'c', 'd'])\ndata", "And the item access works as expected:", "data['b']", "Series as specialized dictionary\nIn this way, you can think of a Pandas Series a bit like a specialization of a Python dictionary. A dictionary is a structure that maps arbitrary keys to a set of arbitrary values, and a Series is a structure which maps typed keys to a set of typed values. This typing is important: just as the type-specific compiled code behind a NumPy array makes it more efficient than a Python list for certain operations, the type information of a Pandas Series makes it much more efficient than Python dictionaries for certain operations.", "population_dict = {'Arica y Parinacota': 243149,\n 'Antofagasta': 631875,\n 'Metropolitana de Santiago': 7399042,\n 'Valparaiso': 1842880,\n 'Bíobío': 2127902,\n 'Magallanes y Antártica Chilena': 165547}\npopulation = pd.Series(population_dict)\npopulation", "You can notice the indexes were sorted lexicographically. That's the default behaviour in Pandas", "population['Arica y Parinacota']", "Unlike a dictionary, though, the Series also supports array-style operations such as slicing:", "population['Metropolitana':'Valparaíso']", "2. The Pandas DataFrame Object\nThe next fundamental structure in Pandas is the DataFrame. Like the Series object discussed in the previous section, the DataFrame can be thought of either as a generalization of a NumPy array, or as a specialization of a Python dictionary. 
We'll now take a look at each of these perspectives.\nDataFrame as a generalized NumPy array\nIf a Series is an analog of a one-dimensional array with flexible indices, a DataFrame is an analog of a two-dimensional array with both flexible row indices and flexible column names.", "# Area in km^2\narea_dict = {'Arica y Parinacota': 16873.3,\n 'Antofagasta': 126049.1,\n 'Metropolitana de Santiago': 15403.2,\n 'Valparaiso': 16396.1,\n 'Bíobío': 37068.7,\n 'Magallanes y Antártica Chilena': 1382291.1}\narea = pd.Series(area_dict)\narea", "Now that we have this along with the population Series from before, we can use a dictionary to construct a single two-dimensional object containing this information:", "regions = pd.DataFrame({'population': population,\n 'area': area})\nregions\n\nregions.index\n\nregions.columns", "DataFrame as specialized dictionary\nSimilarly, we can also think of a DataFrame as a specialization of a dictionary. Where a dictionary maps a key to a value, a DataFrame maps a column name to a Series of column data. For example, asking for the 'area' attribute returns the Series object containing the areas we saw earlier:", "regions['area']", "Constructing DataFrame objects\nA Pandas DataFrame can be constructed in a variety of ways. Here we'll give several examples.\nFrom a single Series object¶\nA DataFrame is a collection of Series objects, and a single-column DataFrame can be constructed from a single Series:", "pd.DataFrame(population, columns=['population'])", "From a dictionary of Series objects\nAs we saw before, a DataFrame can be constructed from a dictionary of Series objects as well:", "pd.DataFrame({'population': population,\n 'area': area}, columns=['population', 'area'])", "3. Reading a CSV file and doing common Pandas operations", "regiones_file='data/chile_regiones.csv'\nprovincias_file='data/chile_provincias.csv'\ncomunas_file='data/chile_comunas.csv'\n\nregiones=pd.read_csv(regiones_file, header=0, sep=',')\nprovincias=pd.read_csv(provincias_file, header=0, sep=',')\ncomunas=pd.read_csv(comunas_file, header=0, sep=',')\n\nprint('regiones table: ', regiones.columns.values.tolist())\nprint('provincias table: ', provincias.columns.values.tolist())\nprint('comunas table: ', comunas.columns.values.tolist())\n\nregiones.head()\n\nprovincias.head()\n\ncomunas.head()\n\nregiones_provincias=pd.merge(regiones, provincias, how='outer')\nregiones_provincias.head()\n\nprovincias_comunas=pd.merge(provincias, comunas, how='outer')\nprovincias_comunas.head()\n\nregiones_provincias_comunas=pd.merge(regiones_provincias, comunas, how='outer')\nregiones_provincias_comunas.index.name='ID'\nregiones_provincias_comunas.head()\n\n#regiones_provincias_comunas.to_csv('chile_regiones_provincia_comuna.csv', index=False)", "4. Loading ful dataset", "data_file='data/chile_demographic.csv'\ndata=pd.read_csv(data_file, header=0, sep=',')\ndata\n\ndata.sort_values('Poblacion')\n\ndata.sort_values('Poblacion', ascending=False)\n\n(data.groupby(['Region'])['Poblacion','Superficie'].sum())\n\n(data.groupby(['Region'])['Poblacion','Superficie'].sum()).sort_values('Poblacion', ascending=False)\n\ndata.sort_values(['RegionID']).groupby(['RegionID','Region'])['Poblacion','Superficie'].sum()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
hadibakalim/deepLearning
01.neural_network/01.first_neural_net-linear_regression_1/01.linear_regression_1.ipynb
mit
[ "First Neural Networks\nWe have a dataset of measurement of different animals of brain weight and body weight. We want to predict an animal's body weight given its brain weight.\nSince our data is labeled this will be Supervised approach and type of machine learning task is called Regression.", "import pandas as pd\nfrom sklearn import linear_model\nimport matplotlib.pyplot as plt", "pandas will let us read the data. \nscikit-learn is the machine learning library\nmatplotlib will let us visualize our model and data\nRead the Data", "# read data\ndataframe = pd.read_fwf('brain_body.txt')\nx_values = dataframe[['Brain']]\ny_values = dataframe[['Body']]", "Train model on the Data", "body_reg = linear_model.LinearRegression()\nbody_reg.fit(x_values, y_values)", "Visualize results", "plt.scatter(x_values, y_values)\nplt.plot(x_values, body_reg.predict(x_values))\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
gojomo/gensim
docs/notebooks/atmodel_prediction_tutorial.ipynb
lgpl-2.1
[ "Authorship prediction with the author-topic model\nIn this tutorial, you will learn how to use the author-topic model in Gensim for authorship prediction, based on the topic distributions and mesuring their similarity.\nWe will train the author-topic model on a Reuters dataset, which contains 50 authors, each with 50 documents for trianing and another 50 documents for testing: https://archive.ics.uci.edu/ml/datasets/Reuter_50_50 .\nIf you wish to learn more about the Author-topic model and LDA and how to train them, you should check out these tutorials beforehand. A lot of the preprocessing and configuration here has been done using their example:\n* LDA training tips\n* Training the author-topic model\n\nNOTE:\nTo run this tutorial on your own, install Jupyter, Gensim, SpaCy, Scikit-Learn, Bokeh and Pandas, e.g. using pip:\npip install jupyter gensim spacy sklearn bokeh pandas\nNote that you need to download some data for SpaCy using python -m spacy.en.download.\nDownload the notebook at https://github.com/RaRe-Technologies/gensim/tree/develop/docs/notebooks/atmodel_prediction_tutorial.ipynb.\n\nPredicting the author of a document is a difficult task, where current approaches usually turn to neural networks. These base a lot of their predictions on learing stylistic and syntactic preferences of the authors and also other features which help rather identify the author. \nIn our case, we first model the domain knowledge of a certain author, based on what the author writes about. We do this by calculating the topic distributions for each author using the author-topic model.\nAfter that, we perform the new author inference on the held-out subset. This again calculates a topic distribution for this new unknown author. \nIn order to perform the prediction, we find out of all known authors, the most similar one to the new unknown. Mathematically speaking, we find the author, whose topic distribution is the closest to the topic distribution of the new author, by a certrain distrance function or metric. \nHere we explore the Hellinger distance for the measuring the distance between two discrete multinomial topic distributions.\nWe start off by downloading the dataset. 
You can do it manually using the aforementioned link, or run the following code cell.", "!wget -O - \"https://archive.ics.uci.edu/ml/machine-learning-databases/00217/C50.zip\" > /tmp/C50.zip\n\nimport logging\nlogging.basicConfig(format='%(asctime)s %(levelname)s:%(message)s', level=logging.DEBUG, datefmt='%I:%M:%S')\n\nimport zipfile\n\nfilename = '/tmp/C50.zip'\n\nzip_ref = zipfile.ZipFile(filename, 'r')\nzip_ref.extractall(\"/tmp/\")\nzip_ref.close()", "We wrap all the preprocessing steps, that you can find more about in the author-topic notebook , in one fucntion so that we are able to iterate over different preprocessing parameters.", "import os, re, io\ndef preprocess_docs(data_dir):\n doc_ids = []\n author2doc = {}\n docs = []\n \n folders = os.listdir(data_dir) # List of filenames.\n for authorname in folders:\n files = file = os.listdir(data_dir + '/' + authorname)\n for filen in files:\n (idx1, idx2) = re.search('[0-9]+', filen).span() # Matches the indexes of the start end end of the ID.\n if not author2doc.get(authorname):\n # This is a new author.\n author2doc[authorname] = []\n doc_id = str(int(filen[idx1:idx2]))\n doc_ids.append(doc_id)\n author2doc[authorname].extend([doc_id])\n\n # Read document text.\n # Note: ignoring characters that cause encoding errors.\n with io.open(data_dir + '/' + authorname + '/' + filen, errors='ignore', encoding='utf-8') as fid:\n txt = fid.read()\n\n # Replace any whitespace (newline, tabs, etc.) by a single space.\n txt = re.sub('\\s', ' ', txt)\n docs.append(txt)\n \n doc_id_dict = dict(zip(doc_ids, range(len(doc_ids))))\n # Replace dataset IDs by integer IDs.\n for a, a_doc_ids in author2doc.items():\n for i, doc_id in enumerate(a_doc_ids):\n author2doc[a][i] = doc_id_dict[doc_id]\n import spacy\n nlp = spacy.load('en')\n \n %%time\n processed_docs = []\n for doc in nlp.pipe(docs, n_threads=4, batch_size=100):\n # Process document using Spacy NLP pipeline.\n\n ents = doc.ents # Named entities.\n\n # Keep only words (no numbers, no punctuation).\n # Lemmatize tokens, remove punctuation and remove stopwords.\n doc = [token.lemma_ for token in doc if token.is_alpha and not token.is_stop]\n\n # Remove common words from a stopword list.\n #doc = [token for token in doc if token not in STOPWORDS]\n\n # Add named entities, but only if they are a compound of more than word.\n doc.extend([str(entity) for entity in ents if len(entity) > 1])\n processed_docs.append(doc)\n docs = processed_docs\n del processed_docs\n \n # Compute bigrams.\n\n from gensim.models import Phrases\n\n # Add bigrams and trigrams to docs (only ones that appear 20 times or more).\n bigram = Phrases(docs, min_count=20)\n for idx in range(len(docs)):\n for token in bigram[docs[idx]]:\n if '_' in token:\n # Token is a bigram, add to document.\n docs[idx].append(token)\n return docs, author2doc", "We create the corpus of the train and test data using two separate functions, since each corpus is tied to a certain dictionary which maps the words to their ids. Also in order to create the test corpus, we use the dictionary from the train data, since the trained model has have the same id2word reference as the new test data. 
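A minimal sketch of the idea with made-up tokens (not part of the original tutorial):\n\n```python\nfrom gensim.corpora import Dictionary\n\ntrain_dictionary = Dictionary([['topic', 'model', 'author']])\n# New, unseen documents are converted with the *training* dictionary, so 'topic'\n# keeps the integer id it had during training; out-of-vocabulary words are dropped.\nnew_bow = train_dictionary.doc2bow(['topic', 'topic', 'unseen_word'])\nprint(new_bow)\n```\n\nIn short, the test corpus has to be built with the training dictionary. 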
Otherwise token with id 1 from the test data wont't mean the same as the trained upon token with id 1 in the model.", "def create_corpus_dictionary(docs, max_freq=0.5, min_wordcount=20):\n # Create a dictionary representation of the documents, and filter out frequent and rare words.\n from gensim.corpora import Dictionary\n dictionary = Dictionary(docs)\n\n # Remove rare and common tokens.\n # Filter out words that occur too frequently or too rarely.\n max_freq = max_freq\n min_wordcount = min_wordcount\n dictionary.filter_extremes(no_below=min_wordcount, no_above=max_freq)\n\n _ = dictionary[0] # This sort of \"initializes\" dictionary.id2token.\n\n # Vectorize data.\n # Bag-of-words representation of the documents.\n corpus = [dictionary.doc2bow(doc) for doc in docs]\n\n return corpus, dictionary\n\ndef create_test_corpus(train_dictionary, docs):\n # Create test corpus using the dictionary from the train data.\n return [train_dictionary.doc2bow(doc) for doc in docs]", "For our first training, we specify that we want the parameters max_freq and min_wordcoun to be 50 and 20, as proposed by the original notebook tutorial. We will find out if this configuration is good enough for us.", "traindata_dir = \"/tmp/C50train\"\ntrain_docs, train_author2doc = preprocess_docs(traindata_dir)\ntrain_corpus_50_20, train_dictionary_50_20 = create_corpus_dictionary(train_docs, 0.5, 20)\n\nprint('Number of unique tokens: %d' % len(train_dictionary_50_20))\n\ntestdata_dir = \"/tmp/C50test\"\ntest_docs, test_author2doc = preprocess_docs(testdata_dir)\ntest_corpus_50_20 = create_test_corpus(train_dictionary_50_20, test_docs)", "We wrap the model training also in a function, in order to, again, be able to iterate over different parametrizations.", "def train_model(corpus, author2doc, dictionary, num_topics=20, eval_every=0, iterations=50, passes=20):\n from gensim.models import AuthorTopicModel\n \n model = AuthorTopicModel(corpus=corpus, num_topics=num_topics, id2word=dictionary.id2token, \\\n author2doc=author2doc, chunksize=2500, passes=passes, \\\n eval_every=eval_every, iterations=iterations, random_state=1)\n top_topics = model.top_topics(corpus)\n tc = sum([t[1] for t in top_topics]) \n print(tc / num_topics)\n return model\n\n# NOTE: Author of the logic of this function is the Olavur Mortensen, from his notebook tutorial.\n\ndef predict_author(new_doc, atmodel, top_n=10, smallest_author=1):\n from gensim import matutils\n import pandas as pd\n\n def similarity(vec1, vec2):\n '''Get similarity between two vectors'''\n dist = matutils.hellinger(matutils.sparse2full(vec1, atmodel.num_topics), \\\n matutils.sparse2full(vec2, atmodel.num_topics))\n sim = 1.0 / (1.0 + dist)\n return sim\n\n def get_sims(vec):\n '''Get similarity of vector to all authors.'''\n sims = [similarity(vec, vec2) for vec2 in author_vecs]\n return sims\n\n author_vecs = [atmodel.get_author_topics(author) for author in atmodel.id2author.values()]\n new_doc_topics = atmodel.get_new_author_topics(new_doc)\n # Get similarities.\n sims = get_sims(new_doc_topics)\n\n # Arrange author names, similarities, and author sizes in a list of tuples.\n table = []\n for elem in enumerate(sims):\n author_name = atmodel.id2author[elem[0]]\n sim = elem[1]\n author_size = len(atmodel.author2doc[author_name])\n if author_size >= smallest_author:\n table.append((author_name, sim, author_size))\n\n # Make dataframe and retrieve top authors.\n df = pd.DataFrame(table, columns=['Author', 'Score', 'Size'])\n df = df.sort_values('Score', 
ascending=False)[:top_n]\n\n return df\n", "We define a custom function, which measures the prediction accuracy, following the precision at k principle. We parametrize the accuracy by a parameter k, k=1 meaning we need an exact match in order to be accurate, k=5 meaning our prediction has be in the top 5 results, ordered by similarity.", "def prediction_accuracy(test_author2doc, test_corpus, model, k=5):\n\n print(\"Precision@k: top_n={}\".format(k))\n matches=0\n tries = 0\n for author in test_author2doc:\n author_id = model.author2id[author]\n for doc_id in test_author2doc[author]:\n predicted_authors = predict_author(test_corpus[doc_id:doc_id+1], atmodel=model, top_n=k)\n tries = tries+1\n if author_id in predicted_authors[\"Author\"]:\n matches=matches+1\n\n accuracy = matches/tries\n print(\"Prediction accuracy: {}\".format(accuracy))\n return accuracy, k\n\ndef plot_accuracy(scores1, label1, scores2=None, label2=None):\n \n import matplotlib.pyplot as plt\n s = [score*100 for score in scores1.values()]\n t = list(scores1.keys())\n\n plt.plot(t, s, \"b-\", label=label1)\n plt.plot(t, s, \"r^\", label=label1+\" data points\")\n \n if scores2 is not None:\n s2 = [score*100 for score in scores2.values()]\n plt.plot(t, s2, label=label2)\n plt.plot(t, s2, \"o\", label=label2+\" data points\")\n \n plt.legend(loc=\"lower right\")\n\n plt.xlabel('parameter k')\n plt.ylabel('prediction accuracy')\n plt.title('Precision at k')\n plt.xticks(t)\n plt.grid(True)\n plt.yticks([30,40,50,60,70,80,90,100])\n plt.axis([0, 11, 30, 100])\n plt.show()", "We calculate the accuracy for a range of values for k=[1,2,3,4,5,6,8,10] and plot how exactly the prediction accuracy naturally rises with higher k.", "atmodel_standard = train_model(train_corpus_50_20, train_author2doc, train_dictionary_50_20)", "We run our first training and observe that the passes and iterations parameters are set high enough, so that the model converges.\n07:47:24 INFO:PROGRESS: pass 15, at document #2500/2500\n07:47:24 DEBUG:performing inference on a chunk of 2500 documents \n07:47:27 DEBUG:2500/2500 documents converged within 50 iterations \nTells us that the model indeed conveges well.", "accuracy_scores_20topic={}\nfor i in [1,2,3,4,5,6,8,10]:\n accuracy, k = prediction_accuracy(test_author2doc, test_corpus_50_20, atmodel_standard, k=i)\n accuracy_scores_20topic[k] = accuracy\n \nplot_accuracy(scores1=accuracy_scores_20topic, label1=\"20 topics\")", "This is a rather poor accuracy performace. We increase the number of topic to 100.", "atmodel_100topics = train_model(train_corpus_50_20, train_author2doc, train_dictionary_50_20, num_topics=100, eval_every=0, iterations=50, passes=10)\n\naccuracy_scores_100topic={}\nfor i in [1,2,3,4,5,6,8,10]:\n accuracy, k = prediction_accuracy(test_author2doc, test_corpus_50_20, atmodel_100topics, k=i)\n accuracy_scores_100topic[k] = accuracy\n \nplot_accuracy(scores1=accuracy_scores_20topic, label1=\"20 topics\", scores2=accuracy_scores_100topic, label2=\"100 topics\")", "The 100-topic model is much more accurate than the 20-topic model. 
We continue to increase the number of topics until convergence.", "atmodel_150topics = train_model(train_corpus_50_20, train_author2doc, train_dictionary_50_20, num_topics=150, eval_every=0, iterations=50, passes=15)\n\naccuracy_scores_150topic={}\nfor i in [1,2,3,4,5,6,8,10]:\n accuracy, k = prediction_accuracy(test_author2doc, test_corpus_50_20, atmodel_150topics, k=i)\n accuracy_scores_150topic[k] = accuracy\n \nplot_accuracy(scores1=accuracy_scores_100topic, label1=\"100 topics\", scores2=accuracy_scores_150topic, label2=\"150 topics\")", "The 150-topic model is also slightly better, especially at the lower end of k. But we clearly see convergence. We try 200 topics to be sure.", "atmodel_200topics = train_model(train_corpus_50_20, train_author2doc, train_dictionary_50_20, num_topics=200, eval_every=0, iterations=50, passes=15)\n\naccuracy_scores_200topic={}\nfor i in [1,2,3,4,5,6,8,10]:\n accuracy, k = prediction_accuracy(test_author2doc, test_corpus_50_20, atmodel_200topics, k=i)\n accuracy_scores_200topic[k] = accuracy\n \nplot_accuracy(scores1=accuracy_scores_150topic, label1=\"150 topics\", scores2=accuracy_scores_200topic, label2=\"200 topics\")", "The 200-topic model seems to perform a bit better for lower k, which might be due to a slight overrepresentation at high topic numbers. So let us stop increasing the number of topics and focus some more on the dictionary. We choose either one of the models.\nCurrently we are filtering out tokens that appear in more than 50% of all documents or in fewer than 20 documents, which drastically decreases the size of our dictionary.\nWe know about this dataset that the underlying topics are not very diverse and are structured around the corporate/industrial topic class. Thus it makes sense to increase the dictionary size by filtering out fewer tokens.\nWe set the parameters max_freq=25% and min_wordcount=10.", "train_corpus_25_10, train_dictionary_25_10 = create_corpus_dictionary(train_docs, 0.25, 10)\n\ntest_corpus_25_10 = create_test_corpus(train_dictionary_25_10, test_docs)\n\nprint('Number of unique tokens: %d' % len(train_dictionary_25_10))", "We have now nearly doubled the number of tokens. Let's train and evaluate.", "atmodel_150topics_25_10 = train_model(train_corpus_25_10, train_author2doc, train_dictionary_25_10, num_topics=150, eval_every=0, iterations=50, passes=15)\n\naccuracy_scores_150topic_25_10={}\nfor i in [1,2,3,4,5,6,8,10]:\n accuracy, k = prediction_accuracy(test_author2doc, test_corpus_25_10, atmodel_150topics_25_10, k=i)\n accuracy_scores_150topic_25_10[k] = accuracy\n \nplot_accuracy(scores1=accuracy_scores_150topic_25_10, label1=\"150 topics, max_freq=25%, min_wordcount=10\", scores2=accuracy_scores_150topic, label2=\"150 topics, standard\")", "The results seem rather ambiguous and do not show a clear trend, which is why we stop iterating here." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
stevetjoa/stanford-mir
autocorrelation.ipynb
mit
[ "%matplotlib inline\nimport numpy, scipy, matplotlib.pyplot as plt, IPython.display as ipd\nimport librosa, librosa.display\nimport stanford_mir; stanford_mir.init()", "&larr; Back to Index\nAutocorrelation\nThe autocorrelation of a signal describes the similarity of a signal against a time-shifted version of itself. For a signal $x$, the autocorrelation $r$ is:\n$$ r(k) = \\sum_n x(n) x(n-k) $$\nIn this equation, $k$ is often called the lag parameter. $r(k)$ is maximized at $k = 0$ and is symmetric about $k$.\nThe autocorrelation is useful for finding repeated patterns in a signal. For example, at short lags, the autocorrelation can tell us something about the signal's fundamental frequency. For longer lags, the autocorrelation may tell us something about the tempo of a musical signal.\nLet's load a file:", "x, sr = librosa.load('audio/c_strum.wav')\nipd.Audio(x, rate=sr)\n\nplt.figure(figsize=(14, 5))\nlibrosa.display.waveplot(x, sr)", "numpy.correlate\nThere are two ways we can compute the autocorrelation in Python. The first method is numpy.correlate:", "# Because the autocorrelation produces a symmetric signal, we only care about the \"right half\".\nr = numpy.correlate(x, x, mode='full')[len(x)-1:]\nprint(x.shape, r.shape)", "Plot the autocorrelation:", "plt.figure(figsize=(14, 5))\nplt.plot(r[:10000])\nplt.xlabel('Lag (samples)')\nplt.xlim(0, 10000)", "librosa.autocorrelate\nThe second method is librosa.autocorrelate:", "r = librosa.autocorrelate(x, max_size=10000)\nprint(r.shape)\n\nplt.figure(figsize=(14, 5))\nplt.plot(r)\nplt.xlabel('Lag (samples)')\nplt.xlim(0, 10000)", "librosa.autocorrelate conveniently only keeps one half of the autocorrelation function, since the autocorrelation is symmetric. Also, the max_size parameter prevents unnecessary calculations.\nPitch Estimation\nThe autocorrelation is used to find repeated patterns within a signal. For musical signals, a repeated pattern can correspond to a pitch period. We can therefore use the autocorrelation function to estimate the pitch in a musical signal.", "x, sr = librosa.load('audio/oboe_c6.wav')\nipd.Audio(x, rate=sr)", "Compute and plot the autocorrelation:", "r = librosa.autocorrelate(x, max_size=5000)\nplt.figure(figsize=(14, 5))\nplt.plot(r[:200])", "The autocorrelation always has a maximum at zero, i.e. zero lag. We want to identify the maximum outside of the peak centered at zero. Therefore, we might choose only to search within a range of reasonable pitches:", "midi_hi = 120.0\nmidi_lo = 12.0\nf_hi = librosa.midi_to_hz(midi_hi)\nf_lo = librosa.midi_to_hz(midi_lo)\nt_lo = sr/f_hi\nt_hi = sr/f_lo\n\nprint(f_lo, f_hi)\nprint(t_lo, t_hi)", "Set invalid pitch candidates to zero:", "r[:int(t_lo)] = 0\nr[int(t_hi):] = 0\n\nplt.figure(figsize=(14, 5))\nplt.plot(r[:1400])", "Find the location of the maximum:", "t_max = r.argmax()\nprint(t_max)", "Finally, estimate the pitch in Hertz:", "float(sr)/t_max", "Indeed, that is very close to the true frequency of C6:", "librosa.midi_to_hz(84)", "Tempo Estimation\nWhen perfomed upon a novelty function, an autocorrelation can provide some notion of tempo.\nFor more, see the notebook Tempo Estimation.\n&larr; Back to Index" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
icrtiou/coursera-ML
ex5-bias vs variance/2- regularization of linear regression.ipynb
mit
[ "%reload_ext autoreload\n%autoreload 2\n%matplotlib inline\n\nimport sys\nsys.path.append('..')\n\nfrom helper import linear_regression as lr\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\nX, y, Xval, yval, Xtest, ytest = lr.load_data()\n# insert the intercept data of every X\nX, Xval, Xtest = [np.insert(x.reshape(x.shape[0], 1), 0, np.ones(x.shape[0]), axis=1) for x in (X, Xval, Xtest)]", "cost\n<img style=\"float: left;\" src=\"../img/linear_cost.png\">", "theta = np.ones(X.shape[1])\nlr.cost(theta, X, y)", "regularized cost\n<img style=\"float: left;\" src=\"../img/linear_reg_cost.png\">", "lr.regularized_cost(theta, X, y)", "gradient\n<img style=\"float: left;\" src=\"../img/linear_gradient.png\">", "lr.gradient(theta, X, y)", "regularized gradient\n<img style=\"float: left;\" src=\"../img/linear_reg_gradient.png\">", "lr.regularized_gradient(theta, X, y)", "fit the data\n\nregularization term $\\lambda=0$", "theta = np.ones(X.shape[0])\n\nfinal_theta = lr.linear_regression_np(X, y, l=0).get('x')\n\nb = final_theta[0] # intercept\nm = final_theta[1] # slope\n\nplt.scatter(X[:,1], y, label=\"Training data\")\nplt.plot(X[:, 1], X[:, 1]*m + b, label=\"Prediction\")\nplt.legend(loc=2)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
openearth/notebooks
sealevelmonitor.ipynb
gpl-3.0
[ "Sealevel monitor\nThis document is used to monitor the current sea level along the Dutch coast. The sea level is measured using a number of tide gauges. Six long running tide gauges are considered \"main stations\". The mean of these stations is used to estimate the \"current sea-level rise\". The measurements since 1890 are taken into account. Measurements before that are considered less valid because the Amsterdam Ordnance Datum was not yet normalized.", "# this is a list of packages that are used in this notebook\n# these come with python\nimport io\nimport zipfile\nimport functools\n\n# you can install these packages using pip or anaconda\n# (requests numpy pandas bokeh pyproj statsmodels)\n\n# for downloading\nimport requests\n\n# computation libraries\nimport numpy as np\nimport pandas\n\n# coordinate systems\nimport pyproj \n\n# statistics\nimport statsmodels.api as sm\n\n# plotting\nimport bokeh.charts\nimport bokeh.io\nimport bokeh.plotting\nimport bokeh.tile_providers\nimport bokeh.palettes\n\n# displaying things\nfrom ipywidgets import Image\nimport IPython.display\n\n# Some coordinate systems\nWEBMERCATOR = pyproj.Proj(init='epsg:3857')\nWGS84 = pyproj.Proj(init='epsg:4326')\n\n# If this notebook is not showing up with figures, you can use the following url:\n# https://nbviewer.ipython.org/github/openearth/notebooks/blob/master/sealevelmonitor.ipynb\nbokeh.io.output_notebook()\n", "The global collection of tide gauge records at the PSMSL is used to access the data. The other way to access the data is to ask the service desk data at Rijkswaterstaat. There are two types of datasets the \"Revised Local Reference\" and \"Metric\". For the Netherlands the difference is that the \"Revised Local Reference\" undoes the corrections from the NAP correction in 2014, to get a consistent dataset.", "urls = {\n 'metric_monthly': 'http://www.psmsl.org/data/obtaining/met.monthly.data/met_monthly.zip',\n 'rlr_monthly': 'http://www.psmsl.org/data/obtaining/rlr.annual.data/rlr_monthly.zip',\n 'rlr_annual': 'http://www.psmsl.org/data/obtaining/rlr.annual.data/rlr_annual.zip'\n}\ndataset_name = 'rlr_annual'\n\n# these compute the rlr back to NAP (ignoring the undoing of the NAP correction)\nmain_stations = {\n 20: {\n 'name': 'Vlissingen', \n 'rlr2nap': lambda x: x - (6976-46)\n },\n 22: {\n 'name': 'Hoek van Holland', \n 'rlr2nap': lambda x:x - (6994 - 121)\n },\n 23: {\n 'name': 'Den Helder', \n 'rlr2nap': lambda x: x - (6988-42)\n },\n 24: {\n 'name': 'Delfzijl', \n 'rlr2nap': lambda x: x - (6978-155)\n },\n 25: {\n 'name': 'Harlingen', \n 'rlr2nap': lambda x: x - (7036-122)\n },\n 32: {\n 'name': 'IJmuiden', \n 'rlr2nap': lambda x: x - (7033-83)\n }\n}\n\n# the main stations are defined by their ids\nmain_stations_idx = list(main_stations.keys())\nmain_stations_idx\n\n# download the zipfile\nresp = requests.get(urls[dataset_name])\n\n# we can read the zipfile\nstream = io.BytesIO(resp.content)\nzf = zipfile.ZipFile(stream)\n\n# this list contains a table of \n# station ID, latitude, longitude, station name, coastline code, station code, and quality flag\ncsvtext = zf.read('{}/filelist.txt'.format(dataset_name))\n\nstations = pandas.read_csv(\n io.BytesIO(csvtext), \n sep=';',\n names=('id', 'lat', 'lon', 'name', 'coastline_code', 'station_code', 'quality'),\n converters={\n 'name': str.strip,\n 'quality': str.strip\n }\n)\nstations = stations.set_index('id')\n\n# the dutch stations in the PSMSL database, make a copy\n# or use stations.coastline_code == 150 for all dutch 
stations\nselected_stations = stations.ix[main_stations_idx].copy()\n# set the main stations, this should be a list of 6 stations\nselected_stations\n\n# show all the stations on a map\n\n# compute the bounds of the plot\nsw = (50, -5)\nne = (55, 10)\n# transform to web mercator\nsw_wm = pyproj.transform(WGS84, WEBMERCATOR, sw[1], sw[0])\nne_wm = pyproj.transform(WGS84, WEBMERCATOR, ne[1], ne[0])\n# create a plot\nfig = bokeh.plotting.figure(tools='pan, wheel_zoom', plot_width=600, plot_height=200, x_range=(sw_wm[0], ne_wm[0]), y_range=(sw_wm[1], ne_wm[1]))\nfig.axis.visible = False\n# add some background tiles\nfig.add_tile(bokeh.tile_providers.STAMEN_TERRAIN)\n# add the stations\nx, y = pyproj.transform(WGS84, WEBMERCATOR, np.array(stations.lon), np.array(stations.lat))\nfig.circle(x, y)\nx, y = pyproj.transform(WGS84, WEBMERCATOR, np.array(selected_stations.lon), np.array(selected_stations.lat))\n_ = fig.circle(x, y, color='red')\n\n\n# show the plot\nbokeh.io.show(fig)", "Now that we have defined which tide gauges we are monitoring we can start downloading the relevant data.", "# each station has a number of files that you can look at.\n# here we define a template for each filename\n\n# stations that we are using for our computation\n# define the name formats for the relevant files\nnames = {\n 'datum': '{dataset}/RLR_info/{id}.txt',\n 'diagram': '{dataset}/RLR_info/{id}.png',\n 'url': 'http://www.psmsl.org/data/obtaining/rlr.diagrams/{id}.php',\n 'data': '{dataset}/data/{id}.rlrdata',\n 'doc': '{dataset}/docu/{id}.txt',\n 'contact': '{dataset}/docu/{id}_auth.txt'\n}\n\ndef get_url(station, dataset):\n \"\"\"return the url of the station information (diagram and datum)\"\"\"\n info = dict(\n dataset=dataset,\n id=station.name\n )\n url = names['url'].format(**info)\n return url\n# fill in the dataset parameter using the global dataset_name\nf = functools.partial(get_url, dataset=dataset_name)\n# compute the url for each station\nselected_stations['url'] = selected_stations.apply(f, axis=1)\nselected_stations\n\ndef missing2nan(value, missing=-99999):\n \"\"\"convert the value to nan if the float of value equals the missing value\"\"\"\n value = float(value)\n if value == missing:\n return np.nan\n return value\n\ndef get_data(station, dataset):\n \"\"\"get data for the station (pandas record) from the dataset (url)\"\"\"\n info = dict(\n dataset=dataset,\n id=station.name\n )\n bytes = zf.read(names['data'].format(**info))\n df = pandas.read_csv(\n io.BytesIO(bytes), \n sep=';', \n names=('year', 'height', 'interpolated', 'flags'),\n converters={\n \"height\": lambda x: main_stations[station.name]['rlr2nap'](missing2nan(x)),\n \"interpolated\": str.strip,\n }\n )\n df['station'] = station.name\n return df\n\n# get data for all stations\nf = functools.partial(get_data, dataset=dataset_name)\n# look up the data for each station\nselected_stations['data'] = [f(station) for _, station in selected_stations.iterrows()]\n\n# we now have data for each station\nselected_stations[['name', 'data']]", "Now that we have all data downloaded we can compute the mean.", "# compute the mean\ngrouped = pandas.concat(selected_stations['data'].tolist())[['year', 'height']].groupby('year')\nmean_df = grouped.mean().reset_index()\n# filter out non-trusted part (before NAP)\nmean_df = mean_df[mean_df['year'] >= 1890].copy()\n\n# these are the mean waterlevels \nmean_df.tail()\n\n# show all the stations, including the mean\ntitle = 'Sea-surface height for Dutch tide gauges [{year_min} - {year_max}]'.format(\n 
year_min=mean_df.year.min(),\n year_max=mean_df.year.max() \n)\nfig = bokeh.plotting.figure(title=title, x_range=(1860, 2020), plot_width=900, plot_height=400)\ncolors = bokeh.palettes.Accent6\nfor color, (id_, station) in zip(colors, selected_stations.iterrows()):\n data = station['data']\n fig.circle(data.year, data.height, color=color, legend=station['name'], alpha=0.5)\nfig.line(mean_df.year, mean_df.height, line_width=3, alpha=0.7, color='black', legend='Mean')\nfig.legend.location = \"bottom_right\"\nfig.yaxis.axis_label = 'waterlevel [mm] above NAP'\nfig.xaxis.axis_label = 'year'\n\n\n\nbokeh.io.show(fig)", "Methods\nNow we can define the statistical model. The \"current sea-level rise\" is defined by the following formula. Please note that the selected epoch of 1970 is arbitrary. \n$\nH(t) = a + b_{trend}(t-1970) + b_u\\cos(2\\pi\\frac{t - 1970}{18.613}) + b_v\\sin(2\\pi\\frac{t - 1970}{18.613})\n$\nThe terms are refered to as Constant ($a$), Trend ($b_{trend}$), Nodal U ($b_u$) and Nodal V ($b_v$). \nAlternative models are used to detect if sea-level rise is increasing. These models include the broken linear model, defined by a possible change in trend starting at 1993. This timespan is the start of the \"satellite era\" (start of TOPEX/Poseidon measurements), it is also often referred to as the start of acceleration because the satellite measurements tend to show a higher rate of sea level than the \"tide-gauge era\" (1900-2000). If this model fits better than the linear model, one could say that there is a \"increase in sea-level rise\". \n$\nH(t) = a + b_{trend}(t-1970) + b_{broken}(t > 1993)*(t-1993) + b_{u}\\cos(2\\pi\\frac{t - 1970}{18.613}) + b_{v}\\sin(2\\pi\\frac{t - 1970}{18.613})\n$\nAnother way to look at increased sea-level rise is to look at sea-level acceleration. To detect sea-level acceleration one can use a quadratic model. \n$\nH(t) = a + b_{trend}(t-1970) + b_{quadratic}(t - 1970)*(t-1970) + b_{u}\\cos(2\\pi\\frac{t - 1970}{18.613}) + b_{v}\\sin(2\\pi\\frac{t - 1970}{18.613})\n$", "# define the statistical model\ny = mean_df['height']\nX = np.c_[\n mean_df['year']-1970, \n np.cos(2*np.pi*(mean_df['year']-1970)/18.613),\n np.sin(2*np.pi*(mean_df['year']-1970)/18.613)\n]\nX = sm.add_constant(X)\nmodel = sm.OLS(y, X)\nfit = model.fit()\n\nfit.summary(yname='Sea-surface height', xname=['Constant', 'Trend', 'Nodal U', 'Nodal V'])\n# things to check:\n# Durbin Watson should be >1 for no worries, >2 for no autocorrelation\n# JB should be non-significant for normal residuals\n# abs(x2.t) + abs(x3.t) should be > 3, otherwise adding nodal is not useful\n\nfig = bokeh.plotting.figure(x_range=(1860, 2020), plot_width=900, plot_height=400)\nfor color, (id_, station) in zip(colors, selected_stations.iterrows()):\n data = station['data']\n fig.circle(data.year, data.height, color=color, legend=station['name'], alpha=0.8)\nfig.circle(mean_df.year, mean_df.height, line_width=3, legend='Mean', color='black', alpha=0.5)\nfig.line(mean_df.year, fit.predict(), line_width=3, legend='Current')\nfig.legend.location = \"bottom_right\"\nfig.yaxis.axis_label = 'waterlevel [mm] above N.A.P.'\nfig.xaxis.axis_label = 'year'\nbokeh.io.show(fig)", "Is there a sea-level acceleration?\nThe following section computes two common models to detect sea-level acceleration. The broken linear model expects that sea level has been rising faster since 1990. The quadratic model assumes that the sea-level is accelerating continuously. Both models are compared to the linear model. 
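The comparison below relies on the AIC (Akaike Information Criterion), where a lower value indicates a better trade-off between goodness of fit and model complexity. A minimal sketch of that comparison, assuming the fitted result objects fit, fit_broken_linear and fit_quadratic from the surrounding cells:\n\n```python\n# Hypothetical summary once the three statsmodels fits exist; the smallest AIC wins.\naics = {'linear': fit.aic, 'broken linear': fit_broken_linear.aic, 'quadratic': fit_quadratic.aic}\nprint(sorted(aics.items(), key=lambda item: item[1]))\n```\n\n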
The extra terms are tested for significance and the AIC is computed to see which model is \"better\".", "# define the statistical model\ny = mean_df['height']\nX = np.c_[\n mean_df['year']-1970, \n (mean_df['year'] > 1993) * (mean_df['year'] - 1993),\n np.cos(2*np.pi*(mean_df['year']-1970)/18.613),\n np.sin(2*np.pi*(mean_df['year']-1970)/18.613)\n]\nX = sm.add_constant(X)\nmodel_broken_linear = sm.OLS(y, X)\nfit_broken_linear = model_broken_linear.fit()\n\n# define the statistical model\ny = mean_df['height']\nX = np.c_[\n mean_df['year']-1970, \n (mean_df['year'] - 1970) * (mean_df['year'] - 1970),\n np.cos(2*np.pi*(mean_df['year']-1970)/18.613),\n np.sin(2*np.pi*(mean_df['year']-1970)/18.613)\n]\nX = sm.add_constant(X)\nmodel_quadratic = sm.OLS(y, X)\nfit_quadratic = model_quadratic.fit()\n\nfit_broken_linear.summary(yname='Sea-surface height', xname=['Constant', 'Trend', 'Trend(year > 1990)', 'Nodal U', 'Nodal V'])\n\n\nfit_quadratic.summary(yname='Sea-surface height', xname=['Constant', 'Trend', 'Trend**2', 'Nodal U', 'Nodal V'])\n\n\nfig = bokeh.plotting.figure(x_range=(1860, 2020), plot_width=900, plot_height=400)\nfor color, (id_, station) in zip(colors, selected_stations.iterrows()):\n data = station['data']\n fig.circle(data.year, data.height, color=color, legend=station['name'], alpha=0.8)\nfig.circle(mean_df.year, mean_df.height, line_width=3, legend='Mean', color='black', alpha=0.5)\nfig.line(mean_df.year, fit.predict(), line_width=3, legend='Current')\nfig.line(mean_df.year, fit_broken_linear.predict(), line_width=3, color='#33bb33', legend='Broken')\nfig.line(mean_df.year, fit_quadratic.predict(), line_width=3, color='#3333bb', legend='Quadratic')\n\nfig.legend.location = \"top_left\"\nfig.yaxis.axis_label = 'waterlevel [mm] above N.A.P.'\nfig.xaxis.axis_label = 'year'\nbokeh.io.show(fig)", "Conclusions\nBelow are some statements that depend on the output calculated above.", "msg = '''The current average waterlevel above NAP (in mm), \nbased on the 6 main tide gauges for the year {year} is {height:.1f} cm.\nThe current sea-level rise is {rate:.0f} cm/century'''\nprint(msg.format(year=mean_df['year'].iloc[-1], height=fit.predict()[-1]/10.0, rate=fit.params.x1*100.0/10))\n\nif (fit.aic < fit_broken_linear.aic):\n print('The linear model is a higher quality model (smaller AIC) than the broken linear model.')\nelse:\n print('The broken linear model is a higher quality model (smaller AIC) than the linear model.')\nif (fit_broken_linear.pvalues['x2'] < 0.05):\n print('The trend break is bigger than we would have expected under the assumption that there was no trend break.')\nelse:\n print('Under the assumption that there is no trend break, we would have expected a trend break as big as we have seen.')\n\nif (fit.aic < fit_quadratic.aic):\n print('The linear model is a higher quality model (smaller AIC) than the quadratic model.')\nelse:\n print('The quadratic model is a higher quality model (smaller AIC) than the linear model.')\nif (fit_quadratic.pvalues['x2'] < 0.05):\n print('The quadratic term is bigger than we would have expected under the assumption that there was no quadraticness.')\nelse:\n print('Under the assumption that there is no quadraticness, we would have expected a quadratic term as big as we have seen.')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
eugeniopacceli/ComputerVision
quiz1/Quiz1-Calibration.ipynb
mit
[ "Quiz 1: Calibration - Intrinsics and Extrinsics Parameters\nStudents:\n\nEugênio Pacceli\nRenato Oliveira\nBrayan Acevedo\n\nWe implemented the algorithm presented in the chapter 6 of the book \"Introductory Techniques For 3D Computer Vision\", written by Trucco and Verri.\nGiven the images of the calibration board, and the dimensions of each of it's squares (dx=23.5mm and dy=23.5mm), the first step of the algorithm is to compute the coordinates of each square corner, having 6 of those on the X axis, and 8 in the Y axis.\nThese corners have their coordinates [X, Y, Z] representing their positions on the real world. Considering the board represents a single plane and only the camera moves, we have Z = 0 for all those corners.\n\nInitialize program\n* Read each image file, using OpenCv.\n* Define the board parameters and it's representation as a matrix.", "import os\nimport numpy as np\nimport cv2\n%matplotlib inline\nfrom matplotlib import pyplot as plt\n\nREALSQUARE = 23.500 # Size of a square\nBOARDDIM = (6,8) # Dimensions of the given board\nNUMIMG = 9 # Number of images to open\nimagesList = list()\n\n# Opens each image and adds to the image list\nPATH = \"quiz1-data/\"\nfor fileName in os.listdir(PATH):\n imagesList.append(cv2.imread(PATH + fileName, 0))\n\n# Computes the real world coordinates of each beginning of a square in the board \nrealSquares = []\nfor i in range(BOARDDIM[1]):\n for j in range(BOARDDIM[0]):\n realSquares.append([j*REALSQUARE, i*REALSQUARE, 0.000])\n\n# Converts to numpy array\nrealSquares = np.array(realSquares, dtype=np.float32)\n", "Detect the corners on the real images and compute A matrix\nTo detect the corners of each square in each image, we used the function findChessboardCorners(), from OpenCv.\nThis function returns every corner found in an image, given the board dimensions in the real world (6,8).\nSo, for every image, we executed this function and associated each corner [x,y], found on each image, with it's equivalent in the real world coordinates [X,Y,Z].\nThen we calculated all the rows of the A matrix, given the association between real world coordinates and image plane coordinates for every corner.", "matA = list()\n\nfor item in range(NUMIMG):\n img = imagesList[item]\n _, boardCorners = cv2.findChessboardCorners(img, BOARDDIM, None)\n boardCorners = boardCorners.reshape((BOARDDIM[0] * BOARDDIM[1], 2))\n for k in range(48):\n x, y = boardCorners[k, :]\n X, Y, Z = realSquares[k, :]\n matA.append([x*X, x*Y, x*Z, x, -y*X, -y*Y, y*Z, -y])", "Now we must compute the parameters of the rotation matrix R and translation vector T, given the results of the SVD (singular values decomposition) of matrix A (remember this matrix was generated, in the loop above, using the product of each square corner real coordinates by it's image plane coordinates, plus a column with -y in the end of each row of A).\nFirst, lets obtain the v vector:\n* v[1] = r[2,1]\n* v[2] = r[2,2]\n* v[3] = r[2,3]\n* v[4] = Ty\n* v[5] = a*r[1,1]\n* v[6] = a*r[1,2]\n* v[7] = a*r[1,3]\n* v[8] = a*Tx\nr[x,y] are the elements of R matrix, Ty and Tx are the elements of T vector, 'a' is the ration between the numbers of pixels on a image horizontal line by vertical line (aspect ratio).\nTo obtain the v vector of the equation Av=0, we did the singular values decomposition using the function numpy.linalg.svd(), which returns U, D, V.\nThe vector-solution v is obtained by extracting the column of V corresponding to the column of D that contains the minimal value in the diagonal.", "matA = 
np.array(matA, dtype=np.float32)\n\nU, D, V = np.linalg.svd(matA, full_matrices=True)\n\n# The column of V corresponding to the minimal value in the diagonal of D\n# In the given sample, D always contains a 0 in the 7th columny\n# If we pick another value, v is generated with null values\nvecV = V[6,:]\n\nv1, v2, v3, v4, v5, v6, v7, v8 = vecV\n", "Scale factor = sqrt(r[2,1]^2 + r[2,2]^2 + r[2,3]^2)", "# Compute the scale factor given the vector v\ngamma = np.sqrt(v1**2 + v2**2 + v3**2)", "Aspect ratio = sqrt(v[5]^2 + v[6]^2 + v[7]^2) / Scale factor", "# Compute the aspect ratio (alpha)\nalpha = np.sqrt(v5**2 + v6**2 + v7**2) / gamma", "Extraction of rotation matrix R and translation vector T given the elements of v vector:", "# First row of R matrix\nr11, r12, r13 = [v5 / alpha, v6 / alpha, v7 / alpha]\n\n# Second row of R matrix\nr21, r22, r23 = v1/gamma, v2/gamma, v3/gamma\n\n# Third row of R matrix, computed by the cross product of rows 1 and 2\nr31, r32, r33 = np.cross([r11, r12, r13], [r21, r22, r23])\n\n# Obtain the elements of the translation vector\nTx, Ty = [v8/alpha, v4]", "Determinate the signal of gamma, to detect a possible signal inversion of the first two rows of R matrix.\nThen, we compute the parameters Tz and fx, creating another matrix A and a vector B, and solving the equation system using the least squares technique, made available by the function np.linalg.lstsq(matA,vecB).", "# If this product is bigger than 0, invert the signal on R[1,:] and R[2,:]\nif x*(r11*X + r12*Y + r13*Z + Tx) > 0:\n r11 = -r11\n r12 = -r12\n r13 = -r13\n r21 = -r21\n r22 = -r22\n r23 = -r23\n Tx = -Tx\n Ty = -Ty\n\ndel matA\nmatA = list()\nvecB = list()\n\n# Generate new matrix A and vector B\nfor item in range(NUMIMG):\n _, boardCorners = cv2.findChessboardCorners(imagesList[item], BOARDDIM, None)\n boardCorners = boardCorners.reshape((BOARDDIM[0] * BOARDDIM[1], 2))\n for k in range(48):\n x, y = boardCorners[k, :]\n X, Y, Z = realSquares[k]\n matA.append([x, (r11*X + r12*Y + r13*Z + Tx)])\n vecB.append([-x*(r31*X + r32*Y + r33*Z)])\n\nmatA = np.array(matA)\nvecB = np.array(vecB)\n# Solve by least squares the system Ax = B\nvecSol,_, _, _ = np.linalg.lstsq(matA,vecB)\n\n# Obtain Tz and fx\nTz, fx = vecSol\n\n# Compute fy\nfy = fx / alpha\n\n# Matrix R and vector T representation in proper numpy objects\nmatR = np.array([[r11, r12, r13], [r21, r22, r23], [r31, r32, r33]])\nvecT = np.array([[Tx], [Ty], [Tz]])", "Prints our results", "print(\"Matriz R \\n {}\".format(matR))\nprint(\"\\nVetor T\\n{}\".format(vecT))\nprint(\"fx = {}\".format(fx))\nprint(\"fy ={}\".format(fy))\nprint(\"alpha = {}\".format(alpha))\nprint(\"gamma = {}\".format(gamma))", "Results given by the Toolbox using Matlab\n% Intrinsic and Extrinsic Camera Parameters\n%\n% This script file can be directly executed under Matlab to recover the camera intrinsic and extrinsic parameters.\n% IMPORTANT: This file contains neither the structure of the calibration objects nor the image coordinates of the calibration points.\n% All those complementary variables are saved in the complete matlab data file Calib_Results.mat.\n% For more information regarding the calibration model visit http://www.vision.caltech.edu/bouguetj/calib_doc/\nIntrinsic Camera Parameters\n%-- Focal length:\nfc = [ 1319.360839018184800 ; 1325.862886727936900 ];\n%-- Principal point:\ncc = [ 801.687461000126630 ; 398.685724393203370 ];\n%-- Skew coefficient:\nalpha_c = 0.000000000000000;\n%-- Distortion coefficients:\nkc = [ 0.094052171503048 ; -0.196804510414059 ; 
-0.009826260896323 ; 0.003042938250442 ; 0.000000000000000 ];\n%-- Focal length uncertainty:\nfc_error = [ 7.614776975212465 ; 7.882743402506637 ];\n%-- Principal point uncertainty:\ncc_error = [ 10.631216021613728 ; 10.799630586885627 ];\n%-- Skew coefficient uncertainty:\nalpha_c_error = 0.000000000000000;\n%-- Distortion coefficients uncertainty:\nkc_error = [ 0.022911520624474 ; 0.089089522533125 ; 0.002827995432640 ; 0.003450888593813 ; 0.000000000000000 ];\n%-- Image size:\nnx = 1600;\nny = 904;\n%-- Various other variables (may be ignored if you do not use the Matlab Calibration Toolbox):\n%-- Those variables are used to control which intrinsic parameters should be optimized\nn_ima = 9; % Number of calibration images\nest_fc = [ 1 ; 1 ]; % Estimation indicator of the two focal variables\nest_aspect_ratio = 1; % Estimation indicator of the aspect ratio fc(2)/fc(1)\ncenter_optim = 1; % Estimation indicator of the principal point\nest_alpha = 0; % Estimation indicator of the skew coefficient\nest_dist = [ 1 ; 1 ; 1 ; 1 ; 0 ]; % Estimation indicator of the distortion coefficients", "img1w = cv2.imread('extrin_param.png', cv2.IMREAD_COLOR)\nimg_rgb = cv2.cvtColor(img1w, cv2.COLOR_BGR2RGB)\nplt.figure(1)\nplt.imshow(img_rgb)\n\nimg1w = cv2.imread('extrin_param1.png', cv2.IMREAD_COLOR)\nimg_rgb = cv2.cvtColor(img1w, cv2.COLOR_BGR2RGB)\nplt.figure(2)\nplt.imshow(img_rgb)\n", "Extrinsic Camera Parameters\n%-- The rotation (omc_kk) and the translation (Tc_kk) vectors for every calibration image and their uncertainties\n%-- Image #1:\nomc_1 = [ 1.984622e+00 ; 1.845352e-01 ; 2.870369e-01 ];\nTc_1 = [ -1.574390e+02 ; 5.101294e+01 ; 3.756956e+02 ];\nomc_error_1 = [ 8.027448e-03 ; 4.932236e-03 ; 8.087287e-03 ];\nTc_error_1 = [ 3.093133e+00 ; 3.203087e+00 ; 2.546667e+00 ];\n%-- Image #2:\nomc_2 = [ 2.508173e+00 ; -2.174940e-01 ; 2.164566e-01 ];\nTc_2 = [ -1.390132e+02 ; 6.165338e+01 ; 3.904267e+02 ];\nomc_error_2 = [ 8.521395e-03 ; 3.432194e-03 ; 1.065160e-02 ];\nTc_error_2 = [ 3.169612e+00 ; 3.282157e+00 ; 2.471550e+00 ];\n%-- Image #3:\nomc_3 = [ 2.527412e+00 ; -1.619762e-01 ; 2.726591e-01 ];\nTc_3 = [ -1.320341e+02 ; 3.732533e+01 ; 3.677618e+02 ];\nomc_error_3 = [ 8.562994e-03 ; 3.575236e-03 ; 1.069876e-02 ];\nTc_error_3 = [ 2.971579e+00 ; 3.088936e+00 ; 2.324621e+00 ];\n%-- Image #4:\nomc_4 = [ 2.403292e+00 ; 2.859610e-01 ; 3.590933e-01 ];\nTc_4 = [ -1.307800e+02 ; 6.399676e+01 ; 3.768658e+02 ];\nomc_error_4 = [ 8.352029e-03 ; 3.534587e-03 ; 9.799631e-03 ];\nTc_error_4 = [ 3.104473e+00 ; 3.162434e+00 ; 2.366463e+00 ];\n%-- Image #5:\nomc_5 = [ 1.950385e+00 ; 8.878343e-02 ; -3.789958e-02 ];\nTc_5 = [ -8.674013e+01 ; 7.116935e+01 ; 3.170839e+02 ];\nomc_error_5 = [ 7.936524e-03 ; 4.988629e-03 ; 8.027915e-03 ];\nTc_error_5 = [ 2.598114e+00 ; 2.646435e+00 ; 2.018596e+00 ];\n%-- Image #6:\nomc_6 = [ 2.396903e+00 ; -5.169785e-01 ; 4.419722e-02 ];\nTc_6 = [ -8.462341e+01 ; 5.978167e+01 ; 3.749603e+02 ];\nomc_error_6 = [ 8.379910e-03 ; 3.952135e-03 ; 1.001740e-02 ];\nTc_error_6 = [ 3.031554e+00 ; 3.096359e+00 ; 2.264016e+00 ];\n%-- Image #7:\nomc_7 = [ 2.472130e+00 ; -1.340944e+00 ; -3.767778e-01 ];\nTc_7 = [ -7.211444e+01 ; 1.104529e+02 ; 3.959351e+02 ];\nomc_error_7 = [ 8.504043e-03 ; 3.186557e-03 ; 1.220480e-02 ];\nTc_error_7 = [ 3.256782e+00 ; 3.260764e+00 ; 2.366755e+00 ];\n%-- Image #8:\nomc_8 = [ -2.244006e+00 ; 2.066472e+00 ; -2.738894e-01 ];\nTc_8 = [ 5.965238e+01 ; 8.676875e+01 ; 3.136598e+02 ];\nomc_error_8 = [ 6.038871e-03 ; 5.565660e-03 ; 1.031145e-02 ];\nTc_error_8 = [ 2.556427e+00 ; 
2.563756e+00 ; 1.969765e+00 ];\n%-- Image #9:\nomc_9 = [ -2.292830e+00 ; 1.862245e+00 ; -4.464243e-02 ];\nTc_9 = [ 2.162723e+01 ; 7.229592e+01 ; 3.928030e+02 ];\nomc_error_9 = [ 6.143239e-03 ; 5.905689e-03 ; 1.211607e-02 ];\nTc_error_9 = [ 3.168044e+00 ; 3.163959e+00 ; 2.144994e+00 ];", "img1w = cv2.imread('corner_1.png', cv2.IMREAD_COLOR)\nimg_rgb = cv2.cvtColor(img1w, cv2.COLOR_BGR2RGB)\nplt.figure(1)\nplt.imshow(img_rgb)\n\nimg1w = cv2.imread('corner_2.png', cv2.IMREAD_COLOR)\nimg_rgb = cv2.cvtColor(img1w, cv2.COLOR_BGR2RGB)\nplt.figure(2)\nplt.imshow(img_rgb)\n\nimg1w = cv2.imread('corner_3.png', cv2.IMREAD_COLOR)\nimg_rgb = cv2.cvtColor(img1w, cv2.COLOR_BGR2RGB)\nplt.figure(3)\nplt.imshow(img_rgb)\n\nimg1w = cv2.imread('corner_4.png', cv2.IMREAD_COLOR)\nimg_rgb = cv2.cvtColor(img1w, cv2.COLOR_BGR2RGB)\nplt.figure(4)\nplt.imshow(img_rgb)\nimg1w = cv2.imread('corner_5.png', cv2.IMREAD_COLOR)\nimg_rgb = cv2.cvtColor(img1w, cv2.COLOR_BGR2RGB)\nplt.figure(5)\nplt.imshow(img_rgb)\n\nimg1w = cv2.imread('corner_6.png', cv2.IMREAD_COLOR)\nimg_rgb = cv2.cvtColor(img1w, cv2.COLOR_BGR2RGB)\nplt.figure(6)\nplt.imshow(img_rgb)\nimg1w = cv2.imread('corner_7.png', cv2.IMREAD_COLOR)\nimg_rgb = cv2.cvtColor(img1w, cv2.COLOR_BGR2RGB)\nplt.figure(7)\nplt.imshow(img_rgb)\n\nimg1w = cv2.imread('corner_8.png', cv2.IMREAD_COLOR)\nimg_rgb = cv2.cvtColor(img1w, cv2.COLOR_BGR2RGB)\nplt.figure(8)\nplt.imshow(img_rgb)\n\nimg1w = cv2.imread('corner_9.png', cv2.IMREAD_COLOR)\nimg_rgb = cv2.cvtColor(img1w, cv2.COLOR_BGR2RGB)\nplt.figure(9)\nplt.imshow(img_rgb)", "Sources:\n\nCamera calibration with OpenCv\nIntroductory techniques for 3-D computer vision - Emanuele Trucco & Alessandro Verri, Capítulo 6 \"Camera Calibration\"" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mne-tools/mne-tools.github.io
0.21/_downloads/2fc30e4810d35d643811cc11759b3b9a/plot_resample.ipynb
bsd-3-clause
[ "%matplotlib inline", "Resampling data\nWhen performing experiments where timing is critical, a signal with a high\nsampling rate is desired. However, having a signal with a much higher sampling\nrate than is necessary needlessly consumes memory and slows down computations\noperating on the data.\nThis example downsamples from 600 Hz to 100 Hz. This achieves a 6-fold\nreduction in data size, at the cost of an equal loss of temporal resolution.", "# Authors: Marijn van Vliet <[email protected]>\n#\n# License: BSD (3-clause)\n\nfrom matplotlib import pyplot as plt\n\nimport mne\nfrom mne.datasets import sample", "Setting up data paths and loading raw data (skip some data for speed)", "data_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'\nraw = mne.io.read_raw_fif(raw_fname).crop(120, 240).load_data()", "Since downsampling reduces the timing precision of events, we recommend\nfirst extracting epochs and downsampling the Epochs object:", "events = mne.find_events(raw)\nepochs = mne.Epochs(raw, events, event_id=2, tmin=-0.1, tmax=0.8, preload=True)\n\n# Downsample to 100 Hz\nprint('Original sampling rate:', epochs.info['sfreq'], 'Hz')\nepochs_resampled = epochs.copy().resample(100, npad='auto')\nprint('New sampling rate:', epochs_resampled.info['sfreq'], 'Hz')\n\n# Plot a piece of data to see the effects of downsampling\nplt.figure(figsize=(7, 3))\n\nn_samples_to_plot = int(0.5 * epochs.info['sfreq']) # plot 0.5 seconds of data\nplt.plot(epochs.times[:n_samples_to_plot],\n epochs.get_data()[0, 0, :n_samples_to_plot], color='black')\n\nn_samples_to_plot = int(0.5 * epochs_resampled.info['sfreq'])\nplt.plot(epochs_resampled.times[:n_samples_to_plot],\n epochs_resampled.get_data()[0, 0, :n_samples_to_plot],\n '-o', color='red')\n\nplt.xlabel('time (s)')\nplt.legend(['original', 'downsampled'], loc='best')\nplt.title('Effect of downsampling')\nmne.viz.tight_layout()", "When resampling epochs is unwanted or impossible, for example when the data\ndoesn't fit into memory or your analysis pipeline doesn't involve epochs at\nall, the alternative approach is to resample the continuous data. This\ncan only be done on loaded or pre-loaded data.", "# Resample to 300 Hz\nraw_resampled_300 = raw.copy().resample(300, npad='auto')", "Because resampling also affects the stim channels, some trigger onsets might\nbe lost in this case. While MNE attempts to downsample the stim channels in\nan intelligent manner to avoid this, the recommended approach is to find\nevents on the original data before downsampling.", "print('Number of events before resampling:', len(mne.find_events(raw)))\n\n# Resample to 100 Hz (suppress the warning that would be emitted)\nraw_resampled_100 = raw.copy().resample(100, npad='auto', verbose='error')\nprint('Number of events after resampling:',\n len(mne.find_events(raw_resampled_100)))\n\n# To avoid losing events, jointly resample the data and event matrix\nevents = mne.find_events(raw)\nraw_resampled, events_resampled = raw.copy().resample(\n 100, npad='auto', events=events)\nprint('Number of events after resampling:', len(events_resampled))" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
dietmarw/EK5312_ElectricalMachines
Chapman/Ch1-Problem_1-19.ipynb
unlicense
[ "Excercises Electric Machinery Fundamentals\nChapter 1\nProblem 1-19", "%pylab notebook\n%precision %.4g", "Description\nFigure P1-14 shows a simple single-phase ac power system with three loads. The voltage source is\n$\\vec{V} = 240\\,V\\angle 0^\\circ$, impedances of these three loads are:\n$$\\vec{Z}_1 = 10\\,\\Omega\\angle 30^\\circ \\quad \\vec{Z}_2 = 10\\,\\Omega\\angle 45^\\circ \\quad \\vec{Z}_3 = 10\\,\\Omega\\angle -90^\\circ $$\n<img src=\"figs/FigC_P1-14.jpg\" width=\"80%\">", "V = 240 # [V]\nZ1 = 10.0 * exp(1j* 30/180*pi) \nZ2 = 10.0 * exp(1j* 45/180*pi) \nZ3 = 10.0 * exp(1j*-90/180*pi) ", "Answer the following questions about this power system.\n(a)\n\nAssume that the switch shown in the figure is initially open, and calculate the current I , the power factor, and the real, reactive, and apparent power being supplied by the source.\n\n(b)\n\nHow much real, reactive, and apparent power is being consumed by each load with the switch open?\n\n(c)\n\nAssume that the switch shown in the figure is now closed, and calculate the current I , the power factor, and the real, reactive, and apparent power being supplied by the source.\n\n(d)\n\nHow much real, reactive, and apparent power is being consumed by each load with the switch closed?\n\n(e)\n\nWhat happened to the current flowing from the source when the switch closed? Why?\n\nSOLUTION\n(a)\nWith the switch open, only loads 1 and 2 are connected to the source. The current $\\vec{I}_1$ in Load 1 and the current $\\vec{I}_2$ in Load 2 are:", "I1 = V/Z1\nI2 = V/Z2\nI1_angle = arctan(I1.imag/I1.real)\nI2_angle = arctan(I2.imag/I2.real)\nprint('''I1 = {:.1f} A ∠{:.1f}°\nI2 = {:.1f} A ∠{:.1f}°'''.format(\n abs(I1), I1_angle/pi*180,\n abs(I2), I2_angle/pi*180))", "Therefore the total current from the source is $\\vec{I} = \\vec{I}_1 + \\vec{I}_2$:", "I = I1 + I2\nI_angle = arctan(I.imag/I.real)\nprint('I = {:.1f} A ∠{:.1f}°'.format(\n abs(I), I_angle/pi*180))\nprint('==================') ", "The power factor supplied by the source is:", "PF = cos(-I_angle)\nPF", "lagging (because current laggs behind voltage). \nNote that the angle $\\theta$ used in the power factor and power calculations is the impedance angle, which is the negative of the current angle as long as voltage is at $0^\\circ$.\nThe real, reactive, and apparent power supplied by the source are\n$$S = VI^* \\quad P = VI\\cos\\theta = real(S) \\quad Q = VI\\sin\\theta = imag(S)$$", "So = V*conj(I) # I use index \"o\" for open switch\nSo", "Let's pretty-print that:", "print('''\nSo = {:>7.1f} VA\nPo = {:>7.1f} W\nQo = {:>7.1f} var\n================'''.format(abs(So), So.real, So.imag))", "(b)\nThe real, reactive, and apparent power consumed by Load 1 and by Load 2 respectively are:", "S1o = V*conj(I1)\nS1o\n\nS2o = V*conj(I2)\nS2o", "Let's pretty-print that:", "print('''\nS1o = {:>6.1f} VA\nP1o = {:>6.1f} W\nQ1o = {:>6.1f} var\n----------------\nS2o = {:>6.1f} VA\nP2o = {:>6.1f} W\nQ2o = {:>6.1f} var\n================'''.format(abs(S1o), S1o.real, S1o.imag,\n abs(S2o), S2o.real, S2o.imag))", "As expected, the real and reactive power supplied by the source are equal to the sum of the\nreal and reactive powers consumed by the loads.\n(c)\nWith the switch closed, all three loads are connected to the source. The current in Loads 1 and 2 is the same as before. 
The current $\\vec{I}_3$ in Load 3 is:", "I3 = V/Z3\nI3_angle = arctan(I3.imag/I3.real)\nprint('I3 = {:.1f} A ∠{:.1f}°'.format(abs(I3), I3_angle/pi*180)) ", "Therefore the total current from the source is $\\vec{I} = \\vec{I}_1 + \\vec{I}_2 + \\vec{I}_3$:", "I = I1 + I2 + I3\nI_angle = arctan(I.imag/I.real)\nprint('I = {:.1f} A ∠{:.1f}°'.format(abs(I), I_angle/pi*180))\nprint('=================') ", "The power factor supplied by the source is:", "PF = cos(-I_angle)\nPF", "lagging (because current laggs behind voltage). \nThe real, reactive, and apparent power supplied by the source are\n$$S = VI^* \\quad P = VI\\cos\\theta = real(S) \\quad Q = VI\\sin\\theta = imag(S)$$", "Sc = V*conj(I) # I use index \"c\" for closed switch\nSc", "Let's pretty-print that:", "print('''\nSc = {:.1f} VA\nPc = {:.1f} W\nQc = {:.1f} var\n==============='''.format(abs(Sc), Sc.real, Sc.imag))", "(d)\nThe real, reactive, and apparent power consumed by Load 1, Load 2 and by Load 3 respectively are:", "S1c = V*conj(I1)\nS1c\n\nS2c = V*conj(I2)\nS2c\n\nS3c = V*conj(I3)\nS3c\n\nprint('''\nS1c = {:>7.1f} VA\nP1c = {:>7.1f} W\nQ1c = {:>7.1f} var\n-----------------\nS2c = {:>7.1f} VA\nP2c = {:>7.1f} W\nQ2c = {:>7.1f} var\n-----------------\nS3c = {:>7.1f} VA\nP3c = {:>7.1f} W\nQ3c = {:>7.1f} var\n================='''.format(abs(S1c), S1c.real, S1c.imag,\n abs(S2c), S2c.real, S2c.imag,\n abs(S3c), S3c.real, S3c.imag))", "(e)\nThe current flowing decreased when the switch closed, because most of the reactive power being\nconsumed by Loads 1 and 2 is being supplied by Load 3. Since less reactive power has to be supplied by the source, the total current flow decreases." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
gaufung/PythonStandardLibrary
FileSystem/Path.ipynb
mit
[ "os.path\nWriting code to work with files on multiple platforms is easy using the functions included in the os.path module. Even programs not intended to be ported between platforms should use os.path for reliable filename parsing.\nParsing Path", "import os.path\n\nPATHS = [\n '/one/two/three',\n '/one/two/three/',\n '/',\n '.',\n '',\n]\n\nfor path in PATHS:\n print('{!r:>17} : {}'.format(path, os.path.split(path)))\n\nfor path in PATHS:\n print('{!r:>17}:{}'.format(path, os.path.basename(path)))\n\nfor path in PATHS:\n print('{!r:>17}:{}'.format(path, os.path.dirname(path)))\n\nimport os.path\n\nPATHS = [\n 'filename.txt',\n 'filename',\n '/path/to/filename.txt',\n '/',\n '',\n 'my-archive.tar.gz',\n 'no-extension.',\n]\n\nfor path in PATHS:\n print('{!r:>21} : {!r}'.format(path, os.path.splitext(path)))\n\nimport os.path\n\npaths = ['/one/two/three/four',\n '/one/two/threefold',\n '/one/two/three/',\n ]\nfor path in paths:\n print('PATH:', path)\n\nprint()\nprint('PREFIX:', os.path.commonprefix(paths))\n\nimport os.path\n\npaths = ['/one/two/three/four',\n '/one/two/threefold',\n '/one/two/three/',\n ]\nfor path in paths:\n print('PATH:', path)\n\nprint()\nprint('PREFIX:', os.path.commonpath(paths))", "Building Path\nIf any argument to join begins with os.sep, all of the previous arguments are discarded and the new one becomes the beginning of the return value.", "import os.path\n\nPATHS = [\n ('one', 'two', 'three'),\n ('/', 'one', 'two', 'three'),\n ('/one', '/two', '/three'),\n]\n\nfor parts in PATHS:\n print('{} : {!r}'.format(parts, os.path.join(*parts)))\n\nimport os.path\n\nfor user in ['', 'gaufung', 'nosuchuser']:\n lookup = '~' + user\n print('{!r:>15} : {!r}'.format(\n lookup, os.path.expanduser(lookup)))\n\nimport os.path\nimport os\n\nos.environ['MYVAR'] = 'VALUE'\n\nprint(os.path.expandvars('/path/to/$MYVAR'))", "Normal Path", "import os.path\n\nPATHS = [\n 'one//two//three',\n 'one/./two/./three',\n 'one/../alt/two/three',\n]\n\nfor path in PATHS:\n print('{!r:>22} : {!r}'.format(path, os.path.normpath(path)))\n\nimport os\nimport os.path\n\nos.chdir('/usr')\n\nPATHS = [\n '.',\n '..',\n './one/two/three',\n '../one/two/three',\n]\n\nfor path in PATHS:\n print('{!r:>21} : {!r}'.format(path, os.path.abspath(path)))", "File Time", "import os.path\nimport time\n\nprint('File :', '~/WorkSpace/PythonStandardLibrary/FileSystem/Path.ipynb')\nprint('Access time :', time.ctime(os.path.getatime('/Users/gaufung/WorkSpace/PythonStandardLibrary/FileSystem/Path.ipynb')))\nprint('Modified time:', time.ctime(os.path.getmtime('/Users/gaufung/WorkSpace/PythonStandardLibrary/FileSystem/Path.ipynb')))\nprint('Change time :', time.ctime(os.path.getctime('/Users/gaufung/WorkSpace/PythonStandardLibrary/FileSystem/Path.ipynb')))\nprint('Size :', os.path.getsize('/Users/gaufung/WorkSpace/PythonStandardLibrary/FileSystem/Path.ipynb'))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
besser82/shogun
doc/ipython-notebooks/multiclass/Tree/DecisionTrees.ipynb
bsd-3-clause
[ "Decision Trees\nBy Parijat Mazumdar (GitHub ID: mazumdarparijat)\nThis notebook illustrates the use of decision trees in Shogun for classification and regression. Various decision tree learning algorithms like ID3, C4.5, CART, CHAID have been discussed in detail using both intuitive toy datasets as well as real-world datasets.\nDecision Tree Basics\nDecision Trees are a non-parametric supervised learning method that can be used for both classification and regression. Decision trees essentially encode a set of if-then-else rules which can be used to predict target variable given data features. These if-then-else rules are formed using the training dataset with the aim to satisfy as many training data instances as possible. The formation of these rules (aka. decision tree) from training data is called decision tree learning. Various decision tree learning algorithms have been developed and they work best in different situations. An advantage of decision trees is that they can model any type of function for classification or regression which other techniques cannot. But a decision tree is highly prone to overfitting and bias towards training data. So, decision trees are used for very large datasets which are assumed to represent the ground truth well. Additionally, certain tree pruning algorithms are also used to tackle overfitting. \nID3 (Iterative Dichotomiser 3)\nID3 is a straightforward decision tree learning algorithm developed by Ross Quinlan. ID3 is applicable only in cases where the attributes (or features) defining data examples are categorical in nature and the data examples belong to pre-defined, clearly distinguishable (ie. well defined) classes. ID3 is an iterative greedy algorithm which starts with the root node and eventually builds the entire tree. At each node, the \"best\" attribute to classify data is chosen. The \"best\" attribute is chosen using the information gain metric. Once an attribute is chosen in a node, the data examples in the node are categorized into sub-groups based on the attribute values that they have. Basically, all data examples having the same attribute value are put together in the same sub-group. These sub-groups form the children of the present node and the algorithm is repeated for each of the newly formed children nodes. This goes on until all the data members of a node belong to the same class or all the attributes are exhausted. In the latter case, the class predicted may be erroneous and generally the mode of the classes appearing in the node is chosen as the predictive class. \nPseudocode for ID3 Algorithm\nExample using a Simple dataset\nIn this section, we create a simple example where we try to predict the usage of mobile phones by individuals based on their income, age, education and marital status. Each of the attributes have been categorized into 2 or 3 types. 
Let us create the training dataset and tabulate it first.", "import os\nSHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../../data')\n# training data\ntrain_income=['Low','Medium','Low','High','Low','High','Medium','Medium','High','Low','Medium',\n'Medium','High','Low','Medium']\n\ntrain_age = ['Old','Young','Old','Young','Old','Young','Young','Old','Old','Old','Young','Old',\n'Old','Old','Young']\n\ntrain_education = ['University','College','University','University','University','College','College',\n'High School','University','High School','College','High School','University','High School','College']\n\ntrain_marital = ['Married','Single','Married','Single','Married','Single','Married','Single','Single',\n'Married','Married','Single','Single','Married','Married']\n\ntrain_usage = ['Low','Medium','Low','High','Low','Medium','Medium','Low','High','Low','Medium','Low',\n'High','Low','Medium']\n\n# print data\nprint('Training Data Table : \\n')\nprint('Income \\t\\t Age \\t\\t Education \\t\\t Marital Status \\t Usage')\nfor i in range(len(train_income)):\n\tprint(train_income[i]+' \\t\\t '+train_age[i]+' \\t\\t '+train_education[i]+' \\t\\t '+train_marital[i]+' \\t\\t '+train_usage[i])\n", "We want to create a decision tree from the above training dataset. The first step for that is to encode the data into numeric values and bind them to Shogun's features and multiclass labels.", "from shogun import ID3ClassifierTree, features, MulticlassLabels\nfrom numpy import array, concatenate\n\n# encoding dictionary\nincome = {'Low' : 1.0, 'Medium' : 2.0, 'High' : 3.0}\nage = {'Young' : 1.0, 'Old' : 2.0}\neducation = {'High School' : 1.0, 'College' : 2.0, 'University' : 3.0}\nmarital_status = {'Married' : 1.0, 'Single' : 2.0}\nusage = {'Low' : 1.0, 'Medium' : 2.0, 'High' : 3.0}\n\n\n# encode training data\nfor i in range(len(train_income)):\n\ttrain_income[i] = income[train_income[i]]\n\ttrain_age[i] = age[train_age[i]]\n\ttrain_education[i] = education[train_education[i]]\n\ttrain_marital[i] = marital_status[train_marital[i]]\n\ttrain_usage[i] = usage[train_usage[i]]\n \n# form Shogun feature matrix\ntrain_data = array([train_income, train_age, train_education, train_marital])\ntrain_feats = features(train_data);\n\n# form Shogun multiclass labels\nlabels = MulticlassLabels(array(train_usage));", "Next, we learn our decision tree using the features and labels created.", "# create ID3ClassifierTree object\nid3 = ID3ClassifierTree()\n\n# set labels\nid3.put('labels', labels)\n\n# learn the tree from training features\nis_successful = id3.train(train_feats)", "Our decision tree is ready now and we want to use it to make some predictions over test data. So, let us create some test data examples first.", "# test data\ntest_income = ['Medium','Medium','Low','High','High']\ntest_age = ['Old','Young','Old','Young','Old']\ntest_education = ['University','College','High School','University','College']\ntest_marital = ['Married','Single','Married','Single','Married']\ntest_usage = ['Low','Medium','Low','High','High']\n\n# tabulate test data\nprint('Test Data Table : \\n')\nprint('Income \\t\\t Age \\t\\t Education \\t\\t Marital Status \\t Usage')\nfor i in range(len(test_income)):\n\tprint(test_income[i]+' \\t\\t '+test_age[i]+' \\t\\t '+test_education[i]+' \\t\\t '+test_marital[i]+' \\t\\t ?')\n", "Next, as with training data, we encode our test dataset and bind it to Shogun features. 
Then, we apply our decision tree to the test examples to obtain the predicted labels.", "# encode test data\nfor i in range(len(test_income)):\n\ttest_income[i] = income[test_income[i]]\n\ttest_age[i] = age[test_age[i]]\n\ttest_education[i] = education[test_education[i]]\n\ttest_marital[i] = marital_status[test_marital[i]]\n\n# bind to shogun features \ntest_data = array([test_income, test_age, test_education, test_marital])\ntest_feats = features(test_data)\n\n# apply decision tree classification\ntest_labels = id3.apply_multiclass(test_feats)", "Finally let us tabulate the results obtained and compare them with our intuitive predictions.", "output = test_labels.get_labels();\noutput_labels=[0]*len(output)\n\n# decode back test data for printing\nfor i in range(len(test_income)):\n\ttest_income[i]=income.keys()[income.values().index(test_income[i])]\n\ttest_age[i]=age.keys()[age.values().index(test_age[i])]\n\ttest_education[i]=education.keys()[education.values().index(test_education[i])]\n\ttest_marital[i]=marital_status.keys()[marital_status.values().index(test_marital[i])]\n\toutput_labels[i]=usage.keys()[usage.values().index(output[i])]\n\n# print output data\nprint('Final Test Data Table : \\n')\nprint('Income \\t Age \\t Education \\t Marital Status \\t Usage(predicted)')\nfor i in range(len(test_income)):\n\tprint(test_income[i]+' \\t '+test_age[i]+' \\t '+test_education[i]+' \\t '+test_marital[i]+' \\t\\t '+output_labels[i])", "So, do the predictions made by our decision tree match our inferences from training set? Yes! For example, from the training set we infer that the individual having low income has low usage and also all individuals going to college have medium usage. The decision tree predicts the same for both cases. \nExample using a real dataset\nWe choose the car evaluation dataset from the UCI Machine Learning Repository as our real-world dataset. The car.names file of the dataset enumerates the class categories as well as the non-class attributes. Each car categorized into one of 4 classes : unacc, acc, good, vgood. Each car is judged using 6 attributes : buying, maint, doors, persons, lug_boot, safety. Each of these attributes can take 3-4 values. 
Let us first make a dictionary to encode strings to numeric values using information from cars.names file.", "# class attribute\nevaluation = {'unacc' : 1.0, 'acc' : 2.0, 'good' : 3.0, 'vgood' : 4.0}\n\n# non-class attributes\nbuying = {'vhigh' : 1.0, 'high' : 2.0, 'med' : 3.0, 'low' : 4.0}\nmaint = {'vhigh' : 1.0, 'high' : 2.0, 'med' : 3.0, 'low' : 4.0}\ndoors = {'2' : 1.0, '3' : 2.0, '4' : 3.0, '5more' : 4.0}\npersons = {'2' : 1.0, '4' : 2.0, 'more' : 3.0}\nlug_boot = {'small' : 1.0, 'med' : 2.0, 'big' : 3.0}\nsafety = {'low' : 1.0, 'med' : 2.0, 'high' : 3.0}", "Next, let us read the file and form Shogun features and labels.", "f = open( os.path.join(SHOGUN_DATA_DIR, 'uci/car/car.data'), 'r')\n\nfeats = []\nlabels = []\n\n# read data from file and encode\nfor line in f:\n words = line.rstrip().split(',')\n words[0] = buying[words[0]]\n words[1] = maint[words[1]]\n words[2] = doors[words[2]]\n words[3] = persons[words[3]]\n words[4] = lug_boot[words[4]]\n words[5] = safety[words[5]]\n words[6] = evaluation[words[6]]\n feats.append(words[0:6])\n labels.append(words[6])\n\nf.close()", "From the entire dataset, let us choose some test vectors to form our test dataset.", "from numpy import random, delete\n\nfeats = array(feats)\nlabels = array(labels)\n\n# number of test vectors\nnum_test_vectors = 200;\n\ntest_indices = random.randint(feats.shape[0], size = num_test_vectors)\ntest_features = feats[test_indices]\ntest_labels = labels[test_indices]\n\n# remove test vectors from training set\nfeats = delete(feats,test_indices,0)\nlabels = delete(labels,test_indices,0)", "Next step is to train our decision tree using the training features and applying it to our test dataset to get predicted output classes.", "# shogun test features and labels\ntest_feats = features(test_features.T)\ntest_labels = MulticlassLabels(test_labels)\n\n# method for id3 training and\ndef ID3_routine(feats, labels):\n\n # Shogun train features and labels\n train_feats = features(feats.T)\n train_lab = MulticlassLabels(labels)\n\n # create ID3ClassifierTree object\n id3 = ID3ClassifierTree()\n\n # set labels\n id3.put('labels', train_lab)\n\n # learn the tree from training features\n id3.train(train_feats)\n\n # apply to test dataset\n output = id3.apply_multiclass(test_feats)\n \n return output\n\noutput = ID3_routine(feats, labels)", "Finally, let us compare our predicted labels with test labels to find out the percentage error of our classification model.", "from shogun import MulticlassAccuracy\n\n# Shogun object for calculating multiclass accuracy\naccuracy = MulticlassAccuracy()\nprint('Accuracy : ' + str(accuracy.evaluate(output, test_labels)))", "We see that the accuracy is moderately high. Thus our decision tree can evaluate any car given its features with a high success rate. 
As a final exercise, let us examine the effect of training dataset size on the accuracy of decision tree.", "# list of error rates for all training dataset sizes\nerror_rate = []\n\n# number of error rate readings taken for each value of dataset size\nnum_repetitions = 3\n\n# loop over training dataset size\nfor i in range(500,1600,200):\n indices = random.randint(feats.shape[0], size = i)\n train_features = feats[indices]\n train_labels = labels[indices]\n \n average_error = 0\n for i in range(num_repetitions):\n output = ID3_routine(train_features, train_labels)\n average_error = average_error + (1-accuracy.evaluate(output, test_labels))\n \n error_rate.append(average_error/num_repetitions)\n\n# plot the error rates \nimport matplotlib.pyplot as pyplot\n% matplotlib inline\nfrom scipy.interpolate import interp1d\nfrom numpy import linspace, arange\n\nfig,axis = pyplot.subplots(1,1)\nx = arange(500,1600,200)\nf = interp1d(x, error_rate)\n\nxnew = linspace(500,1500,100)\npyplot.plot(x,error_rate,'o',xnew,f(xnew),'-')\npyplot.xlim([400,1600])\npyplot.xlabel('training dataset size')\npyplot.ylabel('Classification Error')\npyplot.title('Decision Tree Performance')\npyplot.show()", "NOTE : The above code snippet takes about half a minute to execute. Please wait patiently.\nFrom the above plot, we see that error rate decreases steadily as we increase the training dataset size. Although in this case, the decrease in error rate is not very significant, in many datasets this decrease in error rate can be substantial.\nC4.5\nThe C4.5 algorithm is essentially an extension of the ID3 algorithm for decision tree learning. It has the additional capability of handling continuous attributes and attributes with missing values. The tree growing process in case of C4.5 is same as that of ID3 i.e. finding the best split at each node using the information gain metric. But in case of continuous attribute, the C4.5 algorithm has to perform the additional step of converting it to a two-value categorical attribute by splitting about a suitable threshold. This threshold is chosen in a way such that the resultant split produces maximum information gain. Let us start exploring Shogun's C4.5 algorithm API with a toy example. \nExample using toy dataset\nLet us consider a 3-class classification using 2 attributes. One of the attributes (say attribute X) is a 2-class categorical attribute depicted by values 1 and 2. The other attribute (say attribute Y) is a continuous attribute having values between 1 and 9. The simple rules of classification are as follows : if X=1 and Y $\\epsilon$ [1,5), data point belongs to class 1, if X=1 and Y $\\epsilon$ [5,9), data point belongs to class 2 and if X=2, data point belongs to class 3. 
Let us realize the toy dataset by plotting it.", "import matplotlib.pyplot as plt\nfrom numpy import ones, zeros, random, concatenate\nfrom shogun import features, MulticlassLabels\n% matplotlib inline\n\ndef create_toy_classification_dataset(ncat,do_plot):\n # create attribute values and labels for class 1\n x = ones((1,ncat))\n y = 1+random.rand(1,ncat)*4\n lab = zeros(ncat)\n\n # add attribute values and labels for class 2\n x = concatenate((x,ones((1,ncat))),1)\n y = concatenate((y,5+random.rand(1,ncat)*4),1)\n lab = concatenate((lab,ones(ncat)))\n\n # add attribute values and labels for class 3\n x = concatenate((x,2*ones((1,ncat))),1)\n y = concatenate((y,1+random.rand(1,ncat)*8),1)\n lab = concatenate((lab,2*ones(ncat)))\n\n # create test data\n ntest = 20\n x_t = concatenate((ones((1,3*ntest/4)),2*ones((1,ntest/4))),1)\n y_t = 1+random.rand(1,ntest)*8\n\n if do_plot:\n # plot training data\n c = ['r','g','b']\n for i in range(3):\n plt.scatter(x[0,lab==i],y[0,lab==i],color=c[i],marker='x',s=50)\n\n # plot test data\n plt.scatter(x_t[0,:],y_t[0,:],color='k',s=10,alpha=0.8)\n\n plt.xlabel('attribute X')\n plt.ylabel('attribute Y')\n plt.show()\n\n # form training feature matrix\n train_feats = features(concatenate((x,y),0))\n\n # from training labels\n train_labels = MulticlassLabels(lab)\n\n # from test feature matrix\n test_feats = features(concatenate((x_t,y_t),0))\n \n return (train_feats,train_labels,test_feats);\n\ntrain_feats,train_labels,test_feats = create_toy_classification_dataset(20,True)", "In the above plot the training data points are are marked with different colours of crosses where each colour corresponds to a particular label. The test data points are marked by black circles. For us it is a trivial task to assign correct colours (i.e. labels) to the black points. Let us see how accurately C4.5 assigns colours to these test points.\nNow let us train a decision tree using the C4.5 algorithm. We need to create a Shogun C4.5 tree object and supply training features and training labels to it. We also need to specify which attribute is categorical and which is continuous. 
The attribute types can be specified using set_feature_types method through which all categorical attributes are set as True and continuous attributes as False.", "from numpy import array\nfrom shogun import C45ClassifierTree\n\n# steps in C4.5 Tree training bundled together in a python method\ndef train_tree(feats,types,labels):\n # C4.5 Tree object\n tree = C45ClassifierTree()\n # set labels\n tree.put('labels', labels)\n # supply attribute types\n tree.set_feature_types(types)\n # supply training matrix and train\n tree.train(feats)\n \n return tree\n\n# specify attribute types X is categorical hence True, Y is continuous hence False\nfeat_types = array([True,False])\n\n# get back trained tree\nC45Tree = train_tree(train_feats,feat_types,train_labels)", "Now that we have trained the decision tree, we can use it to classify our test vectors.", "def classify_data(tree,data):\n # get classification labels\n output = tree.apply_multiclass(data) \n # get classification certainty\n output_certainty=tree.get_real_vector('m_certainty')\n \n return output,output_certainty\n\nout_labels,out_certainty = classify_data(C45Tree,test_feats)", "Let us use the output labels to colour our test data points to qualitatively judge the performance of the decision tree.", "from numpy import int32\n\n# plot results\ndef plot_toy_classification_results(train_feats,train_labels,test_feats,test_labels):\n train = train_feats.get_real_matrix('feature_matrix')\n lab = train_labels.get_labels()\n test = test_feats.get_real_matrix('feature_matrix')\n out_labels = test_labels.get_labels()\n \n c = ['r','g','b']\n for i in range(out_labels.size):\n plt.scatter(test[0,i],test[1,i],color=c[int32(out_labels[i])],s=50)\n\n # plot training dataset for visual comparison\n for i in range(3):\n plt.scatter(train[0,lab==i],train[1,lab==i],color=c[i],marker='x',s=30,alpha=0.7)\n\n plt.show()\n \nplot_toy_classification_results(train_feats,train_labels,test_feats,out_labels)", "We see that the decision tree trained using the C4.5 algorithm works almost perfectly in this toy dataset. Now let us try this algorithm on a real world dataset. \nExample using a real dataset\nIn this section we will investigate that how accurately we can predict the species of an Iris flower using a C4.5 trained decision tree. In this example we will use petal length, petal width, sepal length and sepal width as our attributes to decide among 3 classes of Iris : Iris Setosa, Iris Versicolor and Iris Verginica. Let us start by suitably reading the dataset.", "import csv\nfrom numpy import array\n\n# dictionary to encode class names to class labels\nto_label = {'Iris-setosa' : 0.0, 'Iris-versicolor' : 1.0, 'Iris-virginica' : 2.0}\n\n# read csv file and separate out labels and features\nlab = []\nfeat = []\nwith open( os.path.join(SHOGUN_DATA_DIR, 'uci/iris/iris.data')) as csvfile:\n csvread = csv.reader(csvfile,delimiter=',')\n for row in csvread:\n feat.append([float(i) for i in row[0:4]])\n lab.append(to_label[row[4]])\n\nlab = array(lab)\nfeat = array(feat).T", "Because there is no separate test dataset, we first divide the given dataset into training and testing subsets.", "from numpy import int32, random\n\n# no.of vectors in test dataset\nntest = 25\n# no. 
of vectors in train dataset\nntrain = 150-ntest\n\n# randomize the order of vectors\nsubset = int32(random.permutation(150))\n\n# choose 1st ntrain from randomized set as training vectors\nfeats_train = feat[:,subset[0:ntrain]]\n# form training labels correspondingly\ntrain_labels = lab[subset[0:ntrain]]\n\n# form test features and labels (for accuracy evaluations)\nfeats_test = feat[:,subset[ntrain:ntrain+ntest]]\ntest_labels = lab[subset[ntrain:ntrain+ntest]]", "Before marching forward with applying C4.5, let us plot the data to get a better understanding. The given data points are 4-D and hence cannot be conveniently plotted. We need to reduce the number of dimensions to 2. This reduction can be achieved using any dimension reduction algorithm like PCA. However for the sake of brevity, let us just choose two highly correlated dimensions, petal width and petal length (see summary statistics), right away for plotting.", "import matplotlib.pyplot as plt\n% matplotlib inline\n\n# plot training features\nc = ['r', 'g', 'b']\nfor i in range(3):\n plt.scatter(feats_train[2,train_labels==i],feats_train[3,train_labels==i],color=c[i],marker='x')\n\n# plot test data points in black\nplt.scatter(feats_test[2,:],feats_test[3,:],color='k',marker='o')\n\nplt.show()", "First, let us create Shogun features and labels from the given data.", "from shogun import features, MulticlassLabels\n\n# training data\nfeats_train = features(feats_train)\ntrain_labels = MulticlassLabels(train_labels)\n\n# test data\nfeats_test = features(feats_test)\ntest_labels = MulticlassLabels(test_labels)", "We know for fact that decision trees tend to overfit. Hence pruning becomes a necessary step. In case of toy dataset, we skipped the pruning step because the dataset was simple and noise-free. But in case of a real dataset like the Iris dataset pruning cannot be skipped. 
So we have to partition the training dataset into the training subset and the validation subset.", "# randomize the order of vectors\nsubset = int32(random.permutation(ntrain))\n\nnvalidation = 45\n\n# form training subset and validation subset\ntrain_subset = subset[0:ntrain-nvalidation]\nvalidation_subset = subset[ntrain-nvalidation:ntrain]", "Now we train the decision tree first, then prune it and finally use it to get output labels for test vectors.", "# set attribute types - all continuous\nfeature_types = array([False, False, False, False])\n\n# remove validation subset before training the tree\nfeats_train.add_subset(train_subset)\ntrain_labels.add_subset(train_subset)\n\n# train tree\nC45Tree = train_tree(feats_train,feature_types,train_labels)\n\n# bring back validation subset\nfeats_train.remove_subset()\ntrain_labels.remove_subset()\n\n# remove data belonging to training subset\nfeats_train.add_subset(validation_subset)\ntrain_labels.add_subset(validation_subset)\n\n# prune the tree\nC45Tree.prune_tree(feats_train,train_labels)\n\n# bring back training subset\nfeats_train.remove_subset()\ntrain_labels.remove_subset()\n\n# get results\noutput, output_certainty = classify_data(C45Tree,feats_test)", "Let us calculate the accuracy of the classification made by our tree as well as plot the results for qualitative evaluation.", "from shogun import MulticlassAccuracy\n\n# Shogun object for calculating multiclass accuracy\naccuracy = MulticlassAccuracy()\nprint('Accuracy : ' + str(accuracy.evaluate(output, test_labels)))\n\n# convert MulticlassLabels object to labels vector\noutput = output.get_labels()\ntest_labels = test_labels.get_labels()\ntrain_labels = train_labels.get_labels()\n\n# convert features object to matrix\nfeats_test = feats_test.get_real_matrix('feature_matrix')\nfeats_train = feats_train.get_real_matrix('feature_matrix')\n\n# plot ground truth\nfor i in range(3):\n plt.scatter(feats_test[2,test_labels==i],feats_test[3,test_labels==i],color=c[i],marker='x',s=100)\n\n# plot predicted labels\nfor i in range(output.size):\n plt.scatter(feats_test[2,i],feats_test[3,i],color=c[int32(output[i])],marker='o',s=30*output_certainty[i])\n \nplt.show()", "From the evaluation of results, we infer that, with the help of a C4.5 trained decision tree, we can predict (with high accuracy) the type of Iris plant given its petal and sepal widths and lengths.\nClassification and Regression Trees (CART)\nThe CART algorithm is a popular decision tree learning algorithm introduced by Breiman et al. Unlike ID3 and C4.5, the learnt decision tree in this case can be used for both multiclass classification and regression depending on the type of dependent variable. The tree growing process comprises of recursive binary splitting of nodes. To find the best split at each node, all possible splits of all available predictive attributes are considered. The best split is the one that maximises some splitting criterion. For classification tasks, ie. when the dependent attribute is categorical, the Gini index is used as the splitting criterion. For regression tasks, ie. when the dependent variable is continuous, the least squares deviation is used. Let us learn about Shogun's CART implementation by working on two toy problems, one on classification and the other on regression.\nClassification example using toy data\nLet us consider the same dataset as that in the C4.5 toy example. 
We re-create the dataset and plot it first.", "train_feats,train_labels,test_feats=create_toy_classification_dataset(20,True)", "Next, we supply necessary parameters to the CART algorithm and use it train our decision tree.", "from shogun import PT_MULTICLASS, CARTree\nfrom numpy import array\n\ndef train_carttree(feat_types,problem_type,num_folds,use_cv_pruning,labels,feats):\n # create CART tree object\n c = CARTree(feat_types,problem_type,num_folds,use_cv_pruning)\n # set training labels\n c.set_labels(labels)\n # train using training features\n c.train(feats)\n \n return c\n\n# form feature types True for nominal (attribute X), False for ordinal/continuous (attribute Y)\nft = array([True, False])\n\n# get back trained tree\ncart = train_carttree(ft, PT_MULTICLASS, 5, True, train_labels, train_feats)", "In the above code snippet, we see four parameters being supplied to the CART tree object. feat_types supplies knowledge of attribute types of training data to the CART algorithm and problem_type specifies whether it is a multiclass classification problem (PT_MULTICLASS) or a regression problem (PT_REGRESSION). The boolean parameter use_cv_pruning switches on cross-validation pruning of the trained tree and num_folds specifies the number of folds of cross-validation to be applied while pruning. At this point, let us divert ourselves briefly towards undertanding what kind of pruning strategy is employed by Shogun's CART implementation. The CART algorithm uses the cost-complexity pruning strategy. Cost-Complexity pruning yields a list of subtrees of varying depths using complexity normalized resubstitution error, $R_\\alpha(T)$. Resubstitution error, R(T), measures how well a decision tree fits the training data. But, this measure favours larger trees over smaller ones. Hence the complexity normalized resubstitution error metric is used which adds penalty for increased complexity and in-turn counters overfitting.\n$R_\\alpha(T)=R(T)+\\alpha \\times (numleaves)$\nThe best subtree among the list of subtrees can be chosen using cross validation or using the best-fit metric in the validation dataset. 
Setting use_cv_pruning in the above code snippet basically tells the CART object to use cross-validation to choose the best among the subtrees generated by cost-complexity pruning.\nLet us now get back on track and use the trained tree to classify our test data.", "from numpy import int32\n\n# get output labels\noutput_labels = cart.apply_multiclass(test_feats)\n\nplot_toy_classification_results(train_feats,train_labels,test_feats,output_labels)", "Regression example using toy data\nIn this example, we form the training dataset by sampling points from a sinusoidal curve and see how well a decision tree, trained using these samples, re-creates the actual sinusoid.", "from shogun import RegressionLabels, features\nfrom numpy import random, sin, linspace\nimport matplotlib.pyplot as plt\n% matplotlib inline\n\ndef create_toy_regression_dataset(nsamples,noise_var):\n # randomly choose positions in X axis between 0 to 16\n samples_x = random.rand(1,nsamples)*16\n\n # find out y (=sin(x)) values for the sampled x positions and add noise to it\n samples_y = sin(samples_x)+(random.rand(1,nsamples)-0.5)*noise_var\n\n # plot the samples\n plt.scatter(samples_x,samples_y,color='b',marker='x')\n \n # create training features\n train_feats = features(samples_x)\n # training labels\n train_labels = RegressionLabels(samples_y[0,:])\n\n return (train_feats,train_labels)\n\n# plot the reference sinusoid\ndef plot_ref_sinusoid():\n plot_x = linspace(-2,18,100)\n plt.plot(plot_x,sin(plot_x),color='y',linewidth=1.5)\n plt.xlabel('Feature values')\n plt.ylabel('Labels')\n plt.xlim([-3,19])\n plt.ylim([-1.5,1.5])\n\n# number of samples is 300, noise variance is 0.5\ntrain_feats,train_labels = create_toy_regression_dataset(300,0.5)\n\nplot_ref_sinusoid()\nplt.show()", "Next, we train our CART-tree.", "from shogun import PT_REGRESSION\nfrom numpy import array\n\n# feature type - continuous\nfeat_type = array([False])\n\n# get back trained tree\ncart = train_carttree(feat_type, PT_REGRESSION, 5, True, train_labels, train_feats)", "Now let us use the trained decision tree to regress over the entire range of the previously depicted sinusoid.", "def plot_predicted_sinusoid(cart):\n # regression range - 0 to 16\n x_test = array([linspace(0,16,100)])\n\n # form Shogun features\n test_feats = features(x_test)\n\n # apply regression using our previously trained CART-tree\n regression_output = cart.apply_regression(test_feats).get_labels()\n\n # plot the result\n plt.plot(x_test[0,:],regression_output,linewidth=2.0)\n\n # plot reference sinusoid\n plot_ref_sinusoid()\n\n plt.show()\n \nplot_predicted_sinusoid(cart)", "As we can see from the above plot, CART-induced decision tree follows the reference sinusoid quite beautifully!\nClassification example using real dataset\nIn this section, we will apply the CART algorithm on the Iris dataset. Remember that the Iris dataset provides us with just a training dataset and no separate test dataset. In case of the C4.5 example discussed earlier, we ourselves divided the entire training dataset into training subset and test subset. In this section, we will employ a different strategy i.e. cross validation. In cross-validation, we divide the training dataset into n subsets where n is a user controlled parameter. We perform n iterations of training and testing in which, at each iteration, we choose one of the n subsets as our test dataset and the remaining n-1 subsets as our training dataset. 
The performance of the model is usually taken as the average of the performances in various iterations. Shogun's cross validation class makes it really easy to apply cross-validation to any model of our choice. Let us realize this by applying cross-validation to CART-tree trained over Iris dataset. We start by reading the data.", "import csv\nfrom numpy import array\nimport matplotlib.pylab as plt \n% matplotlib inline\n\n# dictionary to encode class names to class labels\nto_label = {'Iris-setosa' : 0.0, 'Iris-versicolor' : 1.0, 'Iris-virginica' : 2.0}\n\n# read csv file and separate out labels and features\nlab = []\nfeat = []\nwith open( os.path.join(SHOGUN_DATA_DIR, 'uci/iris/iris.data')) as csvfile:\n csvread = csv.reader(csvfile,delimiter=',')\n for row in csvread:\n feat.append([float(i) for i in row[0:4]])\n lab.append(to_label[row[4]])\n\nlab = array(lab)\nfeat = array(feat).T\n\n# plot the dataset using two highly correlated attributes\nc = ['r', 'g', 'b']\nfor i in range(3):\n plt.scatter(feat[2,lab==i],feat[3,lab==i],color=c[i],marker='x')\n \nplt.show()", "Next, we setup the model which is CART-tree in this case.", "from shogun import CARTree, PT_MULTICLASS\n\n# set attribute types - all continuous\nfeature_types = array([False, False, False, False])\n\n# setup CART-tree with cross validation pruning switched off\ncart = CARTree(feature_types,PT_MULTICLASS,5,False)", "Finally we can use Shogun's cross-validation class to get performance.", "from shogun import features, MulticlassLabels\nfrom shogun import CrossValidation, MulticlassAccuracy, CrossValidationSplitting, CrossValidationResult\n\n# training features\nfeats_train = features(feat)\n# training labels\nlabels_train = MulticlassLabels(lab)\n\n# set evaluation criteria - multiclass accuracy\naccuracy = MulticlassAccuracy()\n\n# set splitting criteria - 10 fold cross-validation\nsplit = CrossValidationSplitting(labels_train,10)\n\n# set cross-validation parameters \ncross_val = CrossValidation(cart,feats_train,labels_train,split,accuracy,False)\n\n# run cross-validation multiple times - to get better estimate of accuracy\ncross_val.put('num_runs', 10)\n\n# get cross validation result\n# CARTree is not x-validatable\n# result = cross_val.evaluate()\n\n# print result\n# print('Mean Accuracy : ' + str(CrossValidationResult.obtain_from_generic(result).get_mean()))", "We get a mean accuracy of about 0.93-0.94. This number essentially means that a CART-tree trained using this dataset is expected to classify Iris flowers, given their required attributes, with an accuracy of 93-94% in a real world scenario. The parameters required by Shogun's cross-validation class should be noted in the above code snippet. The class requires the model, training features, training labels, splitting strategy and evaluation method to be specified. \nRegression using real dataset\nIn this section, we evaluate CART-induced decision tree over the Servo dataset. Using this dataset, we essentially want to train a model which can predict the rise time of a servomechanism given the required parameters which are the two (integer) gain settings and two (nominal) choices of mechanical linkages. 
Let us read the dataset first.", "from numpy import array\n\n# dictionary to convert string features to integer values\nto_int = {'A' : 1, 'B' : 2, 'C' : 3, 'D' : 4, 'E' : 5}\n\n# read csv file and separate out labels and features\nlab = []\nfeat = []\nwith open( os.path.join(SHOGUN_DATA_DIR, 'uci/servo/servo.data')) as csvfile:\n csvread = csv.reader(csvfile,delimiter=',')\n for row in csvread:\n feat.append([to_int[row[0]], to_int[row[1]], float(row[2]), float(row[3])])\n lab.append(float(row[4]))\n\nlab = array(lab)\nfeat = array(feat).T", "The servo dataset is a small training dataset (containing just 167 training vectors) with no separate test dataset, like the Iris dataset. Hence we will apply the same cross-validation strategy we applied in the case of the Iris dataset. However, to make things interesting, let us play around with a yet-untouched parameter of the CART-induced tree, i.e. the maximum allowed tree depth. As the tree depth increases, the tree becomes more complex and hence fits the training data more closely. By setting a maximum allowed tree depth, we restrict the complexity of the trained tree and hence avoid over-fitting. But choosing a low value of the maximum allowed tree depth may lead to early stopping, i.e. under-fitting. Let us explore how we can decide the appropriate value of the max-allowed-tree-depth parameter. Let us create a method which takes the max-allowed-tree-depth parameter as input and returns the corresponding cross-validated error as output.", "from shogun import CARTree, RegressionLabels, PT_REGRESSION, MeanSquaredError\nfrom shogun import CrossValidation, CrossValidationSplitting, CrossValidationResult\n\n# form training features\nfeats_train = features(feat)\n# form training labels\nlabels_train = RegressionLabels(lab)\n\ndef get_cv_error(max_depth):\n # set attribute types - 2 nominal and 2 continuous\n feature_types = array([True, True, False, False])\n # setup CART-tree with cross validation pruning switched off\n cart = CARTree(feature_types,PT_REGRESSION,5,False)\n # set max allowed depth\n cart.set_max_depth(max_depth)\n\n # set evaluation criteria - mean squared error\n accuracy = MeanSquaredError()\n # set splitting criteria - 10 fold cross-validation\n split = CrossValidationSplitting(labels_train,10)\n # set cross-validation parameters \n cross_val = CrossValidation(cart,feats_train,labels_train,split,accuracy,False)\n\n # run cross-validation multiple times\n cross_val.put('num_runs', 10)\n\n # return cross validation result\n return CrossValidationResult.obtain_from_generic(cross_val.evaluate()).get_mean()", "Next, let us supply a range of max_depth values to the above method and plot the returned cross-validated errors.", "import matplotlib.pyplot as plt\n\n# CARTree is not x-validatable\n# cv_errors = [get_cv_error(i) for i in range(1,15)]\n# plt.plot(range(1,15),cv_errors,'bo',range(1,15),cv_errors,'k')\n# plt.xlabel('max_allowed_depth')\n# plt.ylabel('cross-validated error')\n# plt.ylim(0,1.2)\n# plt.show()", "The above plot quite clearly gives us the most appropriate value of the maximum allowed depth. We see that the first minimum occurs at a maximum allowed depth of 6-8. Hence, one of these should be the desired value. It is to be noted that the error metric that we are discussing here is the mean squared error. Thus, from the above plot, we can also claim that, given the required parameters, our CART-flavoured decision tree can predict the rise time within an average error range of $\\pm0.5$
(i.e. the square root of 0.25, which is the approximate minimum cross-validated error). The relative error, i.e. average_error/range_of_labels, comes out to be ~30%.\nChi-squared Automatic Interaction Detection (CHAID)\nCHAID is an algorithm for decision tree learning proposed by Kass (1980). It is similar in functionality to CART in the sense that both can be used for classification as well as regression. But unlike CART, CHAID internally handles only categorical features. The continuous features are first converted into ordinal categorical features for the CHAID algorithm to be able to use them. This conversion is done by binning the feature values. The number of bins (K) has to be supplied by the user. Given K, a predictor is split in such a way that all the bins get the same number (more or less) of distinct predictor values. The maximum feature value in each bin is used as a breakpoint.\nAn important parameter in the CHAID tree growing process is the p-value. The p-value is the metric used for deciding which categories of predictor values to merge during the merging step, as well as for deciding the best attribute during splitting. The p-value is calculated using different hypothesis testing methods depending on the type of the dependent variable (nominal, ordinal or continuous). A more detailed discussion of the CHAID algorithm can be found in the documentation of the CCHAIDTree class in Shogun. Let us move on to a more interesting topic, which is learning to use CHAID through Shogun's Python API. \nClassification example using toy dataset\nLet us re-use the toy classification dataset used in the C4.5 and CART examples to see the API usage of CHAID as well as to qualitatively compare the results of the CHAID algorithm with the other two.", "train_feats,train_labels,test_feats = create_toy_classification_dataset(20,True)", "Now, we set up our CHAID-tree with appropriate parameters and train it on the given data.", "from shogun import PT_MULTICLASS, CHAIDTree\nfrom numpy import array, dtype, int32\n\ndef train_chaidtree(dependent_var_type,feature_types,num_bins,feats,labels):\n # create CHAID tree object\n c = CHAIDTree(dependent_var_type,feature_types,num_bins)\n # set training labels\n c.put('labels', labels)\n # train using training features\n c.train(feats)\n \n return c\n\n# form feature types 0 for nominal (attribute X), 2 for continuous (attribute Y)\nft = array([0, 2],dtype=int32)\n\n# cache training matrix\ntrain_feats_cache=features(train_feats.get_feature_matrix())\n\n# get back trained tree - dependent variable type is nominal (hence 0), number of bins for binning is 10 \nchaid = train_chaidtree(0,ft,10,train_feats,train_labels)\n\nprint('updated_matrix')\nprint(train_feats.get_real_matrix('feature_matrix'))\nprint('')\nprint('original_matrix')\nprint(train_feats_cache.get_real_matrix('feature_matrix'))", "An important point to note in the above code snippet is that CHAID training modifies the training data: the actual continuous feature values are replaced by the discrete ordinal values obtained during the continuous-to-ordinal conversion. Notice the difference between the original feature matrix and the updated matrix. 
The updated matrix contains only 10 distinct values denoting all values of the original matrix for feature dimension at row index 1.\nWith a CHAID-trained decision tree at our disposal, it's time to apply it to colour our test points.", "# get output labels\noutput_labels = chaid.apply_multiclass(test_feats)\n\nplot_toy_classification_results(train_feats_cache,train_labels,test_feats,output_labels)", "Regression example with toy dataset\nIn this section, we re-work the sinusoid curve fitting example (earlier used in CART toy regression).", "train_feats,train_labels = create_toy_regression_dataset(300,0.5)\nplot_ref_sinusoid()\nplt.show()", "As usual, we start by setting up our decision tree and training it.", "from numpy import dtype, int32, array\n\n# feature type - continuous\nfeat_type = array([2],dtype=int32)\n\n# get back trained tree\nchaid = train_chaidtree(2,feat_type, 50, train_feats, train_labels)", "Next, we use the trained decision tree to follow the reference sinusoid.", "plot_predicted_sinusoid(chaid)", "A distinguishing feature about the predicted curve is the presence of steps. These steps are essentially an artifact of continuous to ordinal conversion. If we decrease the number of bins for the conversion the step widths will increase.\nClassification example over real dataset\nIn this section, we will try to estimate the quality of wine based on 13 attributes like alcohol content, malic acid, magnesium content, etc. using the wine dataset. Let us first read the dataset using Shogun's CSV file reader.", "from shogun import CSVFile, features, MulticlassLabels\n\ntrain_feats=features(CSVFile( os.path.join(SHOGUN_DATA_DIR, 'uci/wine/fm_wine.dat')))\ntrain_labels=MulticlassLabels(CSVFile( os.path.join(SHOGUN_DATA_DIR, 'uci/wine/label_wine.dat')))", "Like the case of CART, here we are also interested in finding out the approximate accuracy with which our CHAID tree trained on this dataset will perform in real world. Hence, we will apply the cross validation strategy. But first we specify the parameters of the CHAID tree.", "from shogun import CHAIDTree, MulticlassLabels\n\n# set attribute types - all attributes are continuous(2)\nfeature_types = array([2 for i in range(13)],dtype=int32) \n\n# setup CHAID tree - dependent variable is nominal(0), feature types set, number of bins(20)\nchaid = CHAIDTree(0,feature_types,20)", "Next we set up the cross-validation class and get back the error estimate we want i.e mean classification error.", "# set up cross validation class\n\nfrom shogun import CrossValidation, CrossValidationSplitting, CrossValidationResult, MulticlassAccuracy\n\n# set evaluation criteria - multiclass accuracy\naccuracy = MulticlassAccuracy()\n \n# set splitting criteria - 10 fold cross-validation\nsplit = CrossValidationSplitting(train_labels,10)\n\n# set cross-validation parameters \ncross_val = CrossValidation(chaid,train_feats,train_labels,split,accuracy,False)\n\n# run cross-validation multiple times\ncross_val.put('num_runs', 10)\n\n# CHAIDTree is not x-validatable\n# print('Mean classification accuracy : '+str(CrossValidationResult.obtain_from_generic(cross_val.evaluate()).get_mean()*100)+' %')", "Regression example using real dataset\nIn this section, we try to predict the value of houses in Boston using 13 attributes, like per capita crime rate in neighborhood, number of rooms, nitrous oxide concentration in air, proportion of non-retail business in the area etc. 
Out of the 13 attributes 12 are continuous and 1 (the Charles river dummy variable) is binary nominal. Let us load the dataset as our first step. For this, we can directly use Shogun's CSV file reader class.", "from shogun import CSVFile, features, RegressionLabels\nfrom numpy import ptp\n\ntrain_feats=features(CSVFile( os.path.join(SHOGUN_DATA_DIR, 'uci/housing/fm_housing.dat')))\ntrain_labels=RegressionLabels(CSVFile( os.path.join(SHOGUN_DATA_DIR, 'uci/housing/housing_label.dat')))\n\n# print range of regression labels - this is useful for calculating relative deviation later \nprint('labels range : '+str(ptp(train_labels.get_labels())))", "Next, we set up the parameters for the CHAID tree as well as the cross-validation class.", "from shogun import CHAIDTree, MeanSquaredError\nfrom shogun import CrossValidation, CrossValidationSplitting, CrossValidationResult\nfrom numpy import array, dtype, int32\n\ndef get_cv_error(max_depth):\n # set feature types - all continuous(2) except 4th column which is nominal(0) \n feature_types = array([2]*13,dtype=int32)\n feature_types[3]=0\n feature_types[8]=1\n feature_types[9]=1 \n\n # setup CHAID-tree\n chaid = CHAIDTree(2,feature_types,10)\n # set max allowed depth\n chaid.set_max_tree_depth(max_depth)\n\n # set evaluation criteria - mean squared error\n accuracy = MeanSquaredError()\n # set splitting criteria - 5 fold cross-validation\n split = CrossValidationSplitting(train_labels,5)\n # set cross-validation parameters \n cross_val = CrossValidation(chaid,train_feats,train_labels,split,accuracy,False)\n\n # run cross-validation multiple times\n cross_val.set_num_runs(3)\n\n # return cross validation result\n return CrossValidationResult.obtain_from_generic(cross_val.evaluate()).get_mean()\n\nimport matplotlib.pyplot as plt\n% matplotlib inline\n# CHAIDTree is not x-validatable\n# cv_errors = [get_cv_error(i) for i in range(1,10)]\n# plt.plot(range(1,10),cv_errors,'bo',range(1,10),cv_errors,'k')\n# plt.xlabel('max_allowed_depth')\n# plt.ylabel('cross-validated error')\n# plt.show()", "From the above figure, we see that tree depth of 2-4 is most optimal and gives a mean squared error of ~25 which is a deviation of ~$\\pm5$. We already calculated the range of labels to be 45.0, hence the relative deviation comes out to be 11.11%\nReferences\n[1] Bache, K. & Lichman, M. (2013). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science\n[2] Quinlan, J. R. 1986. Induction of Decision Trees. Mach. Learn. 1: 1 (Mar. 1986), 81-106" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/bcc/cmip6/models/bcc-csm2-mr/ocean.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Ocean\nMIP Era: CMIP6\nInstitute: BCC\nSource ID: BCC-CSM2-MR\nTopic: Ocean\nSub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing. \nProperties: 133 (101 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:39\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'bcc', 'bcc-csm2-mr', 'ocean')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Seawater Properties\n3. Key Properties --&gt; Bathymetry\n4. Key Properties --&gt; Nonoceanic Waters\n5. Key Properties --&gt; Software Properties\n6. Key Properties --&gt; Resolution\n7. Key Properties --&gt; Tuning Applied\n8. Key Properties --&gt; Conservation\n9. Grid\n10. Grid --&gt; Discretisation --&gt; Vertical\n11. Grid --&gt; Discretisation --&gt; Horizontal\n12. Timestepping Framework\n13. Timestepping Framework --&gt; Tracers\n14. Timestepping Framework --&gt; Baroclinic Dynamics\n15. Timestepping Framework --&gt; Barotropic\n16. Timestepping Framework --&gt; Vertical Physics\n17. Advection\n18. Advection --&gt; Momentum\n19. Advection --&gt; Lateral Tracers\n20. Advection --&gt; Vertical Tracers\n21. Lateral Physics\n22. Lateral Physics --&gt; Momentum --&gt; Operator\n23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff\n24. Lateral Physics --&gt; Tracers\n25. Lateral Physics --&gt; Tracers --&gt; Operator\n26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff\n27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity\n28. Vertical Physics\n29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details\n30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers\n31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum\n32. Vertical Physics --&gt; Interior Mixing --&gt; Details\n33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers\n34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum\n35. Uplow Boundaries --&gt; Free Surface\n36. Uplow Boundaries --&gt; Bottom Boundary Layer\n37. Boundary Forcing\n38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction\n39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction\n40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration\n41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing \n1. Key Properties\nOcean key properties\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of ocean model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. 
Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of ocean model code (NEMO 3.6, MOM 5.0,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Model Family\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of ocean model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_family') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OGCM\" \n# \"slab ocean\" \n# \"mixed layer ocean\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nBasic approximations made in the ocean.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.basic_approximations') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Primitive equations\" \n# \"Non-hydrostatic\" \n# \"Boussinesq\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.5. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of prognostic variables in the ocean component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Potential temperature\" \n# \"Conservative temperature\" \n# \"Salinity\" \n# \"U-velocity\" \n# \"V-velocity\" \n# \"W-velocity\" \n# \"SSH\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Seawater Properties\nPhysical properties of seawater in ocean\n2.1. Eos Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of EOS for sea water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear\" \n# \"Wright, 1997\" \n# \"Mc Dougall et al.\" \n# \"Jackett et al. 2006\" \n# \"TEOS 2010\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2.2. Eos Functional Temp\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTemperature used in EOS for sea water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Potential temperature\" \n# \"Conservative temperature\" \n# TODO - please enter value(s)\n", "2.3. Eos Functional Salt\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSalinity used in EOS for sea water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Practical salinity Sp\" \n# \"Absolute salinity Sa\" \n# TODO - please enter value(s)\n", "2.4. Eos Functional Depth\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDepth or pressure used in EOS for sea water ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pressure (dbars)\" \n# \"Depth (meters)\" \n# TODO - please enter value(s)\n", "2.5. Ocean Freezing Point\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEquation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TEOS 2010\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2.6. Ocean Specific Heat\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecific heat in ocean (cpocean) in J/(kg K)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "2.7. Ocean Reference Density\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBoussinesq reference density (rhozero) in kg / m3", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Bathymetry\nProperties of bathymetry in ocean\n3.1. Reference Dates\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nReference date of bathymetry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Present day\" \n# \"21000 years BP\" \n# \"6000 years BP\" \n# \"LGM\" \n# \"Pliocene\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the bathymetry fixed in time in the ocean ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.3. Ocean Smoothing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe any smoothing or hand editing of bathymetry in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.4. Source\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe source of bathymetry in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.source') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Nonoceanic Waters\nNon oceanic waters treatement in ocean\n4.1. Isolated Seas\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how isolated seas is performed", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. River Mouth\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how river mouth mixing or estuaries specific treatment is performed", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Software Properties\nSoftware properties of ocean code\n5.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Resolution\nResolution in the ocean grid\n6.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Canonical Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.3. Range Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRange of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.4. Number Of Horizontal Gridpoints\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "6.5. Number Of Vertical Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of vertical levels resolved on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "6.6. Is Adaptive Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDefault is False. Set true if grid resolution changes during execution.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.7. Thickness Level 1\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThickness of first surface ocean level (in meters)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "7. Key Properties --&gt; Tuning Applied\nTuning methodology for ocean component\n7.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.4. Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. 
Key Properties --&gt; Conservation\nConservation in the ocean component\n8.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBrief description of conservation methodology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProperties conserved in the ocean by the numerical schemes", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.scheme') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Energy\" \n# \"Enstrophy\" \n# \"Salt\" \n# \"Volume of ocean\" \n# \"Momentum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.3. Consistency Properties\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAny additional consistency properties (energy conversion, pressure gradient discretisation, ...)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.4. Corrected Conserved Prognostic Variables\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSet of variables which are conserved by more than the numerical scheme alone.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.5. Was Flux Correction Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDoes conservation involve flux correction ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "9. Grid\nOcean grid\n9.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of grid in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Grid --&gt; Discretisation --&gt; Vertical\nProperties of vertical discretisation in ocean\n10.1. Coordinates\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of vertical coordinates in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Z-coordinate\" \n# \"Z*-coordinate\" \n# \"S-coordinate\" \n# \"Isopycnic - sigma 0\" \n# \"Isopycnic - sigma 2\" \n# \"Isopycnic - sigma 4\" \n# \"Isopycnic - other\" \n# \"Hybrid / Z+S\" \n# \"Hybrid / Z+isopycnic\" \n# \"Hybrid / other\" \n# \"Pressure referenced (P)\" \n# \"P*\" \n# \"Z**\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.2. 
Partial Steps\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nUsing partial steps with Z or Z vertical coordinate in ocean ?*", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "11. Grid --&gt; Discretisation --&gt; Horizontal\nType of horizontal discretisation scheme in ocean\n11.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal grid type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Lat-lon\" \n# \"Rotated north pole\" \n# \"Two north poles (ORCA-style)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.2. Staggering\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nHorizontal grid staggering type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Arakawa B-grid\" \n# \"Arakawa C-grid\" \n# \"Arakawa E-grid\" \n# \"N/a\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.3. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Finite difference\" \n# \"Finite volumes\" \n# \"Finite elements\" \n# \"Unstructured grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Timestepping Framework\nOcean Timestepping Framework\n12.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of time stepping in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.2. Diurnal Cycle\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDiurnal cycle type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Via coupling\" \n# \"Specific treatment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13. Timestepping Framework --&gt; Tracers\nProperties of tracers time stepping in ocean\n13.1. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTracers time stepping scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Leap-frog + Asselin filter\" \n# \"Leap-frog + Periodic Euler\" \n# \"Predictor-corrector\" \n# \"Runge-Kutta 2\" \n# \"AM3-LF\" \n# \"Forward-backward\" \n# \"Forward operator\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. 
Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTracers time step (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14. Timestepping Framework --&gt; Baroclinic Dynamics\nBaroclinic dynamics in ocean\n14.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBaroclinic dynamics type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Preconditioned conjugate gradient\" \n# \"Sub cyling\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBaroclinic dynamics scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Leap-frog + Asselin filter\" \n# \"Leap-frog + Periodic Euler\" \n# \"Predictor-corrector\" \n# \"Runge-Kutta 2\" \n# \"AM3-LF\" \n# \"Forward-backward\" \n# \"Forward operator\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.3. Time Step\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nBaroclinic time step (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15. Timestepping Framework --&gt; Barotropic\nBarotropic time stepping in ocean\n15.1. Splitting\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime splitting method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"split explicit\" \n# \"implicit\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.2. Time Step\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nBarotropic time step (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "16. Timestepping Framework --&gt; Vertical Physics\nVertical physics time stepping in ocean\n16.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDetails of vertical time stepping in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17. Advection\nOcean advection\n17.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of advection in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. 
Advection --&gt; Momentum\nProperties of lateral momemtum advection scheme in ocean\n18.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of lateral momemtum advection scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Flux form\" \n# \"Vector form\" \n# TODO - please enter value(s)\n", "18.2. Scheme Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of ocean momemtum advection scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18.3. ALE\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nUsing ALE for vertical advection ? (if vertical coordinates are sigma)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.ALE') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "19. Advection --&gt; Lateral Tracers\nProperties of lateral tracer advection scheme in ocean\n19.1. Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrder of lateral tracer advection scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "19.2. Flux Limiter\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMonotonic flux limiter for lateral tracer advection scheme in ocean ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "19.3. Effective Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEffective order of limited lateral tracer advection scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "19.4. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19.5. Passive Tracers\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nPassive tracers advected", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ideal age\" \n# \"CFC 11\" \n# \"CFC 12\" \n# \"SF6\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19.6. Passive Tracers Advection\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIs advection of passive tracers different than active ? 
if so, describe.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20. Advection --&gt; Vertical Tracers\nProperties of vertical tracer advection scheme in ocean\n20.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.vertical_tracers.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20.2. Flux Limiter\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMonotonic flux limiter for vertical tracer advection scheme in ocean ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "21. Lateral Physics\nOcean lateral physics\n21.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of lateral physics in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of transient eddy representation in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Eddy active\" \n# \"Eddy admitting\" \n# TODO - please enter value(s)\n", "22. Lateral Physics --&gt; Momentum --&gt; Operator\nProperties of lateral physics operator for momentum in ocean\n22.1. Direction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDirection of lateral physics momemtum scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Horizontal\" \n# \"Isopycnal\" \n# \"Isoneutral\" \n# \"Geopotential\" \n# \"Iso-level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.2. Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrder of lateral physics momemtum scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Harmonic\" \n# \"Bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.3. Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDiscretisation of lateral physics momemtum scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Second order\" \n# \"Higher order\" \n# \"Flux limiter\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff\nProperties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean\n23.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLateral physics momemtum eddy viscosity coeff type in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Space varying\" \n# \"Time + space varying (Smagorinsky)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. Constant Coefficient\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant, value of eddy viscosity coeff in lateral physics momemtum scheme (in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "23.3. Variable Coefficient\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf space-varying, describe variations of eddy viscosity coeff in lateral physics momemtum scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23.4. Coeff Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe background eddy viscosity coeff in lateral physics momemtum scheme (give values in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23.5. Coeff Backscatter\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there backscatter in eddy viscosity coeff in lateral physics momemtum scheme ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "24. Lateral Physics --&gt; Tracers\nProperties of lateral physics for tracers in ocean\n24.1. Mesoscale Closure\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there a mesoscale closure in the lateral physics tracers scheme ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "24.2. Submesoscale Mixing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "25. Lateral Physics --&gt; Tracers --&gt; Operator\nProperties of lateral physics operator for tracers in ocean\n25.1. Direction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDirection of lateral physics tracers scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Horizontal\" \n# \"Isopycnal\" \n# \"Isoneutral\" \n# \"Geopotential\" \n# \"Iso-level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.2. Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrder of lateral physics tracers scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Harmonic\" \n# \"Bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.3. Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDiscretisation of lateral physics tracers scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Second order\" \n# \"Higher order\" \n# \"Flux limiter\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff\nProperties of eddy diffusity coeff in lateral physics tracers scheme in the ocean\n26.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLateral physics tracers eddy diffusity coeff type in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Space varying\" \n# \"Time + space varying (Smagorinsky)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26.2. Constant Coefficient\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "26.3. Variable Coefficient\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26.4. 
Coeff Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "26.5. Coeff Backscatter\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity\nProperties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean\n27.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of EIV in lateral physics tracers in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"GM\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.2. Constant Val\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf EIV scheme for tracers is constant, specify coefficient value (M2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "27.3. Flux Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of EIV flux (advective or skew)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27.4. Added Diffusivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of EIV added diffusivity (constant, flow dependent or none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28. Vertical Physics\nOcean Vertical Physics\n28.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of vertical physics in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details\nProperties of vertical physics in ocean\n29.1. Langmuir Cells Mixing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there Langmuir cells mixing in upper ocean ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers\n*Properties of boundary layer (BL) mixing on tracers in the ocean *\n30.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of boundary layer mixing for tracers in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure - TKE\" \n# \"Turbulent closure - KPP\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Turbulent closure - Bulk Mixed Layer\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.2. Closure Order\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.3. Constant\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant BL mixing of tracers, specific coefficient (m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.4. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground BL mixing of tracers coefficient, (schema and value in m2/s - may by none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum\n*Properties of boundary layer (BL) mixing on momentum in the ocean *\n31.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of boundary layer mixing for momentum in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure - TKE\" \n# \"Turbulent closure - KPP\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Turbulent closure - Bulk Mixed Layer\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.2. Closure Order\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "31.3. Constant\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant BL mixing of momentum, specific coefficient (m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "31.4. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground BL mixing of momentum coefficient, (schema and value in m2/s - may by none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32. Vertical Physics --&gt; Interior Mixing --&gt; Details\n*Properties of interior mixing in the ocean *\n32.1. Convection Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of vertical convection in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Non-penetrative convective adjustment\" \n# \"Enhanced vertical diffusion\" \n# \"Included in turbulence closure\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.2. Tide Induced Mixing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how tide induced mixing is modelled (barotropic, baroclinic, none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.3. Double Diffusion\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there double diffusion", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "32.4. Shear Mixing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there interior shear mixing", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers\n*Properties of interior mixing on tracers in the ocean *\n33.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of interior mixing for tracers in ocean", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure / TKE\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33.2. Constant\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant interior mixing of tracers, specific coefficient (m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "33.3. Profile\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "33.4. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground interior mixing of tracers coefficient, (schema and value in m2/s - may by none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum\n*Properties of interior mixing on momentum in the ocean *\n34.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of interior mixing for momentum in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure / TKE\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "34.2. Constant\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant interior mixing of momentum, specific coefficient (m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "34.3. Profile\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34.4. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground interior mixing of momentum coefficient, (schema and value in m2/s - may by none)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "35. Uplow Boundaries --&gt; Free Surface\nProperties of free surface in ocean\n35.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of free surface in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "35.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nFree surface scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear implicit\" \n# \"Linear filtered\" \n# \"Linear semi-explicit\" \n# \"Non-linear implicit\" \n# \"Non-linear filtered\" \n# \"Non-linear semi-explicit\" \n# \"Fully explicit\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "35.3. Embeded Seaice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the sea-ice embeded in the ocean model (instead of levitating) ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36. Uplow Boundaries --&gt; Bottom Boundary Layer\nProperties of bottom boundary layer in ocean\n36.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of bottom boundary layer in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "36.2. Type Of Bbl\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of bottom boundary layer in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diffusive\" \n# \"Acvective\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "36.3. Lateral Mixing Coef\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "36.4. Sill Overflow\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe any specific treatment of sill overflows", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37. Boundary Forcing\nOcean boundary forcing\n37.1. 
Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of boundary forcing in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.2. Surface Pressure\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.3. Momentum Flux Correction\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.4. Tracers Flux Correction\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.5. Wave Effects\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how wave effects are modelled at ocean surface.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.wave_effects') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.6. River Runoff Budget\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how river runoff from land surface is routed to ocean and any global adjustment done.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.7. Geothermal Heating\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how geothermal heating is present at ocean bottom.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction\nProperties of momentum bottom friction in ocean\n38.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of momentum bottom friction in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear\" \n# \"Non-linear\" \n# \"Non-linear (drag function of speed of tides)\" \n# \"Constant drag coefficient\" \n# \"None\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "39. 
Boundary Forcing --&gt; Momentum --&gt; Lateral Friction\nProperties of momentum lateral friction in ocean\n39.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of momentum lateral friction in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Free-slip\" \n# \"No-slip\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration\nProperties of sunlight penetration scheme in ocean\n40.1. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of sunlight penetration scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"1 extinction depth\" \n# \"2 extinction depth\" \n# \"3 extinction depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "40.2. Ocean Colour\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the ocean sunlight penetration scheme ocean colour dependent ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "40.3. Extinction Depth\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe and list extinctions depths for sunlight penetration scheme (if applicable).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing\nProperties of surface fresh water forcing in ocean\n41.1. From Atmopshere\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of surface fresh water forcing from atmos in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Freshwater flux\" \n# \"Virtual salt flux\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "41.2. From Sea Ice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of surface fresh water forcing from sea-ice in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Freshwater flux\" \n# \"Virtual salt flux\" \n# \"Real salt flux\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "41.3. Forced Mode Restoring\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of surface salinity restoring in forced mode (OMIP)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
cgeoffroy/son-analyze
son-scikit/src/son_scikit/resources/tutorials/Basic_anomalies_detection.ipynb
apache-2.0
[ "son-analyze tutorial notebook\n\nSetup\nThe son_analyze and son_scikit packages contains the function related to SONATA.\n* son_analyze handles gathering data from the SONATA's SP or emulator\n* son_scikit manipulates the metrics to work with analytics libraries", "import son_analyze\nimport son_scikit\nprint('Welcome to son-analyze v{} and son-scikit v{}.'.format(son_analyze.__version__, son_scikit.__version__))", "You can write and use Python files and call their functions inside your notebooks to keep them simples.", "import helpers\nprint('You can use and tweak the python code in the helpers.py file (example: \"{}\")'.format(helpers.foobar()))", "Fetching metrics\nTo keep this example self-contained, the data is read from a static file. The metrics are stored held in a all_dataframes variable.", "import reprlib\nimport json\nimport arrow\nimport requests\nfrom son_analyze.core.prometheus import PrometheusData\nfrom son_scikit.hl_prometheus import build_sonata_df_by_id\n\nall_dataframes = None\nwith open('empty_vnf1_sonemu_rx_count_packets_180.json') as raw:\n x = PrometheusData(raw.read())\n all_dataframes = build_sonata_df_by_id(x)", "Each VNF has its own dataframe where metrics have a corresponding column. Here, the empty_vnf1 VNF has a sonemu_rx_count_packets column for the monitored received packets on the network.", "print('The dictonnary of all dataframes by VNF names: {}'.format(reprlib.repr(all_dataframes)))\nprint(all_dataframes['empty_vnf1'].head())", "Basic plotting\nFrom there we use the df and ddf variables as shortcuts, before plotting them.\n* df is the main dataframe we are going to work with\n* ddf contains the discrete difference of df", "import matplotlib\nimport matplotlib.pyplot as plt\nmatplotlib.style.use('ggplot')\n%matplotlib inline\nmatplotlib.rcParams['figure.figsize'] = (20.0, 5.0)\n\ndf = all_dataframes['empty_vnf1']\nddf = df.diff().dropna()\n\ndf.plot();\nddf.plot();", "Injecting errors in the metrics\nFor this tutorial, we inject two errors in the metrics. This is done inside the error_ddf dataframe.", "error_ddf = ddf.copy()\nerror_ddf.sonemu_rx_count_packets[1111] *= 2.6\nerror_ddf.sonemu_rx_count_packets[3333] *= 2.7", "Detecting anomalies\nWe use the pyculiarity package to detect anomalies in a dataframe using the detect_ts function.", "from pyculiarity import detect_ts\nimport pandas as pd\nimport time\n\n\ndef f(x):\n dt = x.to_datetime()\n return time.mktime(dt.timetuple())\n\ntarget = error_ddf\nu = pd.DataFrame({'one': list(target.index.map(f)), 'two': target.sonemu_rx_count_packets})\nresults = detect_ts(u, max_anoms=0.004, alpha=0.01, direction='both') #, threshold='med_max')", "The resulting plot clearly shows the 2 anomalies.", "# make a nice plot\nmatplotlib.rcParams['figure.figsize'] = (20.0, 10.0)\nf, ax = plt.subplots(2, 1, sharex=True)\nax[0].plot(target.index, target.sonemu_rx_count_packets, 'b')\nax[0].plot(results['anoms'].index, results['anoms']['anoms'], 'ro')\nax[0].set_title('Detected Anomalies')\nax[1].set_xlabel('Time Stamp')\nax[0].set_ylabel('Count')\nax[1].plot(results['anoms'].index, results['anoms']['anoms'], 'b')\nax[1].set_ylabel('Anomaly Magnitude')\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
pvalienteverde/ElCuadernillo
ElCuadernillo/20160725_SistemasDeRecomendacionContentBased/Content-Based paso a paso.ipynb
mit
[ "from IPython.core.display import HTML\nHTML(\"<style>.container { width:100% !important; }</style>\")\n\n%load_ext autoreload\n%autoreload 2\n\nimport sys\nsys.path.append('./Scripts') ", "Recomendacion de productos, Content-Based\nA continuación veremos paso a paso como se puede realizar un sistema de recomendacion basado en el contenido en python. \nhttp://www.p.valienteverde.com/sistemas-de-recomendacion-basados-en-el-contenido-content-based/\nBasado en el Contenido (ContendBased)\nPor medio de la descripción del producto. Relaciones de tags\n\nVectorizacion del contenido, es decir, generar los tags\nPonderar los tags\nGenerar el motor de buscada de los productos similares\n\n<img src='imagenes/contect_based.png'>", "import pandas as pd\nimport numpy as np\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom sklearn.feature_extraction.text import TfidfTransformer\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.metrics.pairwise import cosine_similarity\nfrom sklearn.neighbors import NearestNeighbors\n\nimport ContendBased as CB\n\ndatos = pd.read_csv(\"./BD/people_wiki.tar.gz\", compression='gzip')\ndatos.head(3)\n\nentrada_elton_john=datos.query('name == \"Elton John\"')\nentrada_elton_john", "Explorar los tags y su peso\n\nPrimero vamos a vertorizar (extración de los tags) la descripción del producto --> CountVectorizer()\nCalcular la importancia de cada tag en la descripcion del producto --> TfidfTransformer()\n\nVectorizacion\nCalculamos los tags", "vectorizacion = CountVectorizer()\nbag_of_words= vectorizacion.fit_transform(datos.text)\n\nbag_words_elton_john = vectorizacion.transform(entrada_elton_john.text)\nCB.mostrar_pesos_tags(bag_words_elton_john,vectorizacion)", "Normalizacion", "tf_transformer = TfidfTransformer(use_idf=False).fit(bag_of_words)\n\nmatrix_elton_john_tf=tf_transformer.transform(bag_of_words[entrada_elton_john.index.values])\npesos_tf=CB.mostrar_pesos_tags(matrix_elton_john_tf,vectorizacion,descripcion='tf')\npesos_tf", "Fuck, los tags mas importante no describen los articulos\n- SOLUCION: Normalizar y poderar teniendo en cuenta la frecuencia de cada tag en los demas documenos", "tfidf_transformer = TfidfTransformer(use_idf=True).fit(bag_of_words)\nmatrix_elton_john_tf_idx=tfidf_transformer.transform(bag_of_words[entrada_elton_john.index.values])\npesos_tf_idx=CB.mostrar_pesos_tags(matrix_elton_john_tf_idx,vectorizacion,descripcion='tf-idx')\npesos_tf_idx", "Vamos por el buen camino, vamos a ayudar al algoritmo de dos formas:\n1. Eliminando las palabras mas comunes --> stopwords\n2. 
Eliminado los numeros: Fuera de su contexto pierden todo su significado", "import nltk\nnltk.download('stopwords')\n\nfrom nltk.corpus import stopwords\ncachedStopWords = stopwords.words(\"english\")\ncachedStopWords[:10]\n\nvectorizacion_stop_words = CountVectorizer(stop_words = cachedStopWords, token_pattern='(?u)\\\\b[a-zA-Z]\\\\w\\\\w+\\\\b')\nbag_of_words_stop_words = vectorizacion_stop_words.fit_transform(datos.text)\ntfidf_transformer_stop_words = TfidfTransformer(use_idf=True).fit(bag_of_words_stop_words)\n\nmatrix_elton_john_tf_idx_stop_words=tfidf_transformer_stop_words.transform(bag_of_words_stop_words[entrada_elton_john.index.values])\npesos_tf_idx_filtrado=CB.mostrar_pesos_tags(matrix_elton_john_tf_idx_stop_words,vectorizacion_stop_words,descripcion='tf_idx_filtrado')\npesos_tf_idx_filtrado", "Comparativa de las mejoras realizadas\n\ntf: Poderamos y normalizados los tags de cada articulo\ntf-idx: Poderamos y normalizados los tags de cada articulo teniendo en cuenta la frecuencia de cada tag en los demas articulos\ntf_idx_filtrado: Eliminamos las palabras mas comunes y los numeros a tf-idx", "pd.concat([pesos_tf.reset_index(level=0),pesos_tf_idx.reset_index(level=0),pesos_tf_idx_filtrado.reset_index(level=0)],axis=1)", "Forma rapida\nTodos los pasos anteriores se puede simplificar por medio de un metodo de sklearn que los agrutina", "tfidf_vectorizer = TfidfVectorizer( stop_words=cachedStopWords, token_pattern='(?u)\\\\b[a-zA-Z]\\\\w\\\\w+\\\\b')\ntfidf_vectorizer.fit(datos.text)\n\nCB.mostrar_pesos_tags(tfidf_vectorizer.transform(entrada_elton_john.text),tfidf_vectorizer)", "Prediccion\nUna vez que ya hemos extraidos los tags de los productos, buscamos por similitud los mas parecidos", "vecinos = NearestNeighbors(n_neighbors=5,metric='cosine',algorithm='brute')\ndatos_por_tags = tfidf_vectorizer.transform(datos.text)\n\nvecinos.fit(datos_por_tags)", "Con nuestro motor de recomendacion creado, podemos utilizarlo como un buscador de articulos. Como se verá, de los 5 actores propuestos, 4 de ellos al menos ha ganado un oscar !!!!", "buscador = tfidf_vectorizer.transform(['Award Actor Oscar'])\ndistancia,indices = vecinos.kneighbors(buscador)\ndatos.iloc[indices[0],:]", "Veamos que famosos nos relaciona con Al Pacino...", "al_pacino_vectorizado = tfidf_vectorizer.transform(datos.query('name == \"Al Pacino\"').text)\ndistancia,indices = vecinos.kneighbors(al_pacino_vectorizado)\ndatos.iloc[indices[0],:]" ]
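As a side note to the stop-word and number filtering described above, the sketch below (added here, not part of the original notebook) shows what the `token_pattern` passed to the vectorizers actually keeps: tokens that start with a letter and are at least three characters long, so bare numbers and very short words never become tags.

```python
import re

# The same pattern that is passed to CountVectorizer / TfidfVectorizer above.
token_pattern = r'(?u)\b[a-zA-Z]\w\w+\b'

sample = 'born in 1947 , he sold 250 million records worldwide'
print(re.findall(token_pattern, sample))
# ['born', 'sold', 'million', 'records', 'worldwide']
# '1947' and '250' start with a digit, and 'in' / 'he' are shorter than
# three characters, so none of them survive tokenisation.
```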
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
bmcfee/pescador
examples/Pescador demo.ipynb
isc
[ "Pescador demo\nThis notebook illustrates some of the basic functionality of pescador: a package to facilitate iterative learning from data streams (implemented as python generators).", "import pescador\n\nimport numpy as np\nnp.set_printoptions(precision=4)\nimport sklearn\nimport sklearn.datasets\nimport sklearn.linear_model\nimport sklearn.metrics\nimport sklearn.model_selection\n\ndef batch_sampler(X, Y, batch_size=20, scale = 1e-1):\n '''A gaussian noise generator for data\n \n Parameters\n ----------\n X : ndarray\n features, n_samples by dimensions\n \n Y : ndarray\n labels, n_samples\n \n batch_size : int\n size of the minibatches to generate\n \n scale : float > 0\n scale of the noise to add\n \n Generates\n ---------\n data\n An infinite stream of data dictionaries\n batch = dict(X=X[i], Y=Y[i])\n '''\n \n X = np.atleast_2d(X)\n Y = np.atleast_1d(Y)\n\n \n n, d = X.shape\n \n while True:\n i = np.random.randint(0, n, size=batch_size)\n \n noise = scale * np.random.randn(batch_size, d)\n \n yield {'X': X[i] + noise, 'Y': Y[i]}\n\n# Load up the iris dataset for the demo\ndata = sklearn.datasets.load_iris()\nX, Y = data.data, data.target\nclasses = np.unique(Y)\n\n# What does the data stream look like?\n\n# First, we'll wrap the generator function in a Streamer object.\n# This is necessary for a few reasons, notably so that we can re-instantiate\n# the generator multiple times (eg once per epoch)\nbatches = pescador.Streamer(batch_sampler, X, Y)\n\nfor q in batches(max_iter=3):\n print(q)", "Benchmarking\nWe can benchmark our learner's efficiency by running a couple of experiments on the Iris dataset.\nOur classifier will be L1-regularized logistic regression.", "%%time\nss = sklearn.model_selection.ShuffleSplit(n_splits=2, test_size=0.2)\nfor train, test in ss.split(np.arange(len(X))):\n \n # Make an SGD learner, nothing fancy here\n classifier = sklearn.linear_model.SGDClassifier(verbose=0, \n loss='log',\n penalty='l1', \n n_iter=1)\n \n # Again, build a streamer object\n batches = pescador.Streamer(batch_sampler, X[train], Y[train])\n\n # And train the model on the stream.\n n_steps = 0\n for batch in batches(max_iter=5e3):\n classifier.partial_fit(batch['X'], batch['Y'], classes=classes)\n \n n_steps += 1\n \n # How's it do on the test set?\n print('Test-set accuracy: {:.3f}'.format(sklearn.metrics.accuracy_score(Y[test], classifier.predict(X[test]))))\n print('# Steps: ', n_steps)", "Parallelism\nIt's possible that the learner is more or less efficient than the data generator. 
If the data generator has higher latency than the learner (SGDClassifier), then this will slow down the learning.\nPescador uses zeromq to parallelize data stream generation, effectively decoupling it from the learner.", "%%time\nss = sklearn.model_selection.ShuffleSplit(n_splits=2, test_size=0.2)\nfor train, test in ss.split(np.arange(len(X))):\n \n # Make an SGD learner, nothing fancy here\n classifier = sklearn.linear_model.SGDClassifier(verbose=0, \n loss='log',\n penalty='l1', \n n_iter=1)\n \n # First, turn the data_generator function into a Streamer object\n batches = pescador.Streamer(batch_sampler, X[train], Y[train])\n \n # Then, send this thread to a second process\n zmq_stream = pescador.ZMQStreamer(batches, 5156)\n \n # And train the model on the stream.\n n_steps = 0\n for batch in zmq_stream(max_iter=5e3):\n classifier.partial_fit(batch['X'], batch['Y'], classes=classes)\n \n n_steps += 1\n \n # How's it do on the test set?\n print('Test-set accuracy: {:.3f}'.format(sklearn.metrics.accuracy_score(Y[test], classifier.predict(X[test]))))\n print('# Steps: ', n_steps)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
LSSTTVS/WhitepaperNotebooks
notebooks/k_consecutive_visits.ipynb
bsd-3-clause
[ "Calculate the time gap between three consecutive visits in each filter, and generate a set of combined histograms with all filters. Derived from sims_maf_contrib/tutorials/Plotting Examples.ipynb", "# Import modules.\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport lsst.sims.maf.db as db\nimport lsst.sims.maf.metrics as metrics\nimport lsst.sims.maf.slicers as slicers\nimport lsst.sims.maf.plots as plots\nfrom lsst.sims.maf.metricBundles import MetricBundle, MetricBundleGroup, makeBundlesDictFromList\nfrom mafContrib import kConsecutiveGapMetric\n\n# Connect to databases.\nrunName = 'minion_1016'\nopsdb = db.OpsimDatabase(runName + '_sqlite.db')\noutDir = 'allfilters_test'\nresultsDb = db.ResultsDb(outDir=outDir)", "Set up and run non-dithered metric bundles. Use a lower value of nside to make the notebook run faster, although at lower spatial resolution.", "nside = 16\n# Set up metrics, slicer and summaryMetrics.\nm1 = kConsecutiveGapMetric(k=2)\nm2 = metrics.AveGapMetric()\nslicer = slicers.HealpixSlicer(nside=nside)\nsummaryMetrics = [metrics.MinMetric(), metrics.MeanMetric(), metrics.MaxMetric(), \n metrics.MedianMetric(), metrics.RmsMetric(), \n metrics.PercentileMetric(percentile=25), metrics.PercentileMetric(percentile=75)]\n# And I'll set a plotDict for the nvisits and coadded depth, because otherwise the DD fields throw the \n# scale in the plots into too wide a range. \n# (we could also generate plots, see this effect, then set the dict and regenerate the plots)\n#nvisitsPlotRanges = {'xMin':0, 'xMax':300, 'colorMin':0, 'colorMax':300, 'binsize':5}\n#coaddPlotRanges = {'xMin':24, 'xMax':28, 'colorMin':24, 'colorMax':28, 'binsize':0.02}\n\nfilterlist = ['u', 'g', 'r', 'i', 'z', 'y']\nfilterorder = {'u':0, 'g':1, 'r':2, 'i':3, 'z':4, 'y':5}\n\n# Create metricBundles for each filter. 
\n# For ease of access later, I want to make a dictionary with 'kgap[filter]' first.\nkgap = {}\navegap = {}\nfor f in filterlist:\n sqlconstraint = 'filter = \"%s\"' %(f)\n # Add displayDict stuff that's useful for showMaf to put things in \"nice\" order.\n displayDict = {'subgroup':'Undithered', 'order':filterorder[f], 'group':'kgap'}\n kgap[f] = MetricBundle(m1, slicer, sqlconstraint=sqlconstraint, runName=runName,\n summaryMetrics=summaryMetrics, #plotDict=nvisitsPlotRanges,\n displayDict=displayDict)\n displayDict['group'] = 'AveGap'\n avegap[f] = MetricBundle(m2, slicer, sqlconstraint=sqlconstraint, runName=runName,\n summaryMetrics=summaryMetrics, #plotDict=nvisitsPlotRanges,\n displayDict=displayDict)\n \nblistAll = []\nfor f in filterlist:\n blistAll.append(kgap[f])\n blistAll.append(avegap[f])\nbdict = makeBundlesDictFromList(blistAll)\n# Set the metricBundleGroup up with all metricBundles, in all filters.\nbgroup = MetricBundleGroup(bdict, opsdb, outDir=outDir, resultsDb=resultsDb)\nbgroup.runAll()\nbgroup.writeAll()\nbgroup.plotAll()\n\nprint 'Kgap --'\nfor f in filterlist:\n print kgap[f].summaryValues\nprint 'Avegap --'\nfor f in filterlist:\n print avegap[f].summaryValues", "Now let's try to combine the histograms.", "# Set more complicated plot labels directly in the bundles.\nfor f in filterlist:\n kgap[f].setPlotDict({'label':'%s %1.f/%.1f/%1.f' %(f, kgap[f].summaryValues['25th%ile'], \n kgap[f].summaryValues['Median'], \n kgap[f].summaryValues['75th%ile'])})\n\n# Set up the plotHandler.\nph = plots.PlotHandler(outDir=outDir, resultsDb=resultsDb)\n# Instantiate the healpix histogram plotter, since we'll use it a lot.\nhealpixhist = plots.HealpixHistogram()\nph.setMetricBundles(kgap)\n# Add min/max values to the plots, which will be used for the combo histogram for nvisits.\n#ph.setPlotDicts(nvisitsPlotRanges)\nph.plot(plotFunc=healpixhist)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
tdeoskar/NLP1-2017
lab3/lab3.ipynb
gpl-3.0
[ "Lab 3: Constituency parsing with CKY\nThe grammatical structure of a sentence can be represented with a Context Free Grammar (CFG). When we additionally assign probabilities to the rules of the CFG we get a PCFG: a Probabilistic CFG. \nGiven a sufficiently expressive PCFG (one that holds enough rules) we can parse new sentences using the Cocke–Kasami–Younger (CKY) algorithm. You can use this algorithm in three ways: to find the set of all the possible parses $p$ of a sentence $s$ under a PCFG $G$; to find the probability of the sentence by summing up the probabilities of these parses; or to find the parse $p^{*}$ of the highest probability.\nTasks\n\n\nIn this notebook you will learn how to represent a PCFG in an object-oriented manner as a collection of python classes. These classes are already defined for you. Read them through thoroughly and make sure that you understand them well. You have to use them in task 2.\n\n\nImplement the CKY algorithm to find the most probable parse $p^{*}$ for a sentence. Your implementation will follow the psuedo-code that is given in both the lecture slides, and Jurafsky and Martin.\n\n\nThe reference for this notebook is chapters 13 and 14 of Jurafsky and Martin (both in the 2nd and 3rd edition) and the slides from week 3.\n\nRules\n\n\nThe lab exercises should be made in groups of two people.\n\n\nThe deadline is Tuesday 5 December 16:59.\n\n\nThe assignment should submitted to Blackboard as .ipynb. Only one submission per group!\n\n\nThe filename should be lab3_lastname1_lastname2.ipynb, so for example lab3_Manning_Schuetze.ipynb. Notebooks that do not follow this format will not be graded.\n\n\nThe notebook is graded on a scale of 0-100. The number of points for each question is indicated in parantheses.\n\n\nThere are no BONUS questions in this lab.\n\n\nNotes on implementation:\n\n\nYou should write your code and answers in this iPython Notebook (see http://ipython.org/notebook.html for reference material). If you have problems, please contact your teaching assistant.\n\n\nUse only one cell for code and one cell for markdown answers. Do not add these cells yourself, but use the ones that already exist in the notebook.\n\n\nPut all code in the cell with the # YOUR CODE HERE comment.\n\n\nFor theoretical question, put your solution in the YOUR ANSWER HERE cell.\n\n\nTest your code and make sure we can run your notebook", "import numpy as np\nfrom collections import Counter, defaultdict\nimport math\nimport nltk\nfrom nltk.tree import Tree", "1. PCFG\nIn this lab we will show you a way to represent a PCFG using python objects. We will introduce the following classes:\n\nSymbol\nTerminal\nNonterminal\n\n\nRule\n\nAt first glance, this might seem like a lot of work. But, hopefully, by the time you get to implementing CKY you will be confinced of the benefits of these constructions.\nSymbol\nRecall that:\n* Terminal symbols are the words of the sentence: I, ate, salad, the etc.\n* Nonterminal symbols are the syntactic categories of the various constituents: S, NP, VP, Det etc.\nIn our representation, Symbol is going to be a container class. The classes Terminal and Nonterminal will inherit from the Symbol class and will hence both become a type of symbol. 
The classes themselves are effectively a container for the underlying python strings.", "class Symbol:\n \"\"\"\n A symbol in a grammar.\n This class will be used as parent class for Terminal, Nonterminal.\n This way both will be a type of Symbol.\n \"\"\"\n def __init__(self):\n pass\n\n\nclass Terminal(Symbol):\n \"\"\"\n Terminal symbols are words in a vocabulary\n \n E.g. 'I', 'ate', 'salad', 'the'\n \"\"\"\n\n def __init__(self, symbol: str):\n assert type(symbol) is str, 'A Terminal takes a python string, got %s' % type(symbol)\n self._symbol = symbol\n\n def is_terminal(self):\n return True\n\n def is_nonterminal(self):\n return False\n\n def __str__(self):\n return \"'%s'\" % self._symbol\n\n def __repr__(self):\n return 'Terminal(%r)' % self._symbol\n\n def __hash__(self):\n return hash(self._symbol)\n\n def __len__(self):\n \"\"\"The length of the underlying python string\"\"\"\n return len(self._symbol)\n\n def __eq__(self, other):\n return type(self) == type(other) and self._symbol == other._symbol\n\n def __ne__(self, other):\n return not (self == other)\n \n @property\n def obj(self):\n \"\"\"Returns the underlying python string\"\"\"\n return self._symbol\n\n\nclass Nonterminal(Symbol):\n \"\"\"\n Nonterminal symbols are the grammatical classes in a grammar.\n \n E.g. S, NP, VP, N, Det, etc.\n \"\"\"\n\n def __init__(self, symbol: str):\n assert type(symbol) is str, 'A Nonterminal takes a python string, got %s' % type(symbol)\n self._symbol = symbol\n\n def is_terminal(self):\n return False\n \n def is_nonterminal(self):\n return True\n\n def __str__(self):\n return \"[%s]\" % self._symbol\n\n def __repr__(self):\n return 'Nonterminal(%r)' % self._symbol\n\n def __hash__(self):\n return hash(self._symbol)\n \n def __len__(self):\n \"\"\"The length of the underlying python string\"\"\"\n return len(self._symbol)\n \n def __eq__(self, other):\n return type(self) == type(other) and self._symbol == other._symbol\n\n def __ne__(self, other):\n return not (self == other)\n \n @property\n def obj(self):\n \"\"\"Returns the underlying python string\"\"\"\n return self._symbol\n", "Let's try out the classes by initializing some terminal an nonterminal symbols:", "dog = Terminal('dog')\nthe = Terminal('the')\nwalks = Terminal('walks')\n\nS = Nonterminal('S')\nNP = Nonterminal('NP')\nNP_prime = Nonterminal('NP')\nVP = Nonterminal('VP')\nV = Nonterminal('V')\nN = Nonterminal('N')\nDet = Nonterminal('Det')", "The methods __eq__ and __ne__ make it possible to compare our objects using standard Python syntax. But more importantly: compare in the way that we are interested in, namely whether the underlying representation is the same.\nTo see the difference, try commenting out the method __eq__ in the class above, and notice different result of the equality test NP==NP_prime.", "print(dog)\nprint(NP)\nprint()\nprint(NP==Det)\nprint(NP!=Det)\nprint(NP==NP)\nprint(NP==NP_prime)", "Note the difference between calling print(NP) and simply calling NP. The first is taken care of by the method __str__ and the second by the method __repr__.", "dog", "We can also easily check if our symbol is a terminal or not:", "dog.is_terminal()\n\nNP.is_terminal()", "Finally the method __hash__ makes our object hashable, and hence usable in a datastructure like a dictionary. 
\nTry commenting out this method above in the class and then retry constructing the dictionary: notice the error.", "d = {NP: 1, S: 2}\nd", "Rules\nIn a PCFG a rule looks something like this \n$$NP \\to Det\\;N,$$\nwith a corresponding probability, for example $1.0$ (if we lived in a world where all noun phrases had this grammatical structure).\nIn our representation, Rule will be an object made of a left-hand side (lhs) symbol, a sequence of right-hand side symbols (rhs) and a probability prob. \nIf we use the above defined symbols, we can call\nrule = Rule(NP, [Det, N], 1.0).\n\nThis will construct an instance called rule which represent the rule above\n[NP] -&gt; [Det] [N] (1.0).", "class Rule:\n\n def __init__(self, lhs, rhs, prob):\n \"\"\"\n Constructs a Rule.\n A Rule takes a LHS symbol and a list/tuple of RHS symbols.\n\n :param lhs: the LHS nonterminal\n :param rhs: a sequence of RHS symbols (terminal or nonterminal)\n :param prob: probability of the rule\n \"\"\"\n\n assert isinstance(lhs, Symbol), 'LHS must be an instance of Symbol'\n assert len(rhs) > 0, 'If you want an empty RHS, use an epsilon Terminal EPS'\n assert all(isinstance(s, Symbol) for s in rhs), 'RHS must be a sequence of Symbol objects'\n self._lhs = lhs\n self._rhs = tuple(rhs)\n self._prob = prob\n\n\n def __eq__(self, other):\n return self._lhs == other._lhs and self._rhs == other._rhs and self._prob == other._prob\n\n def __ne__(self, other):\n return not (self == other)\n\n def __hash__(self):\n return hash((self._lhs, self._rhs, self._prob))\n\n def __repr__(self):\n return '%s -> %s (%s)' % (self._lhs,\n ' '.join(str(sym) for sym in self._rhs),\n self.prob)\n\n def is_binary(self):\n \"\"\"True if Rule is binary: A -> B C\"\"\"\n return len(self._rhs) == 2\n \n def is_unary(self):\n \"\"\"True if Rule is unary: A -> w\"\"\"\n return len(self._rhs) == 1\n \n @property\n def lhs(self):\n \"\"\"Returns the lhs of the rule\"\"\"\n return self._lhs\n\n @property\n def rhs(self):\n \"\"\"Returns the rhs of the rule\"\"\"\n return self._rhs\n\n @property\n def prob(self):\n \"\"\"Returns the probability of the rule\"\"\"\n return self._prob\n", "Just as with Terminal and Nonterminal you can print an instance of Rule, you can access its attributes, and you can hash rules with containers such as dict and set.", "r1 = Rule(S, [NP, VP], 1.0)\nr2 = Rule(NP, [Det, N], 1.0)\nr3 = Rule(N, [dog], 1.0)\nr4 = Rule(Det, [the], 1.0)\nr5 = Rule(VP, [walks], 1.0)\n\nprint(r1)\nprint(r2)\nprint(r3)\nprint(r4)\n\nprint(r1.prob)\n\nr1 in set([r1])\n\nd = {r1: 1, r2: 2}\nd", "Grammar\nA PCFG is a container for Rules. 
The Rules are stored in the PCFG in such a way that they can be accesed easily in different ways.", "class PCFG(object):\n \"\"\"\n Constructs a PCFG.\n A PCFG stores a list of rules that can be accessed in various ways.\n \n :param rules: an optional list of rules to initialize the grammar with\n \"\"\"\n def __init__(self, rules=[]):\n self._rules = []\n self._rules_by_lhs = defaultdict(list)\n self._terminals = set()\n self._nonterminals = set()\n for rule in rules:\n self.add(rule)\n\n def add(self, rule):\n \"\"\"Adds a rule to the grammar\"\"\"\n if not rule in self._rules:\n self._rules.append(rule)\n self._rules_by_lhs[rule.lhs].append(rule)\n self._nonterminals.add(rule.lhs)\n for s in rule.rhs:\n if s.is_terminal():\n self._terminals.add(s)\n else:\n self._nonterminals.add(s)\n\n def update(self, rules):\n \"\"\"Add a list of rules to the grammar\"\"\"\n for rule in rules:\n self.add(rule)\n\n @property\n def nonterminals(self):\n \"\"\"The list of nonterminal symbols in the grammar\"\"\"\n return self._nonterminals\n\n @property\n def terminals(self):\n \"\"\"The list of terminal symbols in the grammar\"\"\"\n return self._terminals\n \n @property\n def rules(self):\n \"\"\"The list of rules in the grammar\"\"\"\n return self._rules\n \n @property\n def binary_rules(self):\n \"\"\"The list of binary rules in the grammar\"\"\"\n return [rule for rule in self._rules if rule.is_binary()]\n \n @property\n def unary_rules(self):\n \"\"\"The list of unary rules in the grammar\"\"\"\n return [rule for rule in self._rules if rule.is_unary()]\n\n def __len__(self):\n return len(self._rules)\n\n def __getitem__(self, lhs):\n return self._rules_by_lhs.get(lhs, frozenset())\n\n def get(self, lhs, default=frozenset()):\n \"\"\"The list of rules whose LHS is the given symbol lhs\"\"\"\n return self._rules_by_lhs.get(lhs, frozenset())\n\n def __iter__(self):\n \"\"\"Iterator over rules (in arbitrary order)\"\"\"\n return iter(self._rules)\n\n def iteritems(self):\n \"\"\"Iterator over pairs of the kind (LHS, rules rewriting LHS)\"\"\"\n return self._rules_by_lhs.items()\n\n def __str__(self):\n \"\"\"Prints the grammar line by line\"\"\"\n lines = []\n for lhs, rules in self.iteritems():\n for rule in rules:\n lines.append(str(rule))\n return '\\n'.join(lines)\n", "Initialize a grammar", "G = PCFG()", "We can add rules individually with add, or as a list with update:", "G.add(r1)\nG.update([r2,r3,r4,r5])", "We can print the grammar", "print(G)", "We can get the set of rewrite rules for a certain LHS symbol.", "G.get(S)\n\nG.get(NP)", "We can also iterate through rules in the grammar.\nNote that the following is basically counting how many rules we have in the grammar.", "sum(1 for r in G)", "which can also be done in a more efficient way", "len(G)", "We can access the set of terminals and nonterminals of the grammar:", "print(G.nonterminals)\n\nprint(G.terminals)\n\nS in G.nonterminals\n\ndog in G.terminals", "Finally we can easily access all the binary rules and all the unary rules in the grammar:", "G.unary_rules\n\nG.binary_rules", "For the following sections you will need to have the Natural Language Toolkit (NLTK) installed. We will use a feature of the NLTK toolkit that lets you draw constituency parses. 
Details for download can be found here: http://www.nltk.org/install.html.\n\nVisualizing a tree\nFor the sake of legacy let's reiterate an age-old NLP schtick, the well-known example of structural ambiguity from the Groucho Marx movie, Animal Crackers (1930):\n\nOne morning I shot an elephant in my pajamas. How he got into my pajamas, I don't know.\n\nLet's take a closer look at the ambiguity in the phrase: I shot an elephant in my pajamas. The ambiguity is caused by the fact that the sentence has two competing parses represented in:\n(S (NP I) (VP (VP (V shot) (NP (Det an) (N elephant))) (PP (P in) (NP (Det my) (N pajamas)))))\n\nand\n(S (NP I) (VP (V shot) (NP (Det an) (NP (N elephant) (PP (P in) (NP (Det my) (N pajamas)))))))\n\nWe can write these parses down as strings and then let NLTK turn them into trees using the NLTK Tree class. (See http://www.nltk.org/api/nltk.html#nltk.tree.Tree as reference for this class, if you want to know more.)", "parse1 = \"(S (NP I) (VP (VP (V shot) (NP (Det an) (N elephant))) (PP (P in) (NP (Det my) (N pajamas)))))\"\nparse2 = \"(S (NP I) (VP (V shot) (NP (Det an) (NP (N elephant) (PP (P in) (NP (Det my) (N pajamas)))))))\"\n\npajamas1 = Tree.fromstring(parse1)\npajamas2 = Tree.fromstring(parse2)", "We can then pretty-print these trees:", "pajamas1.pretty_print()\npajamas2.pretty_print()", "Parsing with CKY\nLet's stick with this sentence for the rest of this lab. We will use CKY to find the 'best' parse for this sentence.", "# Turn the sentence into a list\nsentence = \"I shot an elephant in my pajamas\".split()\n# The length of the sentence\nnum_words = len(sentence)", "A PCFG for this sentence can be found in the file groucho-grammar.txt. We read this in with the function read_grammar_rules.", "def read_grammar_rules(istream):\n \"\"\"Reads grammar rules formatted as 'LHS ||| RHS ||| PROB'.\"\"\"\n for line in istream:\n line = line.strip()\n if not line:\n continue\n fields = line.split('|||')\n if len(fields) != 3:\n raise ValueError('I expected 3 fields: %s', fields)\n lhs = fields[0].strip()\n\n if lhs[0] == '[':\n lhs = Nonterminal(lhs[1:-1])\n else:\n lhs = Terminal(lhs)\n rhs = fields[1].strip().split()\n new_rhs = []\n for r in rhs:\n if r[0] == '[':\n r = Nonterminal(r[1:-1])\n else:\n r = Terminal(r)\n new_rhs.append(r)\n\n prob = float(fields[2].strip())\n yield Rule(lhs, new_rhs, prob)\n\n# Read in the grammar\nistream = open('groucho-grammar-1.txt')\ngrammar = PCFG(read_grammar_rules(istream))\nprint(\"The grammar:\\n\", grammar, \"\\n\")", "We will also need the following two dictionaries: nonterminal2index mapping from nonterminals to integers (indices); and its inverse, an index2nonterminal dictionary.", "num_nonterminals = len(grammar.nonterminals)\n\n# Make a nonterminal2index and a index2nonterminal dictionary\nn2i = defaultdict(lambda: len(n2i))\ni2n = dict()\nfor A in grammar.nonterminals:\n i2n[n2i[A]] = A\n\n# Stop defaultdict behavior of n2i\nn2i = dict(n2i)\n\nn2i", "The charts\nNow we are ready to introduce the chart datastructures. We need a chart to store the scores and a chart to store the backpointers.\nBoth of these will be 3-dimensional numpy arrays: one named score (also named table in J&M) holding the probabilities of intermediate results; one named back to store the backpointers in. 
We will use the following indexing convention for these charts:\n\n\nFormat for the chart holding the scores: \n score[A][begin][end] = probability (naming as in slides)\n table[A][i][j] = probability (naming as in J&amp;M)\n\n\n\nFormat for the chart holding the backpointers:\n back[A][begin][end] = (split,B,C) (naming as in slides)\n back[A][i][j] = (k,B,C) (naming as in J&amp;M)\n\n\n\nThis indexing convention is convenient for printing. See what happens when we print back below: we get num_nonterminal slices, each a numpy array of shape [n_words+1, n_words+1]. This is easier to read than the format table[i][j][A].\n[Note] Here we pretended A is both the nonterminal as well as the index. In actual fact, in our implementation A will be the nonterminal and the index for A will be n2i[A].\nLet's show you what we mean:", "# A numpy array zeros\nscore = np.zeros((num_nonterminals,\n num_words + 1, \n num_words + 1))\n\n# A numpy array that can store arbitrary data (we set dtype to object)\nback = np.zeros((num_nonterminals,\n num_words + 1, \n num_words + 1), dtype=object)", "The following illustrates the way you will use the back chart. In this example, your parser recognized that the words between 0 and 2 form an NP and the words between 2 and the end of the sentence form a VP (and nothing else yet):", "# Illustration of the backpointer array\nback[n2i[S]][0][-1] = (2,NP,VP) \nback", "Exercise 1. (80 points)\nImplement the CKY algorithm. Follow the pseudo-code given in the lecture-slides (or alternatively in J&M). The code must comply to the following:\n\nThe function cky takes a sentence (list of words) a grammar (an instance of PCFG) and a n2i nonterminals2index dictionary.\nThe function cky returns the filled-in score-chart and backpointer-chart, following the format established above.\n\n[Hint] This is the moment to make good use of the methods of the classes PCFG, Rule, Nonterminal, and Terminal!", "def cky(sentence, grammar, n2i):\n \"\"\"\n The CKY algorithm.\n \n Follow the pseudocode from the slides (or J&M).\n \n :param sentence: a list of words\n :param grammar: an instance of the class PCFG\n :param n2i: a dictionary mapping from Nonterminals to indices\n :return score: the filled in scores chart\n :return back: the filled in backpointers chart \n \"\"\"\n num_words = len(sentence)\n num_nonterminals = len(grammar.nonterminals)\n \n # A numpy array to store the scores of intermediate parses\n score = np.zeros((num_nonterminals,\n num_words + 1, \n num_words + 1))\n\n # A numpy array to store the backpointers\n back = np.zeros((num_nonterminals,\n num_words + 1, \n num_words + 1), dtype=object)\n \n\n # YOUR CODE HERE\n raise NotImplementedError\n \n return score, back\n\n# Run CKY\nscore, back = cky(sentence, grammar, n2i)", "Check your CKY\nUse the code in the following two cell to check your cky implementation.\nTake the Nonterminal S to inspect your filled in score and backpointer charts. Leave the code in this cell unchanged. We will use this to evaluate the corectness your cky function.", "### Don't change the code in this cell. ###\n\nS = Nonterminal('S')\n\nprint('The whole slice for nonterminal S:')\nprint(score[n2i[S]], \"\\n\")\n\nprint('The score in cell (S, 0, num_words), which is the probability of the best parse:')\nprint(score[n2i[S]][0][num_words], \"\\n\")\n\nprint('The backpointer in cell (S, 0, num_words):')\nprint(back[n2i[S]][0][num_words], \"\\n\")", "Exercise 2. 
(20 points)\nWrite the function build_tree that reconstructs the parse from the backpointer table. This is the function that is called in the return statement of the pseudo-code in Jurafsky and Martin.\n[Note] This is a challenging exercise! And we have no pseudocode for you here: you must come up with your own implementation. On the other hand, it will also constitute just the last 20 points of your grade, so don't worry too much if you can't finish it. If you finished exercise 1 you already have an 8 for this lab!\nHere is some additional advice:\n\n\nUse recursion - that is, write your function in a recursive way. \n\nWhat is the base case? Hint: $A \to w$.\nWhat is the recursive case? Hint: $A \to B\; C$.\n\n\n\nUse the additional class Span that we introduce below for the symbols in your recovered rules.\n\nRead the documentation in this class for its usage.\n\n\n\nIf you want to use the function make_nltk_tree that we provide (and that turns a derivation into an NLTK tree so that you can draw it) your function must return the list of rules in the derivation ordered depth-first. \n\nIf you write your function recursively this should happen automatically.\n\n\n\nThe following class will be very useful in your solution for the function build_tree.", "class Span(Symbol):\n \"\"\"\n A Span indicates that a symbol was recognized between begin and end.\n \n Example:\n Span(Terminal('the'), 0, 1)\n This means: we found 'the' in the sentence between 0 and 1\n Span(Nonterminal('NP'), 4, 8) represents NP:4-8\n This means: we found an NP that covers the part of the sentence between 4 and 8\n \n Thus, Span holds a Terminal or a Nonterminal and wraps it between two integers. \n This makes it possible to distinguish between two instances of the same rule in the derivation.\n Example:\n We can find that the rule NP -> Det N is used twice in the parse derivation. But that in the first\n case it spans \"an elephant\" and in the second case it spans \"my pajamas\". We want to distinguish these. 
\n So: \"an elephant\" is covered by [NP]:2-4 -> [Det]:2-3 [N]:3-4\n \"my pajamas\" is covered by [NP]:5-7 -> [Det]:5-6 [N]:6-7\n \n Internally, we represent spans with tuples of the kind (symbol, start, end).\n \"\"\"\n\n def __init__(self, symbol, start, end):\n assert isinstance(symbol, Symbol), 'A span takes an instance of Symbol, got %s' % type(symbol)\n self._symbol = symbol\n self._start = start\n self._end = end\n\n def is_terminal(self):\n # a span delegates this to an underlying symbol\n return self._symbol.is_terminal()\n\n def root(self):\n # Spans are hierarchical symbols, thus we delegate\n return self._symbol.root()\n\n def obj(self):\n \"\"\"The underlying python tuple (Symbol, start, end)\"\"\"\n return (self._symbol, self._start, self._end)\n\n def translate(self, target):\n return Span(self._symbol.translate(target), self._start, self._end)\n\n def __str__(self):\n \"\"\"Prints Symbol with span if Symbol is Nonterminal else without (purely aesthetic distinction)\"\"\"\n if self.is_terminal():\n return \"%s\" % (self._symbol)\n else: \n return \"%s:%s-%s\" % (self._symbol, self._start, self._end)\n \n def __repr__(self):\n return 'Span(%r, %r, %r)' % (self._symbol, self._start, self._end)\n\n def __hash__(self):\n return hash((self._symbol, self._start, self._end))\n\n def __eq__(self, other):\n return type(self) == type(other) and self._symbol == other._symbol and self._start == other._start and self._end == other._end\n\n def __ne__(self, other):\n return not (self == other)\n", "Example usage of Span:", "span_S = Span(S, 0, 10)\nprint(span_S)\nspan_S = Span(dog, 4, 5)\nprint(span_S)\n\nspanned_rule = Rule(Span(NP, 2, 4), [Span(Det, 2, 3), Span(NP, 3, 4)], prob=None)\nprint(spanned_rule)", "Your final derivation should look like this:\n\n(Note that the rule probabilities are set to None. These are not saved in the backpointer chart so cannot be retrieved at the recovering stage. They also don't matter at this point, so you can set them to None.)\nIf you give this derivation to the functionmake_nltk_tree and then let NLTK draw it, then you get this tree:\n\nThe exercise", "def build_tree(back, sentence, root, n2i):\n \"\"\"\n Reconstruct the viterbi parse from a filled-in backpointer chart.\n \n It returns a list called derivation which hols the rules that. If you \n want to use the function make_nltk_tree you must make sure that the\n \n :param back: a backpointer chart of shape [num_nonterminals, num_words+1, num_words+1]\n :param sentence: a list of words\n :param root: the root symbol of the tree: Nonterminal('S')\n :param n2i: the dictionary mapping from Nonterminals to indices\n :return derivation: a derivation: a list of Rules with Span symbols that generate the Viterbi tree. 
\n If you want to draw them with the function that we provide, then this list \n should be ordered depth first!\n \"\"\"\n derivation = []\n num_words = len(sentence)\n\n # YOUR CODE HERE\n raise NotImplementedError\n \n return derivation", "Get your derivation:", "derivation = build_tree(back, sentence, S, n2i)\nderivation", "Turn the derivation into an NLTK tree:", "def make_nltk_tree(derivation):\n \"\"\"\n Return a NLTK Tree object based on the derivation\n (list or tuple of Rules)\n \"\"\"\n d = defaultdict(None, ((r.lhs, r.rhs) for r in derivation))\n\n def make_tree(lhs):\n return Tree(str(lhs), (str(child) if child not in d else make_tree(child) for child in d[lhs]))\n\n return make_tree(derivation[0].lhs)\n\ntree = make_nltk_tree(derivation)\ntree.pretty_print()", "That's it!\nCongratulations, you have made it to the end of the lab.\nMake sure all your cells are executed so that all your answers are there. Then, continue if you're interested!\n\nOptional\nIf you managed to get your entire CKY-parser working and have an appetite for more, it might be fun to try it on some more sentences and grammars. Give the grammars below a try!\nAlternative Groucho-grammar\nIf you change the probabilities in the grammar, you'll get a different parse as most likely one. Compare groucho-grammar-1.txt with groucho-grammar-2.txt and spot the difference in probabilities.", "# YOUR CODE HERE", "The man with the telescope\nAnother ambiguous sentence:\n\nI saw the man on the hill with the telescope.\n\nA grammar for this sentence is specified in the file telescope-grammar.txt.", "# YOUR CODE HERE" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jdhaines/ProcessTechnology
Spartanburg/breakout.ipynb
mit
[ "Spartanburg Breakout Data Set\nAnalysis with Machine-Learning, SK-Learn, and Python\nData Preparation\nOur data is in a file called \"breakout_data.csv\"\nWe'll use Python to grab it and process it into a usable numpy array", "import numpy as np # get numpy package\ndata = np.genfromtxt(fname='breakout_data.csv', # data filename\n dtype=None, # figure out the data type by column\n delimiter=',', # delimit on commas\n names=True, # first line contains column names\n )", "Now our data is in a nice numpy ndarray. We can access it using the numpy methods. For example:\nWe can print the headers and the number of columns...", "column_headers = data.dtype.names\nprint(column_headers) # print the column headers\nprint('Number of columns: {}'.format(len(column_headers)))", "We can also print specific rows of data...", "print('The first row of data is: \\n{}'.format(data[0])) # print the first row\nprint('\\n') # print a blank line\nprint('and the last row of data is: \\n{}'.format(data[len(data)-1])) # print the last row" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
sympy/scipy-2017-codegen-tutorial
notebooks/_37-chemical-kinetics-numba.ipynb
bsd-3-clause
[ "NOTE\nThis notebook doesn't work yet. I have previously written my own version of lambdify here.\nI don't know if that's the path to take, or whether to wait for the next release of numba.\nIn this notebook we will use numba to increase the performance of our callbacks produced by lambdify in SymPy.", "import json\nimport numpy as np\nimport sympy as sym\nfrom scipy2017codegen.odesys import ODEsys\nfrom scipy2017codegen.chem import mk_rsys", "The ODEsys class and convenience functions from the previous notebook (35) have been put in two modules for easy importing. Recapping what we did last:", "watrad_data = json.load(open('../scipy2017codegen/data/radiolysis_300_Gy_s.json'))\nwatrad = mk_rsys(ODEsys, **watrad_data)\ntout = np.logspace(-6, 3, 200) # close to one hour of operation\nc0 = {'H2O': 55.4e3, 'H+': 1e-4, 'OH-': 1e-4}\ny0 = [c0.get(symb.name, 0) for symb in watrad.y]\n\n%timeit yout, info = watrad.integrate_odeint(tout, y0)", "So that is the benchmark to beat.", "from numba import njit\nwatrad_numba = mk_rsys(ODEsys, **watrad_data, lambdify=lambda *args: njit(sym.lambdify(*args, modules=\"numpy\")))\nwatrad_numba.integrate_odeint(tout, y0)\n\n%timeit watrad_numba.integrate_odeint(tout, y0)\n\nimport matplotlib.pyplot as plt\n%matplotlib inline", "Just to see that everything looks alright:", "fig, ax = plt.subplots(1, 1, figsize=(14, 6))\nwatrad_numba.plot_result(tout, *watrad_numba.integrate_odeint(tout, y0), ax=ax)\nax.set_xscale('log')\nax.set_yscale('log')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
rusucosmin/courses
ml/ex05/template/ex05.ipynb
mit
[ "# Useful starting lines\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\n%load_ext autoreload\n%autoreload 2", "Logistic Regression\nClassification Using Linear Regression\nLoad your data.", "from helpers import sample_data, load_data, standardize\n\n# load data.\nheight, weight, gender = load_data()\n\n# build sampled x and y.\nseed = 1\ny = np.expand_dims(gender, axis=1)\nX = np.c_[height.reshape(-1), weight.reshape(-1)]\ny, X = sample_data(y, X, seed, size_samples=200)\nx, mean_x, std_x = standardize(X)", "Use least_squares to compute w, and visualize the results.", "from least_squares import least_squares\nfrom plots import visualization\n\ndef least_square_classification_demo(y, x):\n # ***************************************************\n # INSERT YOUR CODE HERE\n # classify the data by linear regression: TODO\n # ***************************************************\n tx = np.c_[np.ones((y.shape[0], 1)), x]\n # w = least squares with respect to tx\n err, w = least_squares(y, tx)\n print(f\"MSE: {err}\")\n visualization(y, x, mean_x, std_x, w, \"classification_by_least_square\")\n \nleast_square_classification_demo(y, x)", "Logistic Regression\nCompute your cost by negative log likelihood.", "def sigmoid(t):\n \"\"\"apply sigmoid function on t.\"\"\"\n return 1 / (1 + np.exp(-t))\n# sanity checks\nassert(sigmoid(0) == .5)\nassert(np.all(sigmoid(np.array([0, 0, 0])) == np.array([.5, .5, .5])))\n\ndef calculate_loss(y, tx, w):\n \"\"\"compute the cost by negative log likelihood.\"\"\"\n pred = tx @ w\n return -(y * np.log(sigmoid(pred)) + (1 - y) * np.log(1 - sigmoid(pred))).sum()\n\ndef calculate_gradient(y, tx, w):\n \"\"\"compute the gradient of loss.\"\"\"\n return tx.T @ (sigmoid(tx @ w) - y)", "Using Gradient Descent\nImplement your function to calculate the gradient for logistic regression.", "def learning_by_gradient_descent(y, tx, w, gamma):\n \"\"\"\n Do one step of gradient descen using logistic regression.\n Return the loss and the updated w.\n \"\"\"\n # ***************************************************\n # INSERT YOUR CODE HERE\n # compute the cost: TODO\n # ***************************************************\n loss = calculate_loss(y, tx, w)\n # ***************************************************\n # INSERT YOUR CODE HERE\n # compute the gradient: TODO\n # ***************************************************\n grad = calculate_gradient(y, tx, w)\n # ***************************************************\n # INSERT YOUR CODE HERE\n # update w: TODO\n # ***************************************************\n w = w - gamma * grad\n return loss, w", "Demo!", "from helpers import de_standardize\n\ndef logistic_regression_gradient_descent_demo(y, x):\n # init parameters\n max_iter = 10000\n threshold = 1e-8\n gamma = 0.01\n losses = []\n\n # build tx\n tx = np.c_[np.ones((y.shape[0], 1)), x]\n w = np.zeros((tx.shape[1], 1))\n\n # start the logistic regression\n for iter in range(max_iter):\n # get loss and update w.\n loss, w = learning_by_gradient_descent(y, tx, w, gamma)\n # log info\n if iter % 100 == 0:\n print(\"Current iteration={i}, loss={l}\".format(i=iter, l=loss))\n # converge criterion\n losses.append(loss)\n if len(losses) > 1 and np.abs(losses[-1] - losses[-2]) < threshold:\n break\n # visualization\n visualization(y, x, mean_x, std_x, w, \"classification_by_logistic_regression_gradient_descent\")\n print(\"loss={l}\".format(l=calculate_loss(y, tx, w)))\n\nlogistic_regression_gradient_descent_demo(y, x)", "Calculate your hessian below", "def 
calculate_hessian(y, tx, w):\n \"\"\"return the hessian of the loss function.\"\"\"\n S = np.diag((sigmoid(tx @ w) * (1 - sigmoid(tx @ w))).flatten())\n return (tx.T @ S) @ tx", "Write a function below to return loss, gradient, and hessian.", "def logistic_regression(y, tx, w):\n \"\"\"return the loss, gradient, and hessian.\"\"\"\n loss = calculate_loss(y, tx, w)\n gradient = calculate_gradient(y, tx, w)\n hessian = calculate_hessian(y, tx, w)\n return loss, gradient, hessian", "Using Newton's method\nUse Newton's method for logistic regression.", "def learning_by_newton_method(y, tx, w):\n \"\"\"\n Do one step on Newton's method.\n return the loss and updated w.\n \"\"\"\n loss, gradient, hessian = logistic_regression(y, tx, w)\n w = w - np.linalg.inv(hessian) @ gradient\n return loss, w", "demo", "def logistic_regression_newton_method_demo(y, x):\n # init parameters\n max_iter = 100\n threshold = 1e-8\n lambda_ = 0.1\n losses = []\n\n # build tx\n tx = np.c_[np.ones((y.shape[0], 1)), x]\n w = np.zeros((tx.shape[1], 1))\n\n # start the logistic regression\n for iter in range(max_iter):\n # get loss and update w.\n loss, w = learning_by_newton_method(y, tx, w)\n # log info\n if iter % 1 == 0:\n print(\"Current iteration={i}, the loss={l}\".format(i=iter, l=loss))\n # converge criterion\n losses.append(loss)\n if len(losses) > 1 and np.abs(losses[-1] - losses[-2]) < threshold:\n break\n # visualization\n visualization(y, x, mean_x, std_x, w, \"classification_by_logistic_regression_newton_method\")\n print(\"loss={l}\".format(l=calculate_loss(y, tx, w)))\n\nlogistic_regression_newton_method_demo(y, x)", "Using penalized logistic regression\nFill in the function below.", "def penalized_logistic_regression(y, tx, w, lambda_):\n \"\"\"return the loss, gradient, and hessian.\"\"\"\n loss, gradient, hessian = logistic_regression(y, tx, w)\n penalised_loss = loss + lambda_ * ((w ** 2).sum())\n return loss, gradient + 2 * lambda_ * w, hessian\n\ndef learning_by_penalized_gradient(y, tx, w, gamma, lambda_):\n \"\"\"\n Do one step of gradient descent, using the penalized logistic regression.\n Return the loss and updated w.\n \"\"\"\n # ***************************************************\n # INSERT YOUR CODE HERE\n # return loss, gradient: TODO\n # ***************************************************\n loss, gradient, hessian = penalized_logistic_regression(y, tx, w, lambda_)\n # ***************************************************\n # INSERT YOUR CODE HERE\n # update w: TODO\n # ***************************************************\n w = w - gamma * gradient\n return loss, w\n\ndef logistic_regression_penalized_gradient_descent_demo(y, x):\n # init parameters\n max_iter = 10000\n gamma = 0.01\n lambda_ = 0.01\n threshold = 1e-8\n losses = []\n\n # build tx\n tx = np.c_[np.ones((y.shape[0], 1)), x]\n w = np.zeros((tx.shape[1], 1))\n\n # start the logistic regression\n for iter in range(max_iter):\n # get loss and update w.\n loss, w = learning_by_penalized_gradient(y, tx, w, gamma, lambda_)\n # log info\n if iter % 100 == 0:\n print(\"Current iteration={i}, loss={l}, wnorm={wnorm}\".format(i=iter, l=loss, wnorm=(w ** 2).sum()))\n # converge criterion\n losses.append(loss)\n if len(losses) > 1 and np.abs(losses[-1] - losses[-2]) < threshold:\n break\n # visualization\n visualization(y, x, mean_x, std_x, w, \"classification_by_logistic_regression_penalized_gradient_descent\")\n print(\"loss={l} wnorm={wnorm}\".format(l=calculate_loss(y, tx, w), wnorm=(w ** 2).sum()))\n 
\nlogistic_regression_penalized_gradient_descent_demo(y, x)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
aryarohit07/machine-learning-with-python
linear_regression/linear_regression_gradient_descent_with_multiple_variables.ipynb
mit
[ "We will implement linear regression with multiple variables to predict the prices of houses. Suppose you are selling your house and you want to know what a good market price would be. One way to do this is to first collect information on recent houses sold and make a model of housing prices.\nThe file ex1data2.txt contains a training set of housing prices in Port- land, Oregon. The first column is the size of the house (in square feet), the second column is the number of bedrooms, and the third column is the price of the house.", "import pandas as pd\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n\ndf = pd.read_csv('ex1data2.txt', header=None)\n\nprint(df.head())\n\n#Lets try to visualize the data\nfig = plt.figure()\nax = Axes3D(fig)\nax.scatter(df[0], df[1], df[2])\nax.set_zlabel('price')\nplt.xlabel('size of the house (in square feet)')\nplt.ylabel('number of bedrooms')\nplt.show()\nprint('We have 47 houses data')\n\nimport numpy as np\n#Data preparation\n\n#We are not adding column of ones here because we want to normalize the features first\nX = df.drop([2], axis=1).values\ny = df[2].values\n\nprint(X[:1], y[:1])", "Now we will start with normalization of the features because size of the house is in different range as compared to number of bedrooms", "def featureNormalize(X):\n mu = X.mean(axis=0)\n sigma = X.std(axis=0)\n X_norm = (X - mu)/sigma\n return (X_norm, mu, sigma)", "Data Preparation", "X_norm, mu, sigm = featureNormalize(X)\n\n# now lets add ones to the input feature X for theta0\n\nones = np.ones((X_norm.shape[0], 1), float)\nX = np.concatenate((ones,X_norm), axis=1)\n\nprint(X[:1])\n\n#Cost function\ndef computeCostMulti(X, y, theta):\n m = X.shape[0]\n hypothesis = X.dot(theta) # h_theta = theta.T * x = theta0*x0 + theta1*x1 + ... + thetan*xn\n J = (1/(2*m)) * (np.sum(np.square(hypothesis-y))) \n return J\n\ntheta = np.zeros(X.shape[1])\nJ_cost = computeCostMulti(X, y, theta)\nprint('J_Cost', J_cost)\n\ndef gradientDescentMulti(X, y, theta, alpha, num_iters):\n m = X.shape[0]\n J_history = np.zeros(num_iters)\n for iter in np.arange(num_iters):\n h = X.dot(theta)\n theta = theta - alpha * (1/m) * X.T.dot(h-y)\n J_history[iter] = computeCostMulti(X, y, theta)\n return theta, J_history\n\nalpha = 0.01;\nnum_iters = 1000;\n\ntheta, J_history = gradientDescentMulti(X, y, theta, alpha, num_iters)\n\n#Lets plot something\nplt.xlim(0,num_iters)\nplt.plot(J_history)\nplt.ylabel('Cost J')\nplt.xlabel('Iterations')\nplt.show()\n\nprint(theta)", "Now lets predict prices of some houses and compare the result with scikit-learn prediction.", "from sklearn.linear_model import LinearRegression\nclf = LinearRegression()\nclf.fit(X, y)\n\ninputXs = np.array([[1, 100, 3], [1, 200, 3]])\nsklearnPrediction = clf.predict(inputXs)\n\ngradientDescentPrediction = inputXs.dot(theta)\n\nprint(sklearnPrediction, gradientDescentPrediction)\n\nprint(\"Looks Good :D\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/noaa-gfdl/cmip6/models/gfdl-cm4/ocnbgchem.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Ocnbgchem\nMIP Era: CMIP6\nInstitute: NOAA-GFDL\nSource ID: GFDL-CM4\nTopic: Ocnbgchem\nSub-Topics: Tracers. \nProperties: 65 (37 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-20 15:02:34\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'noaa-gfdl', 'gfdl-cm4', 'ocnbgchem')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport\n3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks\n4. Key Properties --&gt; Transport Scheme\n5. Key Properties --&gt; Boundary Forcing\n6. Key Properties --&gt; Gas Exchange\n7. Key Properties --&gt; Carbon Chemistry\n8. Tracers\n9. Tracers --&gt; Ecosystem\n10. Tracers --&gt; Ecosystem --&gt; Phytoplankton\n11. Tracers --&gt; Ecosystem --&gt; Zooplankton\n12. Tracers --&gt; Disolved Organic Matter\n13. Tracers --&gt; Particules\n14. Tracers --&gt; Dic Alkalinity \n1. Key Properties\nOcean Biogeochemistry key properties\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of ocean biogeochemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of ocean biogeochemistry model code (PISCES 2.0,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Model Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of ocean biogeochemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Geochemical\" \n# \"NPZD\" \n# \"PFT\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Elemental Stoichiometry\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe elemental stoichiometry (fixed, variable, mix of the two)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Fixed\" \n# \"Variable\" \n# \"Mix of both\" \n# TODO - please enter value(s)\n", "1.5. Elemental Stoichiometry Details\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe which elements have fixed/variable stoichiometry", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.6. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of all prognostic tracer variables in the ocean biogeochemistry component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.7. Diagnostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of all diagnotic tracer variables in the ocean biogeochemistry component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.8. Damping\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe any tracer damping used (such as artificial correction or relaxation to climatology,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.damping') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport\nTime stepping method for passive tracers transport in ocean biogeochemistry\n2.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime stepping framework for passive tracers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"use ocean model transport time step\" \n# \"use specific time step\" \n# TODO - please enter value(s)\n", "2.2. Timestep If Not From Ocean\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTime step for passive tracers (if different from ocean)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks\nTime stepping framework for biology sources and sinks in ocean biogeochemistry\n3.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime stepping framework for biology sources and sinks", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"use ocean model transport time step\" \n# \"use specific time step\" \n# TODO - please enter value(s)\n", "3.2. Timestep If Not From Ocean\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTime step for biology sources and sinks (if different from ocean)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Transport Scheme\nTransport scheme in ocean biogeochemistry\n4.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of transport scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Offline\" \n# \"Online\" \n# TODO - please enter value(s)\n", "4.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTransport scheme used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Use that of ocean model\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "4.3. Use Different Scheme\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDecribe transport scheme if different than that of ocean model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Boundary Forcing\nProperties of biogeochemistry boundary forcing\n5.1. Atmospheric Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how atmospheric deposition is modeled", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"from file (climatology)\" \n# \"from file (interannual variations)\" \n# \"from Atmospheric Chemistry model\" \n# TODO - please enter value(s)\n", "5.2. River Input\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how river input is modeled", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"from file (climatology)\" \n# \"from file (interannual variations)\" \n# \"from Land Surface model\" \n# TODO - please enter value(s)\n", "5.3. Sediments From Boundary Conditions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList which sediments are speficied from boundary condition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.4. Sediments From Explicit Model\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList which sediments are speficied from explicit sediment model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. 
Key Properties --&gt; Gas Exchange\n*Properties of gas exchange in ocean biogeochemistry *\n6.1. CO2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs CO2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.2. CO2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe CO2 gas exchange", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.3. O2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs O2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.4. O2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe O2 gas exchange", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.5. DMS Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs DMS gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.6. DMS Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify DMS gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.7. N2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs N2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.8. N2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify N2 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.9. N2O Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs N2O gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.10. N2O Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify N2O gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.11. CFC11 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs CFC11 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.12. CFC11 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify CFC11 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.13. CFC12 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs CFC12 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.14. CFC12 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify CFC12 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.15. SF6 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs SF6 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.16. SF6 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify SF6 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.17. 13CO2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs 13CO2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.18. 
13CO2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify 13CO2 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.19. 14CO2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs 14CO2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.20. 14CO2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify 14CO2 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.21. Other Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any other gas exchange", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Key Properties --&gt; Carbon Chemistry\nProperties of carbon chemistry biogeochemistry\n7.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how carbon chemistry is modeled", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other protocol\" \n# TODO - please enter value(s)\n", "7.2. PH Scale\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf NOT OMIP protocol, describe pH scale.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea water\" \n# \"Free\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "7.3. Constants If Not OMIP\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf NOT OMIP protocol, list carbon chemistry constants.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Tracers\nOcean biogeochemistry tracers\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of tracers in ocean biogeochemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Sulfur Cycle Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs sulfur cycle modeled ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8.3. Nutrients Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList nutrient species present in ocean biogeochemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Nitrogen (N)\" \n# \"Phosphorous (P)\" \n# \"Silicium (S)\" \n# \"Iron (Fe)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.4. Nitrous Species If N\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf nitrogen present, list nitrous species.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Nitrates (NO3)\" \n# \"Amonium (NH4)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.5. Nitrous Processes If N\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf nitrogen present, list nitrous processes.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Dentrification\" \n# \"N fixation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9. Tracers --&gt; Ecosystem\nEcosystem properties in ocean biogeochemistry\n9.1. Upper Trophic Levels Definition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDefinition of upper trophic level (e.g. based on size) ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Upper Trophic Levels Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDefine how upper trophic level are treated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Tracers --&gt; Ecosystem --&gt; Phytoplankton\nPhytoplankton properties in ocean biogeochemistry\n10.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of phytoplankton", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Generic\" \n# \"PFT including size based (specify both below)\" \n# \"Size based only (specify below)\" \n# \"PFT only (specify below)\" \n# TODO - please enter value(s)\n", "10.2. Pft\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nPhytoplankton functional types (PFT) (if applicable)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diatoms\" \n# \"Nfixers\" \n# \"Calcifiers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.3. Size Classes\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nPhytoplankton size classes (if applicable)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Microphytoplankton\" \n# \"Nanophytoplankton\" \n# \"Picophytoplankton\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11. Tracers --&gt; Ecosystem --&gt; Zooplankton\nZooplankton properties in ocean biogeochemistry\n11.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of zooplankton", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Generic\" \n# \"Size based (specify below)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.2. Size Classes\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nZooplankton size classes (if applicable)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Microzooplankton\" \n# \"Mesozooplankton\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Tracers --&gt; Disolved Organic Matter\nDisolved organic matter properties in ocean biogeochemistry\n12.1. Bacteria Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there bacteria representation ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.2. Lability\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe treatment of lability in dissolved organic matter", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Labile\" \n# \"Semi-labile\" \n# \"Refractory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13. Tracers --&gt; Particules\nParticulate carbon properties in ocean biogeochemistry\n13.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is particulate carbon represented in ocean biogeochemistry?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diagnostic\" \n# \"Diagnostic (Martin profile)\" \n# \"Diagnostic (Balast)\" \n# \"Prognostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. 
Types If Prognostic\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf prognostic, type(s) of particulate matter taken into account", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"POC\" \n# \"PIC (calcite)\" \n# \"PIC (aragonite\" \n# \"BSi\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. Size If Prognostic\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No size spectrum used\" \n# \"Full size spectrum\" \n# \"Discrete size classes (specify which below)\" \n# TODO - please enter value(s)\n", "13.4. Size If Discrete\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic and discrete size, describe which size classes are used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13.5. Sinking Speed If Prognostic\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, method for calculation of sinking speed of particules", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Function of particule size\" \n# \"Function of particule type (balast)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Tracers --&gt; Dic Alkalinity\nDIC and alkalinity properties in ocean biogeochemistry\n14.1. Carbon Isotopes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich carbon isotopes are modelled (C13, C14)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"C13\" \n# \"C14)\" \n# TODO - please enter value(s)\n", "14.2. Abiotic Carbon\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs abiotic carbon modelled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14.3. Alkalinity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is alkalinity modelled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Prognostic\" \n# \"Diagnostic)\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Bihaqo/t3f
docs/tutorials/tensor_nets.ipynb
mit
[ "Tensor Nets (compressing neural networks)\nOpen this page in an interactive mode via Google Colaboratory.\nIn this notebook we provide an example of how to build a simple Tensor Net (see https://arxiv.org/abs/1509.06569).\nThe main ingredient is the so-called TT-Matrix, a generalization of the Kronecker product matrices, i.e. matrices of the form \n$$A = A_1 \\otimes A_2 \\cdots \\otimes A_n$$\nIn t3f TT-Matrices are represented using the TensorTrain class.", "# Import TF 2.\n%tensorflow_version 2.x\nimport tensorflow as tf\nimport numpy as np\nimport tensorflow.keras.backend as K\n\n# Fix seed so that the results are reproducable.\ntf.random.set_seed(0)\nnp.random.seed(0)\n\ntry:\n import t3f\nexcept ImportError:\n # Install T3F if it's not already installed.\n !git clone https://github.com/Bihaqo/t3f.git\n !cd t3f; pip install .\n import t3f\n\nW = t3f.random_matrix([[4, 7, 4, 7], [5, 5, 5, 5]], tt_rank=2)\n\nprint(W)", "Using TT-Matrices we can compactly represent densely connected layers in neural networks, which allows us to greatly reduce number of parameters. Matrix multiplication can be handled by the t3f.matmul method which allows for multiplying dense (ordinary) matrices and TT-Matrices. Very simple neural network could look as following (for initialization several options such as t3f.glorot_initializer, t3f.he_initializer or t3f.random_matrix are available):", "class Learner:\n def __init__(self):\n initializer = t3f.glorot_initializer([[4, 7, 4, 7], [5, 5, 5, 5]], tt_rank=2)\n self.W1 = t3f.get_variable('W1', initializer=initializer)\n self.W2 = tf.Variable(tf.random.normal([625, 10]))\n self.b2 = tf.Variable(tf.random.normal([10]))\n \n def predict(self, x):\n b1 = tf.Variable(tf.zeros([625]))\n h1 = t3f.matmul(x, W1) + b1\n h1 = tf.nn.relu(h1)\n return tf.matmul(h1, W2) + b2\n\n def loss(self, x, y):\n y_ = tf.one_hot(y, 10)\n logits = self.predict(x)\n return tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits))\n", "For convenience we have implemented a layer analogous to Keras Dense layer but with a TT-Matrix instead of an ordinary matrix. An example of fully trainable net is provided below.", "from tensorflow.keras.datasets import mnist\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense, Activation, Dropout, Flatten\nfrom tensorflow.keras.utils import to_categorical\nfrom tensorflow.keras import optimizers\n\n(x_train, y_train), (x_test, y_test) = mnist.load_data()", "Some preprocessing...", "x_train = x_train / 127.5 - 1.0\nx_test = x_test / 127.5 - 1.0\n\ny_train = to_categorical(y_train, num_classes=10)\ny_test = to_categorical(y_test, num_classes=10)\n\nmodel = Sequential()\nmodel.add(Flatten(input_shape=(28, 28)))\ntt_layer = t3f.nn.KerasDense(input_dims=[7, 4, 7, 4], output_dims=[5, 5, 5, 5],\n tt_rank=4, activation='relu',\n bias_initializer=1e-3)\nmodel.add(tt_layer)\nmodel.add(Dense(10))\nmodel.add(Activation('softmax'))\n\nmodel.summary()", "Note that in the dense layer we only have $1725$ parameters instead of $784 * 625 = 490000$.", "optimizer = optimizers.Adam(lr=1e-2)\nmodel.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])\n\nmodel.fit(x_train, y_train, epochs=3, batch_size=64, validation_data=(x_test, y_test))", "Compression of Dense layers\nLet us now train an ordinary DNN (without TT-Matrices) and show how we can compress it using the TT decomposition. 
(In contrast to directly training a TT-layer from scratch in the example above.)", "model = Sequential()\nmodel.add(Flatten(input_shape=(28, 28)))\nmodel.add(Dense(625, activation='relu'))\nmodel.add(Dense(10))\nmodel.add(Activation('softmax'))\n\nmodel.summary()\n\noptimizer = optimizers.Adam(lr=1e-3)\nmodel.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])\n\nmodel.fit(x_train, y_train, epochs=5, batch_size=64, validation_data=(x_test, y_test))", "Let us convert the matrix used in the Dense layer to the TT-Matrix with tt-ranks equal to 16 (since we trained the network without the low-rank structure assumption we may wish start with high rank values).", "W = model.trainable_weights[0]\nprint(W)\nWtt = t3f.to_tt_matrix(W, shape=[[7, 4, 7, 4], [5, 5, 5, 5]], max_tt_rank=16)\nprint(Wtt)", "We need to evaluate the tt-cores of Wtt. We also need to store other parameters for later (biases and the second dense layer).", "cores = Wtt.tt_cores\nother_params = model.get_weights()[1:]", "Now we can construct a tensor network with the first Dense layer replaced by Wtt\ninitialized using the previously computed cores.", "model = Sequential()\nmodel.add(Flatten(input_shape=(28, 28)))\ntt_layer = t3f.nn.KerasDense(input_dims=[7, 4, 7, 4], output_dims=[5, 5, 5, 5],\n tt_rank=16, activation='relu')\nmodel.add(tt_layer)\nmodel.add(Dense(10))\nmodel.add(Activation('softmax'))\n\noptimizer = optimizers.Adam(lr=1e-3)\nmodel.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])\n\nmodel.set_weights(list(cores) + other_params)\n\nprint(\"new accuracy: \", model.evaluate(x_test, y_test)[1])\n\nmodel.summary()", "We see that even though we now have about 5% of the original number of parameters we still achieve a relatively high accuracy.\nFinetuning the model\nWe can now finetune this tensor network.", "model.fit(x_train, y_train, epochs=2, batch_size=64, validation_data=(x_test, y_test))", "We see that we were able to achieve higher validation accuracy than we had in the plain DNN, while keeping the number of parameters extremely small (21845 vs 496885 parameters in the uncompressed model)." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
huggingface/pytorch-transformers
notebooks/01-training-tokenizers.ipynb
apache-2.0
[ "Tokenization doesn't have to be slow !\nIntroduction\nBefore going deep into any Machine Learning or Deep Learning Natural Language Processing models, every practitioner\nshould find a way to map raw input strings to a representation understandable by a trainable model.\nOne very simple approach would be to split inputs over every space and assign an identifier to each word. This approach\nwould look similar to the code below in python\npython\ns = \"very long corpus...\"\nwords = s.split(\" \") # Split over space\nvocabulary = dict(enumerate(set(words))) # Map storing the word to it's corresponding id\nThis approach might work well if your vocabulary remains small as it would store every word (or token) present in your original\ninput. Moreover, word variations like \"cat\" and \"cats\" would not share the same identifiers even if their meaning is \nquite close.\n\nSubtoken Tokenization\nTo overcome the issues described above, recent works have been done on tokenization, leveraging \"subtoken\" tokenization.\nSubtokens extends the previous splitting strategy to furthermore explode a word into grammatically logicial sub-components learned\nfrom the data.\nTaking our previous example of the words cat and cats, a sub-tokenization of the word cats would be [cat, ##s]. Where the prefix \"##\" indicates a subtoken of the initial input. \nSuch training algorithms might extract sub-tokens such as \"##ing\", \"##ed\" over English corpus.\nAs you might think of, this kind of sub-tokens construction leveraging compositions of \"pieces\" overall reduces the size\nof the vocabulary you have to carry to train a Machine Learning model. On the other side, as one token might be exploded\ninto multiple subtokens, the input of your model might increase and become an issue on model with non-linear complexity over the input sequence's length. \n\nAmong all the tokenization algorithms, we can highlight a few subtokens algorithms used in Transformers-based SoTA models : \n\nByte Pair Encoding (BPE) - Neural Machine Translation of Rare Words with Subword Units (Sennrich et al., 2015)\nWord Piece - Japanese and Korean voice search (Schuster, M., and Nakajima, K., 2015)\nUnigram Language Model - Subword Regularization: Improving Neural Network Translation Models with Multiple Subword Candidates (Kudo, T., 2018)\nSentence Piece - A simple and language independent subword tokenizer and detokenizer for Neural Text Processing (Taku Kudo and John Richardson, 2018)\n\nGoing through all of them is out of the scope of this notebook, so we will just highlight how you can use them.\n@huggingface/tokenizers library\nAlong with the transformers library, we @huggingface provide a blazing fast tokenization library\nable to train, tokenize and decode dozens of Gb/s of text on a common multi-core machine.\nThe library is written in Rust allowing us to take full advantage of multi-core parallel computations in a native and memory-aware way, on-top of which \nwe provide bindings for Python and NodeJS (more bindings may be added in the future). \nWe designed the library so that it provides all the required blocks to create end-to-end tokenizers in an interchangeable way. In that sense, we provide\nthese various components: \n\nNormalizer: Executes all the initial transformations over the initial input string. For example when you need to\nlowercase some text, maybe strip it, or even apply one of the common unicode normalization process, you will add a Normalizer. \nPreTokenizer: In charge of splitting the initial input string. 
That's the component that decides where and how to\npre-segment the original string. The simplest example would be, like we saw before, to simply split on spaces.\nModel: Handles all the sub-token discovery and generation, this part is trainable and really dependent\non your input data.\nPost-Processor: Provides advanced construction features to be compatible with some of the Transformers-based SoTA\nmodels. For instance, for BERT it would wrap the tokenized sentence around [CLS] and [SEP] tokens.\nDecoder: In charge of mapping back a tokenized input to the original string. The decoder is usually chosen according\nto the PreTokenizer we used previously.\nTrainer: Provides training capabilities to each model.\n\nFor each of the components above we provide multiple implementations:\n\nNormalizer: Lowercase, Unicode (NFD, NFKD, NFC, NFKC), Bert, Strip, ...\nPreTokenizer: ByteLevel, WhitespaceSplit, CharDelimiterSplit, Metaspace, ...\nModel: WordLevel, BPE, WordPiece\nPost-Processor: BertProcessor, ...\nDecoder: WordLevel, BPE, WordPiece, ...\n\nAll of these building blocks can be combined to create working tokenization pipelines. \nIn the next section we will go over our first pipeline.\nAlright, now we are ready to implement our first tokenization pipeline through tokenizers. \nFor this, we will train a Byte-Pair Encoding (BPE) tokenizer on a quite small input for the purpose of this notebook.\nWe will work with the file from Peter Norvig.\nThis file contains around 130,000 lines of raw text that will be processed by the library to generate a working tokenizer.", "!pip install tokenizers\n\nBIG_FILE_URL = 'https://raw.githubusercontent.com/dscape/spell/master/test/resources/big.txt'\n\n# Let's download the file and save it somewhere\nfrom requests import get\nwith open('big.txt', 'wb') as big_f:\n response = get(BIG_FILE_URL)\n \n if response.status_code == 200:\n big_f.write(response.content)\n else:\n print(\"Unable to get the file: {}\".format(response.reason))\n", "Now that we have our training data, we need to create the overall pipeline for the tokenizer", "# For the user's convenience `tokenizers` provides some very high-level classes encapsulating\n# the overall pipeline for various well-known tokenization algorithms. \n# Everything described below can be replaced by the ByteLevelBPETokenizer class. \n\nfrom tokenizers import Tokenizer\nfrom tokenizers.decoders import ByteLevel as ByteLevelDecoder\nfrom tokenizers.models import BPE\nfrom tokenizers.normalizers import Lowercase, NFKC, Sequence\nfrom tokenizers.pre_tokenizers import ByteLevel\n\n# First we create an empty Byte-Pair Encoding model (i.e. 
an untrained model)\ntokenizer = Tokenizer(BPE())\n\n# Then we enable lower-casing and unicode-normalization\n# The Sequence normalizer allows us to combine multiple Normalizers that will be\n# executed in order.\ntokenizer.normalizer = Sequence([\n NFKC(),\n Lowercase()\n])\n\n# Our tokenizer also needs a pre-tokenizer responsible for converting the input to a ByteLevel representation.\ntokenizer.pre_tokenizer = ByteLevel()\n\n# And finally, let's plug a decoder so we can recover from a tokenized input to the original one\ntokenizer.decoder = ByteLevelDecoder()", "The overall pipeline is now ready to be trained on the corpus we downloaded earlier in this notebook.", "from tokenizers.trainers import BpeTrainer\n\n# We initialize our trainer, giving it the details about the vocabulary we want to generate\ntrainer = BpeTrainer(vocab_size=25000, show_progress=True, initial_alphabet=ByteLevel.alphabet())\ntokenizer.train(files=[\"big.txt\"], trainer=trainer)\n\nprint(\"Trained vocab size: {}\".format(tokenizer.get_vocab_size()))", "Et voilà! You trained your very first tokenizer from scratch using tokenizers. Of course, this \ncovers only the basics, and you may want to have a look at the add_special_tokens or special_tokens parameters\non the Trainer class, but the overall process should be very similar.\nWe can save the content of the model to reuse it later.", "# You will see the generated files in the output.\ntokenizer.model.save('.')", "Now, let's load the trained model and start using our newly trained tokenizer", "# Let's tokenize a simple input\ntokenizer.model = BPE('vocab.json', 'merges.txt')\nencoding = tokenizer.encode(\"This is a simple input to be tokenized\")\n\nprint(\"Encoded string: {}\".format(encoding.tokens))\n\ndecoded = tokenizer.decode(encoding.ids)\nprint(\"Decoded string: {}\".format(decoded))", "The Encoding structure exposes multiple properties which are useful when working with transformers models\n\nnormalized_str: The input string after normalization (lower-casing, unicode, stripping, etc.)\noriginal_str: The input string as it was provided\ntokens: The generated tokens with their string representation\ninput_ids: The generated tokens with their integer representation\nattention_mask: If your input has been padded by the tokenizer, then this would be a vector of 1 for any non-padded token and 0 for padded ones.\nspecial_token_mask: If your input contains special tokens such as [CLS], [SEP], [MASK], [PAD], then this would be a vector with 1 in places where a special token has been added.\ntype_ids: If your input was made of multiple \"parts\" such as (question, context), then this would be a vector giving, for each token, the segment it belongs to.\noverflowing: If your input has been truncated into multiple subparts because of a length limit (for BERT for example the sequence length is limited to 512), this will contain all the remaining overflowing parts." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
scotthuang1989/Python-3-Module-of-the-Week
text/difflib.ipynb
apache-2.0
[ "import difflib\n\ntext1 = \"\"\"Lorem ipsum dolor sit amet, consectetuer adipiscing\nelit. Integer eu lacus accumsan arcu fermentum euismod. Donec\npulvinar porttitor tellus. Aliquam venenatis. Donec facilisis\npharetra tortor. In nec mauris eget magna consequat\nconvalis. Nam sed sem vitae odio pellentesque interdum. Sed\nconsequat viverra nisl. Suspendisse arcu metus, blandit quis,\nrhoncus ac, pharetra eget, velit. Mauris urna. Morbi nonummy\nmolestie orci. Praesent nisi elit, fringilla ac, suscipit non,\ntristique vel, mauris. Curabitur vel lorem id nisl porta\nadipiscing. Suspendisse eu lectus. In nunc. Duis vulputate\ntristique enim. Donec quis lectus a justo imperdiet tempus.\"\"\"\n\ntext2 = \"\"\"Lorem ipsum dolor sit amet, consectetuer adipiscing\nelit. Integer eu lacus accumsan arcu fermentum euismod. Donec\npulvinar, porttitor tellus. Aliquam venenatis. Donec facilisis\npharetra tortor. In nec mauris eget magna consequat\nconvalis. Nam cras vitae mi vitae odio pellentesque interdum. Sed\nconsequat viverra nisl. Suspendisse arcu metus, blandit quis,\nrhoncus ac, pharetra eget, velit. Mauris urna. Morbi nonummy\nmolestie orci. Praesent nisi elit, fringilla ac, suscipit non,\ntristique vel, mauris. Curabitur vel lorem id nisl porta\nadipiscing. Duis vulputate tristique enim. Donec quis lectus a\njusto imperdiet tempus. Suspendisse eu lectus. In nunc.\"\"\"\n\ntext1_lines = text1.splitlines()\n\ntext2_lines = text2.splitlines()", "Comparing Bodies of Text\nThe Differ class works on sequences of text lines and produces human-readable deltas, or change instructions, including differences within individual lines. The default output produced by Differ is similar to the diff command-line tool under Unix. It includes the original input values from both lists, including common values, and markup data to indicate which changes were made.\n\nLines prefixed with - were in the first sequence, but not the second.\nines prefixed with + were in the second sequence, but not the first.\nIf a line has an incremental difference between versions, an extra line prefixed with ? is used to highlight the change within the new version.\nIf a line has not changed, it is printed with an extra blank space on the left column so that it is aligned with the other output that may have differences.", "d = difflib.Differ()\n\ndiff = d.compare(text1_lines,text2_lines)\n\nprint('\\n'.join(diff))", "Other Output Formats\nWhile the Differ class shows all of the input lines, a unified diff includes only the modified lines and a bit of context. 
The unified_diff() function produces this sort of output.", "diff = difflib.unified_diff(\n text1_lines,\n text2_lines,\n lineterm='',\n)\nprint('\\n'.join(list(diff)))", "SequenceMathcer", "from difflib import SequenceMatcher\n\n\ndef show_results(match):\n print(' a = {}'.format(match.a))\n print(' b = {}'.format(match.b))\n print(' size = {}'.format(match.size))\n i, j, k = match\n print(' A[a:a+size] = {!r}'.format(A[i:i + k]))\n print(' B[b:b+size] = {!r}'.format(B[j:j + k]))\n\n\nA = \" abcd\"\nB = \"abcd abcd\"\n\nprint('A = {!r}'.format(A))\nprint('B = {!r}'.format(B))\n\nprint('\\nWithout junk detection:')\ns1 = SequenceMatcher(None, A, B)\nmatch1 = s1.find_longest_match(0, len(A), 0, len(B))\nshow_results(match1)\n\nprint('\\nTreat spaces as junk:')\ns2 = SequenceMatcher(lambda x: x == \" \", A, B)\nmatch2 = s2.find_longest_match(0, len(A), 0, len(B))\nshow_results(match2)\n\nmatch1", "Modify first text to second", "modify_instruction = s2.get_opcodes()\n\nmodify_instruction\n\ns1 = [1, 2, 3, 5, 6, 4]\ns2 = [2, 3, 5, 4, 6, 1]\n\nprint('Initial data:')\nprint('s1 =', s1)\nprint('s2 =', s2)\nprint('s1 == s2:', s1 == s2)\nprint()\n\nmatcher = difflib.SequenceMatcher(None, s1, s2)\nfor tag, i1, i2, j1, j2 in reversed(matcher.get_opcodes()):\n\n if tag == 'delete':\n print('Remove {} from positions [{}:{}]'.format(\n s1[i1:i2], i1, i2))\n print(' before =', s1)\n del s1[i1:i2]\n\n elif tag == 'equal':\n print('s1[{}:{}] and s2[{}:{}] are the same'.format(\n i1, i2, j1, j2))\n\n elif tag == 'insert':\n print('Insert {} from s2[{}:{}] into s1 at {}'.format(\n s2[j1:j2], j1, j2, i1))\n print(' before =', s1)\n s1[i1:i2] = s2[j1:j2]\n\n elif tag == 'replace':\n print(('Replace {} from s1[{}:{}] '\n 'with {} from s2[{}:{}]').format(\n s1[i1:i2], i1, i2, s2[j1:j2], j1, j2))\n print(' before =', s1)\n s1[i1:i2] = s2[j1:j2]\n\n print(' after =', s1, '\\n')\n\nprint('s1 == s2:', s1 == s2)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
VectorBlox/PYNQ
Pynq-Z1/notebooks/examples/audio_playback.ipynb
bsd-3-clause
[ "Welcome to Pynq Audio\nThis notebook shows the basic recording and playback features of the Pynq-Z1.\nIt uses the audio jack to play back recordings from the built-in microphone, as well as a pre-recorded audio sample. Moreover, visualization with matplotlib and playback with IPython.Audio are shown.\nCreate new audio object", "from pynq import Overlay\nfrom pynq.drivers import Audio\n\nOverlay('base.bit').download()\npAudio = Audio()", "Record and play\nRecord a 3-second sample and save it into a file.", "pAudio.record(3)\npAudio.save(\"Recording_1.pdm\")", "Load and play\nLoad a sample and play the loaded sample.", "pAudio.load(\"/home/xilinx/pynq/drivers/tests/pynq_welcome.pdm\")\npAudio.play()", "Play in notebook\nUsers can also play the audio directly in notebook. To do this, the file format has to be converted from Pulse Density Modulation (PDM) to Pulse Code Modulation (PCM). \nFor more information, please refer to: https://en.wikipedia.org/wiki/Pulse-density_modulation.\nStep 1: Preprocessing\nIn this step, we first convert the 32-bit integer buffer to 16-bit. Then we divide 16-bit words (16 1-bit samples each) into 8-bit words with 1-bit sample each.", "import time\nimport numpy as np\n\nstart = time.time()\naf_uint8 = np.unpackbits(pAudio.buffer.astype(np.int16)\n .byteswap(True).view(np.uint8))\nend = time.time()\n\nprint(\"Time to convert {:,d} PDM samples: {:0.2f} seconds\"\n .format(np.size(pAudio.buffer)*16, end-start))\nprint(\"Size of audio data: {:,d} Bytes\"\n .format(af_uint8.nbytes))", "Step 2: Converting PDM to PCM\nWe now convert PDM to PCM by decimation. The sample rate is reduced from 3MHz to 32kHz.\nWe will remove the first and last 10 samples in case there are outliers introduced by decimation. We will also remove the DC offset from the waveform.", "import time\nfrom scipy import signal\n\nstart = time.time()\naf_dec = signal.decimate(af_uint8,8,zero_phase=True)\naf_dec = signal.decimate(af_dec,6,zero_phase=True)\naf_dec = signal.decimate(af_dec,2,zero_phase=True)\naf_dec = (af_dec[10:-10]-af_dec[10:-10].mean())\nend = time.time()\nprint(\"Time to convert {:,d} Bytes: {:0.2f} seconds\"\n .format(af_uint8.nbytes, end-start))\nprint(\"Size of audio data: {:,d} Bytes\"\n .format(af_dec.nbytes))\ndel af_uint8", "Step 3: Audio Playback in Web Browser", "from IPython.display import Audio as IPAudio\nIPAudio(af_dec, rate=32000)", "Plotting PCM data\nUsers can display the audio data in notebook:\n\nPlot the audio signal's amplitude over time.\nPlot the spectrogram of the audio signal.\n\nAmplitude over time", "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nplt.figure(num=None, figsize=(15, 5))\ntime_axis = np.arange(0,((len(af_dec))/32000),1/32000)\nplt.title('Audio Signal in Time Domain')\nplt.xlabel('Time in s')\nplt.ylabel('Amplitude')\nplt.plot(time_axis, af_dec)\nplt.show()", "Frequency spectrum", "from scipy.fftpack import fft\n\nyf = fft(af_dec)\nyf_2 = yf[1:len(yf)//2]\nxf = np.linspace(0.0, 32000//2, len(yf_2))\n\nplt.figure(num=None, figsize=(15, 5))\nplt.plot(xf, abs(yf_2))\nplt.title('Magnitudes of Audio Signal Frequency Components')\nplt.xlabel('Frequency in Hz')\nplt.ylabel('Magnitude')\nplt.show()", "Frequency spectrum over time\nUse the classic plot style for better display.", "import matplotlib\n\nnp.seterr(divide='ignore',invalid='ignore')\nmatplotlib.style.use(\"classic\")\nplt.figure(num=None, figsize=(15, 4))\nplt.title('Audio Signal Spectogram')\nplt.xlabel('Time in s')\nplt.ylabel('Frequency in Hz')\n_ = 
plt.specgram(af_dec, Fs=32000)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ernestyalumni/CompPhys
moreCUDA/samples02/sinsin2dtex.ipynb
apache-2.0
[ "Reading the output from sinsin2dtex.cu\nWe go from a flattened std::vector (C++, representing 2-dimensional data) to a .csv file.", "%matplotlib inline\n\nimport matplotlib.pyplot as plt\nimport csv\n\nld = [ 1., 1.]\nWIDTH = 640\nHEIGHT = 640\nprint WIDTH*HEIGHT\nhd = [ld[0]/(float(WIDTH)), ld[1]/(float(HEIGHT)) ]\n\nwith open('sinsin2dtex_result.csv','r') as csvfile_result:\n plot_results = csv.reader(csvfile_result, delimiter=',')\n result_list = list( list(rec) for rec in plot_results ) \n \nwith open('sinsin2dtex_ogref.csv','r') as csvfile_ogref:\n plot_ogref = csv.reader(csvfile_ogref, delimiter=',')\n ogref_list = list( list(rec) for rec in plot_ogref ) \n\ncsvfile_result.close()\ncsvfile_ogref.close()\n\nresult_list = [[float(ele) for ele in row] for row in result_list]\nogref_list = [[float(ele) for ele in row] for row in ogref_list]\n\n# sanity check\nprint len(result_list); print len(result_list[0]); \nprint result_list[ len(result_list)/4][ len(result_list[0])/4 : len(result_list[0])/4+22];\n\nprint len(ogref_list); print len(ogref_list[0]); \nprint ogref_list[ len(ogref_list)/4][ len(ogref_list[0])/4 : len(ogref_list[0])/4+22]", "Quick aside on Wireframe plots in matplotlib\ncf. mplot3d tutorial, matplotlib", "from mpl_toolkits.mplot3d import axes3d\nimport numpy as np\n\nfig = plt.figure()\n\nax = fig.add_subplot(111,projection='3d')\nX, Y, Z = axes3d.get_test_data(0.05)\n\nax.plot_wireframe(X, Y, Z, rstride=10, cstride=10)\n\nplt.show()\n\nfig\n\nprint type(X), type(Y), type(Z); print len(X), len(Y), len(Z); print X.shape, Y.shape, Z.shape;\n\nX\n\nY\n\nZ\n\nX[0][0:10]", "EY : At least what I could surmise or infer the 2-dim. (???) python arrays for X,Y,Z of the wireframe plot work like this: imagine a 2-dimensional grid; on top of each grid point is the x-coordinate, then the y-coordinate, and then the z-coordinate. Thus you have 2-dimensional arrays for each. \nMaking X,Y,Z axes for mplot3d from the .csv files", "X_sinsin = np.array( [[i*hd[0] for i in range(WIDTH)] for j in range(HEIGHT)] )\nY_sinsin = np.array( [[j*hd[1] for i in range(WIDTH)] for j in range(HEIGHT)] )\nZ_sinsinresult = np.array( [[result_list[i][j] for i in range(WIDTH)] for j in range(HEIGHT)] )\nZ_sinsinogref = np.array( [[ogref_list[i][j] for i in range(WIDTH)] for j in range(HEIGHT)] )\n\n\nfig02 = plt.figure()\nax02 = fig02.add_subplot(111,projection='3d')\nax02.plot_wireframe(X_sinsin, Y_sinsin, Z_sinsinresult )\n\nplt.show()\n\nfig02\n\nfig03 = plt.figure()\nax03 = fig03.add_subplot(111,projection='3d')\nax03.plot_wireframe(X_sinsin, Y_sinsin, Z_sinsinogref )\n\nplt.show()\n\nfig03" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
grokkaine/biopycourse
day1/visualization.ipynb
cc0-1.0
[ "Visualization\n\nMatplotlib\nSeaborn\nggplot\nplotly\nBokeh + ipywidgets\nInline network displays\nGUI programming example with wxPython\nipywidgets + networkx\n\nTODO:\n- Vispy\n$ conda install ipywidgets wxpython networkx seaborn matplotlib\n$ jupyter nbextension enable --py widgetsnbextension --sys-prefix\n$ conda install -c bokeh bokeh\nStandard plots with Matplotlib: line, scatter, chart, contour, heatmap\nControl every detail of your graphics programmatically.\nHave a look at more examples:\nhttp://matplotlib.org/gallery.html", "#ipython magic command\n%matplotlib inline\n\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nmatplotlib.rcParams.update({'font.size': 18, 'font.family': 'arial'})\n\nx = np.linspace(0.1, 10, 100)\ny1 = np.sin(x)\ny2 = np.sin(1/x)\n#y2 = np.exp(x)\n\nfig, ax = plt.subplots(figsize=(13,4))\n\n#axes[0].plot(x, y1, x, y2)\nax.plot(x, y1, label=\"y = sin(x)\")\nax.plot(x, y2, label=r\"$sin(\\frac{1}{x})$\", color=\"#05cd90\", lw=2, ls='dotted', marker='o', markersize=4)\nax.set_xlabel('time')\nax.set_ylabel('harmonics')\nax.set_ylim([- 1.5, 1.5])\n#ax.set_yscale(\"log\")\nax.set_title(\"Line plots\")\nax.legend(loc=2)\nyticks = [0, 1]\nplt.annotate('wow!', xy=(8, 1), xytext=(7, -1), arrowprops=dict(facecolor='black', shrink=0.05))\nax.set_yticks(yticks)\nax.set_yticklabels([r\"rest $\\alpha$\", r\"disturbed $\\delta$\"], fontsize=18)\n\n#fig.savefig(\"filename.png\")\n#fig.savefig(\"filename.png\", dpi=100)\nplt.show()\n\n%matplotlib inline\n\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nfig, axes = plt.subplots(2, 2, figsize=(10,8))\nmatplotlib.rcParams.update({'font.size': 10, 'font.family': 'arial'})\nx = 10*np.random.random(100)\ny = 10*np.random.random(100)\ndef f(x,y): return np.sin(x)**2+np.sin(y)**2\n\naxes[0][0].scatter(x, y, c=f(x,y), cmap=plt.cm.Blues)\naxes[0][0].set_title(\"scatter plot\")\n\nx1 = np.linspace(-10,10,100)\ny1 = np.zeros(100)+5.0\naxes[0][1].bar(x1, f(x1,y1), align=\"center\", width=0.5, alpha=0.5)\naxes[0][1].set_title(\"bar plot\")\n\n\nx = np.linspace(-10,10,100)\ny = np.linspace(-10,10,100)\n\nX,Y = np.meshgrid(x, y)\nZ = f(X,Y)\naxes[1][0].pcolor(x, y, Z, cmap=plt.cm.Blues, vmin=np.abs(Z).min(), vmax=np.abs(Z).max())\n#axes[1][1].pcolor(x, y, Z, cmap=plt.cm.RdBu, vmin=np.abs(Z).min(), vmax=np.abs(Z).max())\naxes[1][0].set_title(\"colour map (heatmap)\")\n\naxes[1][1].contour(Z.T, cmap=plt.cm.Blues, vmin=np.abs(Z).min(), vmax=np.abs(Z).max(), extent=[-10, 10, -10, 10])\naxes[1][1].set_title(\"contour plot\")\n", "Seaborn\nMatplotlib was the first big library for visualization in Python, but after its creator tragically died more and more users migrated to Seaborn. Seaborn offers a much easier entry point to the detriment of high customization. But, it looks great! Here is a violin plot example. Check other examples in the galery.", "import seaborn as sns\nsns.set(style=\"whitegrid\", palette=\"pastel\", color_codes=True)\n\n# Load the example tips dataset\ntips = sns.load_dataset(\"tips\")\n\n# Draw a nested violinplot and split the violins for easier comparison\nsns.violinplot(x=\"day\", y=\"total_bill\", hue=\"sex\", data=tips, split=True,\n inner=\"quart\", palette={\"Male\": \"b\", \"Female\": \"y\"})\nsns.despine(left=True)", "Web publishing with plotly and Bokeh\nBokeh is using D3, one of the most complex and performant web visualization libraries. 
This makes it easy to add web interaction, and also makes very nice publication quality graphics.\nconda install bokeh\nconda install -n base -c conda-forge jupyterlab_widgets\nconda install -n biopy37 -c conda-forge ipywidgets", "from ipywidgets import interact\nimport numpy as np\n\nfrom bokeh.io import push_notebook, show, output_notebook\nfrom bokeh.plotting import figure\noutput_notebook()\n\nx = np.linspace(0, 2*np.pi, 2000)\ny = np.sin(x)\n\np = figure(title=\"simple line example\", plot_height=300, plot_width=600, y_range=(-5,5))\nr = p.line(x, y, color=\"#2222aa\", line_width=3)\n\ndef update(f, w=1, A=1, phi=0):\n if f == \"sin\": func = np.sin\n elif f == \"cos\": func = np.cos\n elif f == \"tan\": func = np.tan\n r.data_source.data['y'] = A * func(w * x + phi)\n push_notebook()\n\nshow(p, notebook_handle=True)", "Using Jupyter's ipywidgets\nconda install -c conda-forge ipywidgets", "from IPython.display import display\nfrom ipywidgets import *\ninteract(update, f=[\"sin\", \"cos\", \"tan\"], w=(0,100), A=(1,5), phi=(0, 20, 0.1))\n\nfrom IPython.display import display\nfrom ipywidgets import *\nw = IntSlider()\ndisplay(w)\n\ndef f(x):\n return x\ninteract(f, x=10);", "Network layout and display\n\nweird decorator error, do:\nconda install -c conda-forge decorator\nTODO: not working with the latest matplotlib", "%matplotlib inline\nimport networkx as nx\n\nnet = nx.barabasi_albert_graph(100, 1)\nnx.write_gml(net,\"mynetwork.gml\")\n\nimport matplotlib.pyplot as plt\nnx.draw(net)\n#nx.draw(net,pos=nx.spring_layout(net, scale = 5000, iterations = 10))\n\n%matplotlib inline\n\nfrom ipywidgets import interact\nimport matplotlib.pyplot as plt\nimport networkx as nx\n# wrap a few graph generation functions so they have the same signature\n\ndef random_lobster(n, m, k, p):\n return nx.random_lobster(n, p, p / m)\n\ndef powerlaw_cluster(n, m, k, p):\n return nx.powerlaw_cluster_graph(n, m, p)\n\ndef erdos_renyi(n, m, k, p):\n return nx.erdos_renyi_graph(n, p)\n\ndef newman_watts_strogatz(n, m, k, p):\n return nx.newman_watts_strogatz_graph(n, k, p)\n\ndef plot_random_graph(n, m, k, p, generator):\n g = generator(n, m, k, p)\n nx.draw(g)\n plt.show()\n\ninteract(plot_random_graph, n=(2,30), m=(1,10), k=(1,10), p=(0.0, 1.0, 0.001),\n generator={'lobster': random_lobster,\n 'power law': powerlaw_cluster,\n 'Newman-Watts-Strogatz': newman_watts_strogatz,\n u'Erdős-Rényi': erdos_renyi,\n });", "Bokeh + networkx example:", "import networkx as nx\n\nfrom bokeh.io import output_file, show\nfrom bokeh.models import (BoxZoomTool, Circle, HoverTool,\n MultiLine, Plot, Range1d, ResetTool,)\nfrom bokeh.palettes import Spectral4\nfrom bokeh.plotting import from_networkx\n\n# Prepare Data\nG = nx.karate_club_graph()\n\nSAME_CLUB_COLOR, DIFFERENT_CLUB_COLOR = \"black\", \"red\"\nedge_attrs = {}\n\nfor start_node, end_node, _ in G.edges(data=True):\n edge_color = SAME_CLUB_COLOR if G.nodes[start_node][\"club\"] == G.nodes[end_node][\"club\"] else DIFFERENT_CLUB_COLOR\n edge_attrs[(start_node, end_node)] = edge_color\n\nnx.set_edge_attributes(G, edge_attrs, \"edge_color\")\n\n# Show with Bokeh\nplot = Plot(plot_width=400, plot_height=400,\n x_range=Range1d(-1.1, 1.1), y_range=Range1d(-1.1, 1.1))\nplot.title.text = \"Graph Interaction Demonstration\"\n\nnode_hover_tool = HoverTool(tooltips=[(\"index\", \"@index\"), (\"club\", \"@club\")])\nplot.add_tools(node_hover_tool, BoxZoomTool(), ResetTool())\n\ngraph_renderer = from_networkx(G, nx.spring_layout, scale=1, center=(0, 
0))\n\ngraph_renderer.node_renderer.glyph = Circle(size=15, fill_color=Spectral4[0])\ngraph_renderer.edge_renderer.glyph = MultiLine(line_color=\"edge_color\", line_alpha=0.8, line_width=1)\nplot.renderers.append(graph_renderer)\n\n#output_file(\"interactive_graphs.html\")\noutput_notebook()\nshow(plot)", "Example of network interactivity in Jupyter:\n- https://github.com/cytoscape/cytoscape-jupyter-widget\n- https://ipycytoscape.readthedocs.io/en/latest/\nipywidgets", "from ipywidgets import widgets \n\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndef plot(amplitude, color):\n fig, ax = plt.subplots(figsize=(4, 3),\n subplot_kw={'axisbelow':True})\n ax.grid(color='w', linewidth=2, linestyle='solid')\n x = np.linspace(0, 10, 1000)\n ax.plot(x, amplitude * np.sin(x), color=color,\n lw=5, alpha=0.4)\n ax.set_xlim(0, 10)\n ax.set_ylim(-1.1, 1.1)\n return fig\n\nfrom ipywidgets import interact, FloatSlider, RadioButtons\n\ninteract(plot,\n amplitude=FloatSlider(min=0.1, max=1.0, step=0.1),\n color=RadioButtons(options=['blue', 'green', 'red']))\n\nfrom ipywidgets import *\nIntSlider()\n\nwidgets.Select(\n description='OS:',\n options=['Linux', 'Windows', 'OSX'],\n)\n\nimport ipywidgets as widgets\nfrom IPython.display import display\nname = widgets.Text(description='Name:', padding=4)\n#name.layout.padding = 4\ncolor = widgets.Dropdown(description='Color:', options=['red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet'])\n#color.layout.padding = 4\npage1 = widgets.Box(children=[name, color])\n#page1.layout.padding = 4\n\nage = widgets.IntSlider(description='Age:', min=0, max=120, value=50)\n#age.layout.padding = 4\ngender = widgets.RadioButtons(description='Gender:', options=['male', 'female'])\n#gender.layout.padding = 4\npage2 = widgets.Box(children=[age, gender])\n#page2.layout.padding = 4\n\ntabs = widgets.Tab(children=[page1, page2])\ndisplay(tabs)\n\ntabs.set_title(0, 'Name')\ntabs.set_title(1, 'Details')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/thu/cmip6/models/ciesm/atmos.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Atmos\nMIP Era: CMIP6\nInstitute: THU\nSource ID: CIESM\nTopic: Atmos\nSub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos. \nProperties: 156 (127 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:39\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'thu', 'ciesm', 'atmos')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties --&gt; Overview\n2. Key Properties --&gt; Resolution\n3. Key Properties --&gt; Timestepping\n4. Key Properties --&gt; Orography\n5. Grid --&gt; Discretisation\n6. Grid --&gt; Discretisation --&gt; Horizontal\n7. Grid --&gt; Discretisation --&gt; Vertical\n8. Dynamical Core\n9. Dynamical Core --&gt; Top Boundary\n10. Dynamical Core --&gt; Lateral Boundary\n11. Dynamical Core --&gt; Diffusion Horizontal\n12. Dynamical Core --&gt; Advection Tracers\n13. Dynamical Core --&gt; Advection Momentum\n14. Radiation\n15. Radiation --&gt; Shortwave Radiation\n16. Radiation --&gt; Shortwave GHG\n17. Radiation --&gt; Shortwave Cloud Ice\n18. Radiation --&gt; Shortwave Cloud Liquid\n19. Radiation --&gt; Shortwave Cloud Inhomogeneity\n20. Radiation --&gt; Shortwave Aerosols\n21. Radiation --&gt; Shortwave Gases\n22. Radiation --&gt; Longwave Radiation\n23. Radiation --&gt; Longwave GHG\n24. Radiation --&gt; Longwave Cloud Ice\n25. Radiation --&gt; Longwave Cloud Liquid\n26. Radiation --&gt; Longwave Cloud Inhomogeneity\n27. Radiation --&gt; Longwave Aerosols\n28. Radiation --&gt; Longwave Gases\n29. Turbulence Convection\n30. Turbulence Convection --&gt; Boundary Layer Turbulence\n31. Turbulence Convection --&gt; Deep Convection\n32. Turbulence Convection --&gt; Shallow Convection\n33. Microphysics Precipitation\n34. Microphysics Precipitation --&gt; Large Scale Precipitation\n35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics\n36. Cloud Scheme\n37. Cloud Scheme --&gt; Optical Cloud Properties\n38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution\n39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution\n40. Observation Simulation\n41. Observation Simulation --&gt; Isscp Attributes\n42. Observation Simulation --&gt; Cosp Attributes\n43. Observation Simulation --&gt; Radar Inputs\n44. Observation Simulation --&gt; Lidar Inputs\n45. Gravity Waves\n46. Gravity Waves --&gt; Orographic Gravity Waves\n47. Gravity Waves --&gt; Non Orographic Gravity Waves\n48. Solar\n49. Solar --&gt; Solar Pathways\n50. Solar --&gt; Solar Constant\n51. Solar --&gt; Orbital Parameters\n52. Solar --&gt; Insolation Ozone\n53. Volcanos\n54. Volcanos --&gt; Volcanoes Treatment \n1. Key Properties --&gt; Overview\nTop level key properties\n1.1. 
Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Model Family\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of atmospheric model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_family') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"AGCM\" \n# \"ARCM\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nBasic approximations made in the atmosphere.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"primitive equations\" \n# \"non-hydrostatic\" \n# \"anelastic\" \n# \"Boussinesq\" \n# \"hydrostatic\" \n# \"quasi-hydrostatic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Resolution\nCharacteristics of the model resolution\n2.1. Horizontal Resolution Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Canonical Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Range Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRange of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.4. Number Of Vertical Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of vertical levels resolved on the computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "2.5. 
High Top\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.high_top') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Timestepping\nCharacteristics of the atmosphere model time stepping\n3.1. Timestep Dynamics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep for the dynamics, e.g. 30 min.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.2. Timestep Shortwave Radiative Transfer\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for the shortwave radiative transfer, e.g. 1.5 hours.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.3. Timestep Longwave Radiative Transfer\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for the longwave radiative transfer, e.g. 3 hours.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Orography\nCharacteristics of the model orography\n4.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime adaptation of the orography.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"modified\" \n# TODO - please enter value(s)\n", "4.2. Changes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nIf the orography type is modified describe the time adaptation changes.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.changes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"related to ice sheets\" \n# \"related to tectonics\" \n# \"modified mean\" \n# \"modified variance if taken into account in model (cf gravity waves)\" \n# TODO - please enter value(s)\n", "5. Grid --&gt; Discretisation\nAtmosphere grid discretisation\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of grid discretisation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Grid --&gt; Discretisation --&gt; Horizontal\nAtmosphere discretisation in the horizontal\n6.1. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation type", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spectral\" \n# \"fixed grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.2. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"finite elements\" \n# \"finite volumes\" \n# \"finite difference\" \n# \"centered finite difference\" \n# TODO - please enter value(s)\n", "6.3. Scheme Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation function order", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"second\" \n# \"third\" \n# \"fourth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.4. Horizontal Pole\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nHorizontal discretisation pole singularity treatment", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"filter\" \n# \"pole rotation\" \n# \"artificial island\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.5. Grid Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal grid type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gaussian\" \n# \"Latitude-Longitude\" \n# \"Cubed-Sphere\" \n# \"Icosahedral\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "7. Grid --&gt; Discretisation --&gt; Vertical\nAtmosphere discretisation in the vertical\n7.1. Coordinate Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nType of vertical coordinate system", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"isobaric\" \n# \"sigma\" \n# \"hybrid sigma-pressure\" \n# \"hybrid pressure\" \n# \"vertically lagrangian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8. Dynamical Core\nCharacteristics of the dynamical core\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of atmosphere dynamical core", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the dynamical core of the model.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.dynamical_core.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.3. Timestepping Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestepping framework type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.timestepping_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Adams-Bashforth\" \n# \"explicit\" \n# \"implicit\" \n# \"semi-implicit\" \n# \"leap frog\" \n# \"multi-step\" \n# \"Runge Kutta fifth order\" \n# \"Runge Kutta second order\" \n# \"Runge Kutta third order\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.4. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of the model prognostic variables", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface pressure\" \n# \"wind components\" \n# \"divergence/curl\" \n# \"temperature\" \n# \"potential temperature\" \n# \"total water\" \n# \"water vapour\" \n# \"water liquid\" \n# \"water ice\" \n# \"total water moments\" \n# \"clouds\" \n# \"radiation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9. Dynamical Core --&gt; Top Boundary\nType of boundary layer at the top of the model\n9.1. Top Boundary Condition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop boundary condition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.2. Top Heat\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop boundary heat treatment", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.3. Top Wind\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop boundary wind treatment", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Dynamical Core --&gt; Lateral Boundary\nType of lateral boundary condition (if the model is a regional model)\n10.1. Condition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nType of lateral boundary condition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11. Dynamical Core --&gt; Diffusion Horizontal\nHorizontal diffusion scheme\n11.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nHorizontal diffusion scheme name", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.2. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal diffusion scheme method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"iterated Laplacian\" \n# \"bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Dynamical Core --&gt; Advection Tracers\nTracer advection scheme\n12.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTracer advection scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heun\" \n# \"Roe and VanLeer\" \n# \"Roe and Superbee\" \n# \"Prather\" \n# \"UTOPIA\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.2. Scheme Characteristics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTracer advection scheme characteristics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Eulerian\" \n# \"modified Euler\" \n# \"Lagrangian\" \n# \"semi-Lagrangian\" \n# \"cubic semi-Lagrangian\" \n# \"quintic semi-Lagrangian\" \n# \"mass-conserving\" \n# \"finite volume\" \n# \"flux-corrected\" \n# \"linear\" \n# \"quadratic\" \n# \"quartic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.3. Conserved Quantities\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTracer advection scheme conserved quantities", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"dry mass\" \n# \"tracer mass\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.4. Conservation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTracer advection scheme conservation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Priestley algorithm\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13. Dynamical Core --&gt; Advection Momentum\nMomentum advection scheme\n13.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nMomentum advection schemes name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"VanLeer\" \n# \"Janjic\" \n# \"SUPG (Streamline Upwind Petrov-Galerkin)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. 
Scheme Characteristics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMomentum advection scheme characteristics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"2nd order\" \n# \"4th order\" \n# \"cell-centred\" \n# \"staggered grid\" \n# \"semi-staggered grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. Scheme Staggering Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMomentum advection scheme staggering type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Arakawa B-grid\" \n# \"Arakawa C-grid\" \n# \"Arakawa D-grid\" \n# \"Arakawa E-grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.4. Conserved Quantities\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMomentum advection scheme conserved quantities", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Angular momentum\" \n# \"Horizontal momentum\" \n# \"Enstrophy\" \n# \"Mass\" \n# \"Total energy\" \n# \"Vorticity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.5. Conservation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMomentum advection scheme conservation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Radiation\nCharacteristics of the atmosphere radiation process\n14.1. Aerosols\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nAerosols whose radiative effect is taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.aerosols') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sulphate\" \n# \"nitrate\" \n# \"sea salt\" \n# \"dust\" \n# \"ice\" \n# \"organic\" \n# \"BC (black carbon / soot)\" \n# \"SOA (secondary organic aerosols)\" \n# \"POM (particulate organic matter)\" \n# \"polar stratospheric ice\" \n# \"NAT (nitric acid trihydrate)\" \n# \"NAD (nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particle)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15. Radiation --&gt; Shortwave Radiation\nProperties of the shortwave radiation scheme\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of shortwave radiation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. 
Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Spectral Integration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nShortwave radiation scheme spectral integration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.4. Transport Calculation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nShortwave radiation transport calculation methods", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.5. Spectral Intervals\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nShortwave radiation scheme number of spectral intervals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "16. Radiation --&gt; Shortwave GHG\nRepresentation of greenhouse gases in the shortwave radiation scheme\n16.1. Greenhouse Gas Complexity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nComplexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.2. ODS\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOzone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.3. 
Other Flourinated Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOther flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17. Radiation --&gt; Shortwave Cloud Ice\nShortwave radiative properties of ice crystals in clouds\n17.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud ice crystals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud ice crystals in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18. Radiation --&gt; Shortwave Cloud Liquid\nShortwave radiative properties of liquid droplets in clouds\n18.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud liquid droplets", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.2. 
Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19. Radiation --&gt; Shortwave Cloud Inhomogeneity\nCloud inhomogeneity in the shortwave radiation scheme\n19.1. Cloud Inhomogeneity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20. Radiation --&gt; Shortwave Aerosols\nShortwave radiative properties of aerosols\n20.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with aerosols", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of aerosols in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to aerosols in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "21. Radiation --&gt; Shortwave Gases\nShortwave radiative properties of gases\n21.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with gases", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22. Radiation --&gt; Longwave Radiation\nProperties of the longwave radiation scheme\n22.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of longwave radiation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.2. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the longwave radiation scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.3. Spectral Integration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLongwave radiation scheme spectral integration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.4. Transport Calculation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nLongwave radiation transport calculation methods", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.5. Spectral Intervals\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLongwave radiation scheme number of spectral intervals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "23. Radiation --&gt; Longwave GHG\nRepresentation of greenhouse gases in the longwave radiation scheme\n23.1. 
Greenhouse Gas Complexity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nComplexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. ODS\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOzone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.3. Other Flourinated Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOther flourinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24. Radiation --&gt; Longwave Cloud Ice\nLongwave radiative properties of ice crystals in clouds\n24.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with cloud ice crystals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24.2. Physical Reprenstation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud ice crystals in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25. Radiation --&gt; Longwave Cloud Liquid\nLongwave radiative properties of liquid droplets in clouds\n25.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with cloud liquid droplets", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26. Radiation --&gt; Longwave Cloud Inhomogeneity\nCloud inhomogeneity in the longwave radiation scheme\n26.1. Cloud Inhomogeneity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27. Radiation --&gt; Longwave Aerosols\nLongwave radiative properties of aerosols\n27.1. 
General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with aerosols", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of aerosols in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to aerosols in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "28. Radiation --&gt; Longwave Gases\nLongwave radiative properties of gases\n28.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with gases", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "29. Turbulence Convection\nAtmosphere Convective Turbulence and Clouds\n29.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of atmosphere convection and turbulence", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30. Turbulence Convection --&gt; Boundary Layer Turbulence\nProperties of the boundary layer turbulence scheme\n30.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nBoundary layer turbulence scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Mellor-Yamada\" \n# \"Holtslag-Boville\" \n# \"EDMF\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.2. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nBoundary layer turbulence scheme type", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TKE prognostic\" \n# \"TKE diagnostic\" \n# \"TKE coupled with water\" \n# \"vertical profile of Kz\" \n# \"non-local diffusion\" \n# \"Monin-Obukhov similarity\" \n# \"Coastal Buddy Scheme\" \n# \"Coupled with convection\" \n# \"Coupled with gravity waves\" \n# \"Depth capped at cloud base\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.3. Closure Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBoundary layer turbulence scheme closure order", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.4. Counter Gradient\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nUses boundary layer turbulence scheme counter gradient", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "31. Turbulence Convection --&gt; Deep Convection\nProperties of the deep convection scheme\n31.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDeep convection scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "31.2. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDeep convection scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"adjustment\" \n# \"plume ensemble\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.3. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDeep convection scheme method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CAPE\" \n# \"bulk\" \n# \"ensemble\" \n# \"CAPE/WFN based\" \n# \"TKE/CIN based\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.4. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of deep convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vertical momentum transport\" \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"updrafts\" \n# \"downdrafts\" \n# \"radiative effect of anvils\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.5. 
Microphysics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMicrophysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32. Turbulence Convection --&gt; Shallow Convection\nProperties of the shallow convection scheme\n32.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nShallow convection scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.2. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nshallow convection scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"cumulus-capped boundary layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.3. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nshallow convection scheme method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"same as deep (unified)\" \n# \"included in boundary layer turbulence\" \n# \"separate diagnosis\" \n# TODO - please enter value(s)\n", "32.4. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of shallow convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.5. Microphysics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMicrophysics scheme for shallow convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33. Microphysics Precipitation\nLarge Scale Cloud Microphysics and Precipitation\n33.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of large scale cloud microphysics and precipitation", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.microphysics_precipitation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34. Microphysics Precipitation --&gt; Large Scale Precipitation\nProperties of the large scale precipitation scheme\n34.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name of the large scale precipitation parameterisation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34.2. Hydrometeors\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPrecipitating hydrometeors taken into account in the large scale precipitation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"liquid rain\" \n# \"snow\" \n# \"hail\" \n# \"graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics\nProperties of the large scale cloud microphysics scheme\n35.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name of the microphysics parameterisation scheme used for large scale clouds.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "35.2. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nLarge scale cloud microphysics processes", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mixed phase\" \n# \"cloud droplets\" \n# \"cloud ice\" \n# \"ice nucleation\" \n# \"water vapour deposition\" \n# \"effect of raindrops\" \n# \"effect of snow\" \n# \"effect of graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "36. Cloud Scheme\nCharacteristics of the cloud scheme\n36.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of the atmosphere cloud scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "36.2. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the cloud scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "36.3. Atmos Coupling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nAtmosphere components that are linked to the cloud scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"atmosphere_radiation\" \n# \"atmosphere_microphysics_precipitation\" \n# \"atmosphere_turbulence_convection\" \n# \"atmosphere_gravity_waves\" \n# \"atmosphere_solar\" \n# \"atmosphere_volcano\" \n# \"atmosphere_cloud_simulator\" \n# TODO - please enter value(s)\n", "36.4. Uses Separate Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDifferent cloud schemes for the different types of clouds (convective, stratiform and boundary layer)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36.5. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProcesses included in the cloud scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"entrainment\" \n# \"detrainment\" \n# \"bulk cloud\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "36.6. Prognostic Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the cloud scheme a prognostic scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36.7. Diagnostic Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the cloud scheme a diagnostic scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36.8. Prognostic Variables\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList the prognostic variables used by the cloud scheme, if applicable.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud amount\" \n# \"liquid\" \n# \"ice\" \n# \"rain\" \n# \"snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "37. Cloud Scheme --&gt; Optical Cloud Properties\nOptical cloud properties\n37.1. Cloud Overlap Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nMethod for taking into account overlapping of cloud layers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"random\" \n# \"maximum\" \n# \"maximum-random\" \n# \"exponential\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "37.2. Cloud Inhomogeneity\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nMethod for taking into account cloud inhomogeneity", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution\nSub-grid scale water distribution\n38.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale water distribution type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n", "38.2. Function Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale water distribution function name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "38.3. Function Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale water distribution function type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "38.4. Convection Coupling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSub-grid scale water distribution coupling with convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n", "39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution\nSub-grid scale ice distribution\n39.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale ice distribution type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n", "39.2. Function Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale ice distribution function name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "39.3. Function Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale ice distribution function type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "39.4. Convection Coupling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSub-grid scale ice distribution coupling with convection", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n", "40. Observation Simulation\nCharacteristics of observation simulation\n40.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of observation simulator characteristics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "41. Observation Simulation --&gt; Isscp Attributes\nISSCP Characteristics\n41.1. Top Height Estimation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nCloud simulator ISSCP top height estimation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"no adjustment\" \n# \"IR brightness\" \n# \"visible optical depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "41.2. Top Height Direction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator ISSCP top height direction", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"lowest altitude level\" \n# \"highest altitude level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "42. Observation Simulation --&gt; Cosp Attributes\nCFMIP Observational Simulator Package attributes\n42.1. Run Configuration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP run configuration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Inline\" \n# \"Offline\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "42.2. Number Of Grid Points\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP number of grid points", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "42.3. Number Of Sub Columns\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP number of sub-columns used to simulate sub-grid variability", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "42.4. Number Of Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP number of levels", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "43. Observation Simulation --&gt; Radar Inputs\nCharacteristics of the cloud radar simulator\n43.1. Frequency\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar frequency (Hz)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "43.2. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface\" \n# \"space borne\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "43.3. Gas Absorption\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar uses gas absorption", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "43.4. Effective Radius\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar uses effective radius", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "44. Observation Simulation --&gt; Lidar Inputs\nCharacteristics of the cloud lidar simulator\n44.1. Ice Types\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator lidar ice type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ice spheres\" \n# \"ice non-spherical\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "44.2. Overlap\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nCloud simulator lidar overlap", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"max\" \n# \"random\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "45. Gravity Waves\nCharacteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.\n45.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of gravity wave parameterisation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "45.2. 
Sponge Layer\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSponge layer in the upper levels in order to avoid gravity wave reflection at the top.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.sponge_layer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rayleigh friction\" \n# \"Diffusive sponge layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "45.3. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground wave distribution", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"continuous spectrum\" \n# \"discrete spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "45.4. Subgrid Scale Orography\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSubgrid scale orography effects taken into account.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"effect on drag\" \n# \"effect on lifting\" \n# \"enhanced topography\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46. Gravity Waves --&gt; Orographic Gravity Waves\nGravity waves generated due to the presence of orography\n46.1. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the orographic gravity wave scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "46.2. Source Mechanisms\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOrographic gravity wave source mechanisms", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear mountain waves\" \n# \"hydraulic jump\" \n# \"envelope orography\" \n# \"low level flow blocking\" \n# \"statistical sub-grid scale variance\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46.3. Calculation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOrographic gravity wave calculation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"non-linear calculation\" \n# \"more than two cardinal directions\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46.4. Propagation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrographic gravity wave propagation scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"includes boundary layer ducting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46.5. Dissipation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrographic gravity wave dissipation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "47. Gravity Waves --&gt; Non Orographic Gravity Waves\nGravity waves generated by non-orographic processes.\n47.1. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the non-orographic gravity wave scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "47.2. Source Mechanisms\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nNon-orographic gravity wave source mechanisms", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convection\" \n# \"precipitation\" \n# \"background spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "47.3. Calculation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nNon-orographic gravity wave calculation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spatially dependent\" \n# \"temporally dependent\" \n# TODO - please enter value(s)\n", "47.4. Propagation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNon-orographic gravity wave propagation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "47.5. Dissipation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNon-orographic gravity wave dissipation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "48. Solar\nTop of atmosphere solar insolation characteristics\n48.1. 
Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of solar insolation of the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "49. Solar --&gt; Solar Pathways\nPathways for solar forcing of the atmosphere\n49.1. Pathways\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPathways for the solar forcing of the atmosphere model domain", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_pathways.pathways') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"SW radiation\" \n# \"precipitating energetic particles\" \n# \"cosmic rays\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "50. Solar --&gt; Solar Constant\nSolar constant and top of atmosphere insolation characteristics\n50.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime adaptation of the solar constant.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n", "50.2. Fixed Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf the solar constant is fixed, enter the value of the solar constant (W m-2).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "50.3. Transient Characteristics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nsolar constant transient characteristics (W m-2)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "51. Solar --&gt; Orbital Parameters\nOrbital parameters and top of atmosphere insolation characteristics\n51.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime adaptation of orbital parameters", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n", "51.2. Fixed Reference Date\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nReference date for fixed orbital parameters (yyyy)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "51.3. Transient Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of transient orbital parameters", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "51.4. 
Computation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod used for computing orbital parameters.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Berger 1978\" \n# \"Laskar 2004\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "52. Solar --&gt; Insolation Ozone\nImpact of solar insolation on stratospheric ozone\n52.1. Solar Ozone Impact\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes top of atmosphere insolation impact on stratospheric ozone?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "53. Volcanos\nCharacteristics of the implementation of volcanoes\n53.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of the implementation of volcanic effects in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "54. Volcanos --&gt; Volcanoes Treatment\nTreatment of volcanoes in the atmosphere\n54.1. Volcanoes Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow volcanic effects are modeled in the atmosphere.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"high frequency solar constant anomaly\" \n# \"stratospheric aerosols optical thickness\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
statsmodels/statsmodels.github.io
v0.12.1/examples/notebooks/generated/metaanalysis1.ipynb
bsd-3-clause
[ "Meta-Analysis in statsmodels\nStatsmodels include basic methods for meta-analysis. This notebook illustrates the current usage.\nStatus: The results have been verified against R meta and metafor packages. However, the API is still experimental and will still change. Some options for additional methods that are available in R meta and metafor are missing.\nThe support for meta-analysis has 3 parts:\n\neffect size functions: this currently includes\n effectsize_smd computes effect size and their standard errors for standardized mean difference,\neffectsize_2proportions computes effect sizes for comparing two independent proportions using risk difference, (log) risk ratio, (log) odds-ratio or arcsine square root transformation\nThe combine_effects computes fixed and random effects estimate for the overall mean or effect. The returned results instance includes a forest plot function.\nhelper functions to estimate the random effect variance, tau-squared\n\nThe estimate of the overall effect size in combine_effects can also be performed using WLS or GLM with var_weights.\nFinally, the meta-analysis functions currently do not include the Mantel-Hanszel method. However, the fixed effects results can be computed directly using StratifiedTable as illustrated below.", "%matplotlib inline\n\nimport numpy as np\nimport pandas as pd\nfrom scipy import stats, optimize\n\nfrom statsmodels.regression.linear_model import WLS\nfrom statsmodels.genmod.generalized_linear_model import GLM\n\nfrom statsmodels.stats.meta_analysis import (\n effectsize_smd, effectsize_2proportions, combine_effects,\n _fit_tau_iterative, _fit_tau_mm, _fit_tau_iter_mm)\n\n# increase line length for pandas\npd.set_option('display.width', 100)", "Example", "data = [\n[\"Carroll\", 94, 22,60,92, 20,60],\n[\"Grant\", 98, 21,65, 92,22, 65],\n[\"Peck\", 98, 28, 40,88 ,26, 40],\n[\"Donat\", 94,19, 200, 82,17, 200],\n[\"Stewart\", 98, 21,50, 88,22 , 45],\n[\"Young\", 96,21,85, 92 ,22, 85]]\ncolnames = [\"study\",\"mean_t\",\"sd_t\",\"n_t\",\"mean_c\",\"sd_c\",\"n_c\"]\nrownames = [i[0] for i in data]\ndframe1 = pd.DataFrame(data, columns=colnames)\nrownames\n\nmean2, sd2, nobs2, mean1, sd1, nobs1 = np.asarray(dframe1[[\"mean_t\",\"sd_t\",\"n_t\",\"mean_c\",\"sd_c\",\"n_c\"]]).T\nrownames = dframe1[\"study\"]\nrownames.tolist()\n\nnp.array(nobs1 + nobs2)", "estimate effect size standardized mean difference", "eff, var_eff = effectsize_smd(mean2, sd2, nobs2, mean1, sd1, nobs1)", "Using one-step chi2, DerSimonian-Laird estimate for random effects variance tau\nMethod option for random effect method_re=\"chi2\" or method_re=\"dl\", both names are accepted.\nThis is commonly referred to as the DerSimonian-Laird method, it is based on a moment estimator based on pearson chi2 from the fixed effects estimate.", "res3 = combine_effects(eff, var_eff, method_re=\"chi2\", use_t=True, row_names=rownames)\n# TODO: we still need better information about conf_int of individual samples\n# We don't have enough information in the model for individual confidence intervals\n# if those are not based on normal distribution.\nres3.conf_int_samples(nobs=np.array(nobs1 + nobs2))\nprint(res3.summary_frame())\n\nres3.cache_ci\n\nres3.method_re\n\nfig = res3.plot_forest()\nfig.set_figheight(6)\nfig.set_figwidth(6)\n\nres3 = combine_effects(eff, var_eff, method_re=\"chi2\", use_t=False, row_names=rownames)\n# TODO: we still need better information about conf_int of individual samples\n# We don't have enough information in the model for individual confidence 
intervals\n# if those are not based on normal distribution.\nres3.conf_int_samples(nobs=np.array(nobs1 + nobs2))\nprint(res3.summary_frame())", "Using iterated, Paule-Mandel estimate for random effects variance tau\nThe method commonly referred to as Paule-Mandel estimate is a method of moment estimate for the random effects variance that iterates between mean and variance estimate until convergence.", "res4 = combine_effects(eff, var_eff, method_re=\"iterated\", use_t=False, row_names=rownames)\nres4_df = res4.summary_frame()\nprint(\"method RE:\", res4.method_re)\nprint(res4.summary_frame())\nfig = res4.plot_forest()\n", "Example Kacker interlaboratory mean\nIn this example the effect size is the mean of measurements in a lab. We combine the estimates from several labs to estimate and overall average.", "eff = np.array([61.00, 61.40, 62.21, 62.30, 62.34, 62.60, 62.70,\n 62.84, 65.90])\nvar_eff = np.array([0.2025, 1.2100, 0.0900, 0.2025, 0.3844, 0.5625,\n 0.0676, 0.0225, 1.8225])\nrownames = ['PTB', 'NMi', 'NIMC', 'KRISS', 'LGC', 'NRC', 'IRMM', 'NIST', 'LNE']\n\nres2_DL = combine_effects(eff, var_eff, method_re=\"dl\", use_t=True, row_names=rownames)\nprint(\"method RE:\", res2_DL.method_re)\nprint(res2_DL.summary_frame())\nfig = res2_DL.plot_forest()\nfig.set_figheight(6)\nfig.set_figwidth(6)\n\nres2_PM = combine_effects(eff, var_eff, method_re=\"pm\", use_t=True, row_names=rownames)\nprint(\"method RE:\", res2_PM.method_re)\nprint(res2_PM.summary_frame())\nfig = res2_PM.plot_forest()\nfig.set_figheight(6)\nfig.set_figwidth(6)", "Meta-analysis of proportions\nIn the following example the random effect variance tau is estimated to be zero. \nI then change two counts in the data, so the second example has random effects variance greater than zero.", "import io\n\n\nss = \"\"\"\\\n study,nei,nci,e1i,c1i,e2i,c2i,e3i,c3i,e4i,c4i\n 1,19,22,16.0,20.0,11,12,4.0,8.0,4,3\n 2,34,35,22.0,22.0,18,12,15.0,8.0,15,6\n 3,72,68,44.0,40.0,21,15,10.0,3.0,3,0\n 4,22,20,19.0,12.0,14,5,5.0,4.0,2,3\n 5,70,32,62.0,27.0,42,13,26.0,6.0,15,5\n 6,183,94,130.0,65.0,80,33,47.0,14.0,30,11\n 7,26,50,24.0,30.0,13,18,5.0,10.0,3,9\n 8,61,55,51.0,44.0,37,30,19.0,19.0,11,15\n 9,36,25,30.0,17.0,23,12,13.0,4.0,10,4\n 10,45,35,43.0,35.0,19,14,8.0,4.0,6,0\n 11,246,208,169.0,139.0,106,76,67.0,42.0,51,35\n 12,386,141,279.0,97.0,170,46,97.0,21.0,73,8\n 13,59,32,56.0,30.0,34,17,21.0,9.0,20,7\n 14,45,15,42.0,10.0,18,3,9.0,1.0,9,1\n 15,14,18,14.0,18.0,13,14,12.0,13.0,9,12\n 16,26,19,21.0,15.0,12,10,6.0,4.0,5,1\n 17,74,75,,,42,40,,,23,30\"\"\"\ndf3 = pd.read_csv(io.StringIO(ss))\ndf_12y = df3[[\"e2i\", \"nei\", \"c2i\", \"nci\"]]\n# TODO: currently 1 is reference, switch labels\ncount1, nobs1, count2, nobs2 = df_12y.values.T\ndta = df_12y.values.T\n\neff, var_eff = effectsize_2proportions(*dta, statistic=\"rd\")\n\neff, var_eff\n\nres5 = combine_effects(eff, var_eff, method_re=\"iterated\", use_t=False)#, row_names=rownames)\nres5_df = res5.summary_frame()\nprint(\"method RE:\", res5.method_re)\nprint(\"RE variance tau2:\", res5.tau2)\nprint(res5.summary_frame())\nfig = res5.plot_forest()\nfig.set_figheight(8)\nfig.set_figwidth(6)", "changing data to have positive random effects variance", "dta_c = dta.copy()\ndta_c.T[0, 0] = 18\ndta_c.T[1, 0] = 22\ndta_c.T\n\neff, var_eff = effectsize_2proportions(*dta_c, statistic=\"rd\")\nres5 = combine_effects(eff, var_eff, method_re=\"iterated\", use_t=False)#, row_names=rownames)\nres5_df = res5.summary_frame()\nprint(\"method RE:\", res5.method_re)\nprint(res5.summary_frame())\nfig = 
res5.plot_forest()\nfig.set_figheight(8)\nfig.set_figwidth(6)\n\nres5 = combine_effects(eff, var_eff, method_re=\"chi2\", use_t=False)\nres5_df = res5.summary_frame()\nprint(\"method RE:\", res5.method_re)\nprint(res5.summary_frame())\nfig = res5.plot_forest()\nfig.set_figheight(8)\nfig.set_figwidth(6)", "Replicate fixed effect analysis using GLM with var_weights\ncombine_effects computes weighted average estimates which can be replicated using GLM with var_weights or with WLS.\nThe scale option in GLM.fit can be used to replicate fixed meta-analysis with fixed and with HKSJ/WLS scale", "from statsmodels.genmod.generalized_linear_model import GLM\n\neff, var_eff = effectsize_2proportions(*dta_c, statistic=\"or\")\nres = combine_effects(eff, var_eff, method_re=\"chi2\", use_t=False)\nres_frame = res.summary_frame()\nprint(res_frame.iloc[-4:])", "We need to fix scale=1 in order to replicate standard errors for the usual meta-analysis.", "weights = 1 / var_eff\nmod_glm = GLM(eff, np.ones(len(eff)),\n var_weights=weights)\nres_glm = mod_glm.fit(scale=1.)\nprint(res_glm.summary().tables[1])\n\n# check results\nres_glm.scale, res_glm.conf_int() - res_frame.loc[\"fixed effect\", [\"ci_low\", \"ci_upp\"]].values", "Using HKSJ variance adjustment in meta-analysis is equivalent to estimating the scale using pearson chi2, which is also the default for the gaussian family.", "res_glm = mod_glm.fit(scale=\"x2\")\nprint(res_glm.summary().tables[1])\n\n# check results\nres_glm.scale, res_glm.conf_int() - res_frame.loc[\"fixed effect\", [\"ci_low\", \"ci_upp\"]].values", "Mantel-Hanszel odds-ratio using contingency tables\nThe fixed effect for the log-odds-ratio using the Mantel-Hanszel can be directly computed using StratifiedTable.\nWe need to create a 2 x 2 x k contingency table to be used with StratifiedTable.", "t, nt, c, nc = dta_c\ncounts = np.column_stack([t, nt - t, c, nc - c])\nctables = counts.T.reshape(2, 2, -1)\nctables[:, :, 0]\n\ncounts[0]\n\ndta_c.T[0]\n\nimport statsmodels.stats.api as smstats\n\nst = smstats.StratifiedTable(ctables.astype(np.float64))", "compare pooled log-odds-ratio and standard error to R meta package", "st.logodds_pooled, st.logodds_pooled - 0.4428186730553189 # R meta\n\nst.logodds_pooled_se, st.logodds_pooled_se - 0.08928560091027186 # R meta\n\nst.logodds_pooled_confint()\n\nprint(st.test_equal_odds())\n\nprint(st.test_null_odds())", "check conversion to stratified contingency table\nRow sums of each table are the sample sizes for treatment and control experiments", "ctables.sum(1)\n\nnt, nc", "Results from R meta package\n```\n\nres_mb_hk = metabin(e2i, nei, c2i, nci, data=dat2, sm=\"OR\", Q.Cochrane=FALSE, method=\"MH\", method.tau=\"DL\", hakn=FALSE, backtransf=FALSE)\nres_mb_hk\n logOR 95%-CI %W(fixed) %W(random)\n1 2.7081 [ 0.5265; 4.8896] 0.3 0.7\n2 1.2567 [ 0.2658; 2.2476] 2.1 3.2\n3 0.3749 [-0.3911; 1.1410] 5.4 5.4\n4 1.6582 [ 0.3245; 2.9920] 0.9 1.8\n5 0.7850 [-0.0673; 1.6372] 3.5 4.4\n6 0.3617 [-0.1528; 0.8762] 12.1 11.8\n7 0.5754 [-0.3861; 1.5368] 3.0 3.4\n8 0.2505 [-0.4881; 0.9892] 6.1 5.8\n9 0.6506 [-0.3877; 1.6889] 2.5 3.0\n10 0.0918 [-0.8067; 0.9903] 4.5 3.9\n11 0.2739 [-0.1047; 0.6525] 23.1 21.4\n12 0.4858 [ 0.0804; 0.8911] 18.6 18.8\n13 0.1823 [-0.6830; 1.0476] 4.6 4.2\n14 0.9808 [-0.4178; 2.3795] 1.3 1.6\n15 1.3122 [-1.0055; 3.6299] 0.4 0.6\n16 -0.2595 [-1.4450; 0.9260] 3.1 2.3\n17 0.1384 [-0.5076; 0.7844] 8.5 7.6\n\nNumber of studies combined: k = 17\n logOR 95%-CI z p-value\n\nFixed effect model 0.4428 [0.2678; 0.6178] 4.96 < 0.0001\nRandom 
effects model 0.4295 [0.2504; 0.6086] 4.70 < 0.0001\nQuantifying heterogeneity:\n tau^2 = 0.0017 [0.0000; 0.4589]; tau = 0.0410 [0.0000; 0.6774];\n I^2 = 1.1% [0.0%; 51.6%]; H = 1.01 [1.00; 1.44]\nTest of heterogeneity:\n Q d.f. p-value\n 16.18 16 0.4404\nDetails on meta-analytical method:\n- Mantel-Haenszel method\n- DerSimonian-Laird estimator for tau^2\n- Jackson method for confidence interval of tau^2 and tau\n\nres_mb_hk$TE.fixed\n[1] 0.4428186730553189\nres_mb_hk$seTE.fixed\n[1] 0.08928560091027186\nc(res_mb_hk$lower.fixed, res_mb_hk$upper.fixed)\n[1] 0.2678221109331694 0.6178152351774684\n\n```", "print(st.summary())" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
SylvainCorlay/bqplot
examples/Interactions/Selectors.ipynb
apache-2.0
[ "Selectors\nIndex\n\nIntroduction\nBrush Selectors\nFastIntervalSelector\nLassoSelector\nIndexSelector\nMultiSelector", "import pandas as pd\nimport numpy as np\n\nsymbol = 'Security 1'\nsymbol2 = 'Security 2'\n\nprice_data = pd.DataFrame(np.cumsum(np.random.randn(150, 2).dot([[0.5, 0.4], [0.4, 1.0]]), axis=0) + 100,\n columns=[symbol, symbol2],\n index=pd.date_range(start='01-01-2007', periods=150))\n\ndates_actual = price_data.index.values\nprices = price_data[symbol].values\n\nfrom bqplot import *\nfrom bqplot.interacts import (\n FastIntervalSelector, IndexSelector, BrushIntervalSelector,\n BrushSelector, MultiSelector, LassoSelector,\n)\n\nfrom ipywidgets import ToggleButtons, VBox, HTML", "Introduction <a class=\"anchor\" id=\"introduction\"></a>\nSelectors are part of the Interaction Layer (link).\nThey are used to select subparts of Marks, that correspond to different regions on the Figure canvas. Different types of selectors select different types of regions:\n- BrushSelector, FastIntervalSelector and MultiSelector select rectangular regions\n- IndexSelector selects the elements closest to an abcissa\n- LassoSelector selects elements in a region drawn by the user\nHow they work\nbqplot Selectors need to be tied to two other widgets:\n- One or several marks. Their selected attribute, a list of data indices, will be set by the Selector instance.\n- One (1d selection) or two (2d selection) Scales. These are the scales that the Selector operates on. The Selector's selected attribute will be expressed as values of those scales.\nThe Selector must then be passed to the desired Figure, as its interaction attribute.\nHopefully this will be clear in the following examples.", "# Define scales for the rest of the notebook\nscales = {'x': DateScale(), 'y': LinearScale()}", "Brush Selectors <a class=\"anchor\" id=\"brushselectors\"></a>\nSelects a rectangular region of the Figure.\nUsage:\n\nClick and drag to create a new brush\nDrag the edge of the brush to change its width\nDrag the inside of the brush to translate it\nClicking and dragging outside of the brush deletes it and creates a new one.", "# The Mark we want to select subsamples of\nscatter = Scatter(x=dates_actual, y=prices, scales=scales, colors=['orange'],\n selected_style={'opacity': '1'}, unselected_style={'opacity': '0.2'})\n# Create the brush selector, passing it its corresponding scale.\n# Notice that we do not pass it any marks for now\nbrushintsel = BrushIntervalSelector(scale=scales['x'])\n\nx_ax = Axis(label='Index', scale=scales['x'])\nx_ay = Axis(label=(symbol + ' Price'), scale=scales['y'], orientation='vertical')\n# Pass the Selector instance to the Figure\nfig = Figure(marks=[scatter], axes=[x_ax, x_ay],\n title='''Brush Interval Selector Example. 
Click and drag on the Figure to action.''',\n interaction=brushintsel)\n\n# The following text widgets are used to display the `selected` attributes\ntext_brush = HTML()\ntext_scatter = HTML()\n\n# This function updates the text, triggered by a change in the selector\ndef update_brush_text(*args):\n text_brush.value = \"The Brush's selected attribute is {}\".format(brushintsel.selected)\ndef update_scatter_text(*args):\n text_scatter.value = \"The scatter's selected indices are {}\".format(scatter.selected)\nbrushintsel.observe(update_brush_text, 'selected')\nscatter.observe(update_scatter_text, 'selected')\n\nupdate_brush_text()\nupdate_scatter_text()\n\n# Display\nVBox([fig, text_brush, text_scatter])", "Linking the brush to the scatter\nPassing a mark (or several) to the selector, will link the mark's selected indices to the selector.", "brushintsel.marks = [scatter]", "From now on we will stop printing out the selected indices, but rather use the selected_style and unselected_style attributes of the Marks to check which elements are selected.", "def create_figure(selector, **selector_kwargs):\n '''\n Returns a Figure with a Scatter and a Selector.\n \n Arguments\n ---------\n selector: The type of Selector, one of\n {'BrushIntervalSelector', 'BrushSelector', 'FastIntervalSelector', 'IndexSelector', 'LassoSelector'}\n selector_kwargs: Arguments to be passed to the Selector\n '''\n scatter = Scatter(x=dates_actual, y=prices, scales=scales, colors=['orange'],\n selected_style={'opacity': '1'}, unselected_style={'opacity': '0.2'})\n sel = selector(marks=[scatter], **selector_kwargs)\n \n text_brush = HTML()\n if selector != LassoSelector:\n def update_text(*args):\n text_brush.value = '{}.selected = {}'.format(selector.__name__, sel.selected)\n sel.observe(update_text, 'selected')\n update_text()\n\n x_ax = Axis(label='Index', scale=scales['x'])\n x_ay = Axis(label=(symbol + ' Price'), scale=scales['y'], orientation='vertical')\n fig = Figure(marks=[scatter], axes=[x_ax, x_ay], title='{} Example'.format(selector.__name__),\n interaction=sel)\n return VBox([fig, text_brush])", "BrushIntervalSelector on the y-axis\nThe attribute orientation can be set to 'vertical' to select on the y-axis. Be careful to pass the corresponding y-scale.", "create_figure(BrushIntervalSelector, orientation='vertical', scale=scales['y'])", "2d BrushSelector\nThe BrushSelector is 2d, and must be fed 2 scales, x_scale and y_scale.\nNote that BrushSelector.selected is now 2x2. 
It is the coordinates of the lower left-hand and upper right-hand corners of the rectangle.", "create_figure(BrushSelector, x_scale=scales['x'], y_scale=scales['y'])", "FastIntervalSelector <a class=\"anchor\" id=\"fastintervalselector\"></a>\nThe FastIntervalSelector is functionally like a BrushIntervalSelector, but provides a more fluid and rapid interaction.\nUsage:\n\nThe first click creates the selector.\nMoving the mouse up and down widens and narrows the interval width.\nMoving the mouse left and right translates the interval left and right.\nSubsequent clicks will freeze/unfreeze the interval width\nA double-click will freeze both the width and the translation\n\nExperiment and get a feel for it in the example below.", "create_figure(FastIntervalSelector, scale=scales['x'])", "As of the latest version, FastIntervalSelector is only supported for 1d interaction along the x-axis\nLassoSelector <a class=\"anchor\" id=\"lassoselector\"></a>\nThis 2-D selector enables the user to select multiple sets of data points\nby drawing lassos on the figure. \nUsage:\n\nClick and drag to draw a new lasso\nClick a lasso to select (de-select) it. Mult\nPress the 'Delete' button to delete the selected lassos", "create_figure(LassoSelector)", "IndexSelector <a class=\"anchor\" id=\"indexselector\"></a>\nThis 1-D selector selects a unique value on its scale. The attached Mark's selected element is the closest element to that value.\nUsage:\n\nFirst click creates and activates the selector \nMoving the mouse translates the selector\nSubsequent clicks freeze/unfreeze the selector", "create_figure(IndexSelector, scale=scales['x'])", "As of the latest version, IndexSelector is only supported for interaction along the x-axis.\nMultiSelector <a class=\"anchor\" id=\"multiselector\"></a>\nThis 1-D selector is equivalent to multiple brush selectors.\nUsage:\n\nThe first brush works like a regular brush.\nCtrl + click creates a new brush, which works like the regular brush. \nThe active brush has a Green border while all the inactive brushes have a Red border.\nShift + click deactivates the current active brush. Now, click on any inactive brush to make it active.\nCtrl + Shift + click clears and resets all the brushes.\n\nEach brush has a name (0, 1, 2, ... by default), and the selected attribute is a dict {brush_name: brush_extent}", "create_figure(MultiSelector, scale=scales['x'])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/docs-l10n
site/zh-cn/tutorials/load_data/numpy.ipynb
apache-2.0
[ "Copyright 2019 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "使用 tf.data 加载 NumPy 数据\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td><a target=\"_blank\" href=\"https://tensorflow.google.cn/tutorials/load_data/numpy\"><img src=\"https://tensorflow.google.cn/images/tf_logo_32px.png\">在 Tensorflow.org 上查看</a></td>\n <td><a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/load_data/numpy.ipynb\"><img src=\"https://tensorflow.google.cn/images/colab_logo_32px.png\">在 Google Colab 运行</a></td>\n <td><a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/load_data/numpy.ipynb\"><img src=\"https://tensorflow.google.cn/images/GitHub-Mark-32px.png\">在 Github 上查看源代码</a></td>\n <td><a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/tutorials/load_data/numpy.ipynb\"><img src=\"https://tensorflow.google.cn/images/download_logo_32px.png\">下载此 notebook</a></td>\n</table>\n\n本教程提供了一个将数据从 NumPy 数组加载到 tf.data.Dataset 中的示例。\n此示例从 .npz 文件加载 MNIST 数据集。但是,NumPy 数组的来源并不重要。\n安装", " \nimport numpy as np\nimport tensorflow as tf", "从 .npz 文件中加载", "DATA_URL = 'https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz'\n\npath = tf.keras.utils.get_file('mnist.npz', DATA_URL)\nwith np.load(path) as data:\n train_examples = data['x_train']\n train_labels = data['y_train']\n test_examples = data['x_test']\n test_labels = data['y_test']", "使用 tf.data.Dataset 加载 NumPy 数组\n假设您有一个示例数组和相应的标签数组,请将两个数组作为元组传递给 tf.data.Dataset.from_tensor_slices 以创建 tf.data.Dataset 。", "train_dataset = tf.data.Dataset.from_tensor_slices((train_examples, train_labels))\ntest_dataset = tf.data.Dataset.from_tensor_slices((test_examples, test_labels))", "使用该数据集\n打乱和批次化数据集", "BATCH_SIZE = 64\nSHUFFLE_BUFFER_SIZE = 100\n\ntrain_dataset = train_dataset.shuffle(SHUFFLE_BUFFER_SIZE).batch(BATCH_SIZE)\ntest_dataset = test_dataset.batch(BATCH_SIZE)", "建立和训练模型", "model = tf.keras.Sequential([\n tf.keras.layers.Flatten(input_shape=(28, 28)),\n tf.keras.layers.Dense(128, activation='relu'),\n tf.keras.layers.Dense(10)\n])\n\nmodel.compile(optimizer=tf.keras.optimizers.RMSprop(),\n loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n metrics=['sparse_categorical_accuracy'])\n\nmodel.fit(train_dataset, epochs=10)\n\nmodel.evaluate(test_dataset)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Mashimo/datascience
02-Classification/knn.ipynb
apache-2.0
[ "K-Nearest Neighbours (KNN)\nK-Neighbours is a supervised classification algorithm. Classification is the process of assigning samples into those groups. Given a set of groups, take a set of samples and mark each sample as being a member of a group. Each group being the correct answer, label, or classification of the sample.\nThe K-Nearest Neighbours - or KNN - classifier, is one of the simplest machine learning algorithms.\nK-Nearest Neighbours works by first simply storing all of your training data samples.\nThen in the future, when you attempt to check the classification of a new, never-before seen sample, it finds the nearest \"K\" number of samples to it from within your training data. You must have numeric features in order for 'nearest' to be meaningful.\nSciKit-Learn's K-Nearest Neighbours only supports numeric features, so you'll have to do whatever has to be done to get your data into that format before proceeding. The distance will be measures as a standard Euclidean.\nWith the nearest neighbors found, K-Neighbours looks at their classes and takes a mode vote to assign a label to the new data point. Further extensions of K-Neighbours can take into account the distance to the samples to weigh their voting power. \nEach new prediction or classification made, the algorithm has to again find the nearest neighbors to that sample in order to call a vote for it. This process is where a majority of the time is spent, so instead of using brute force to search the training data as if it were stored in a list, tree structures are used instead to optimize the search times. Due to this, the number of classes in dataset doesn't have a bearing on its execution speed. Only the number of records in your training data set.\nA wheat example\nYes, always the same example :D \nWe do the usual data reading and pre-processing, concluding with normalisation and splitting the data into test and training datasets. \nRead the data", "import pandas as pd\n\n# \n# Load up the dataset into a variable called X. \n#\nX = pd.read_csv(\"../datasets/wheat.data\", index_col='id')\nX.head()\n", "Data pre-processing\nFirst we separate the target variable", "#\n# Copy the 'wheat_type' series slice out of X, and into a series\n# called 'y'. Then drop the original 'wheat_type' column from the X\n#\ny = X.wheat_type.copy()\nX.drop(['wheat_type'], axis=1, inplace=True)\n\ny_original = y\n\n# Do a quick, \"ordinal\" conversion of 'y'. \n#\ny = y.astype(\"category\").cat.codes", "Fix the invalid values", "#\n# Basic nan munging. 
Fill each row's nans with the mean of the feature\n#\nX.fillna(X.mean(), inplace=True)", "Split the data into training and testing datasets", "from sklearn.model_selection import train_test_split\n\n#\n# Split X into training and testing data sets\n#\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, \n random_state=1)", "Data normalisation", "from sklearn import preprocessing\n\n# \n# Create an instance of SKLearn's Normalizer class and then train it\n# using its .fit() method against the *training* data.\n#\n#\nnormaliser = preprocessing.Normalizer().fit(X_train)\n\n#\n# With the trained pre-processor, transform both training AND\n# testing data.\n#\n# NOTE: Any testing data has to be transformed with the preprocessor\n# that has been fit against the training data, so that it exist in the same\n# feature-space as the original data used to train the models.\n#\nX_train_normalised = normaliser.transform(X_train)\nX_train = pd.DataFrame(X_train_normalised)\n\nX_test_normalised = normaliser.transform(X_test)\nX_test = pd.DataFrame(X_test_normalised)", "Apply PCA\nFinally apply a PCA transformation. \nThis has to be done because the only way to visualise the decision boundary in 2D would be if the KNN algorithm ran in 2D as well.\nNote that removing the PCA will improve the accuracy (KNeighbours is applied to the entire train data, not just the two principal components).", "from sklearn.decomposition import PCA\n\n#\n# Just like the preprocessing transformation, create a PCA\n# transformation as well. Fit it against the training data, and then\n# project the training and testing features into PCA space using the\n# PCA model's .transform() method.\n#\n#\n\npca_reducer = PCA(n_components=2).fit(X_train_normalised)\n\nX_train_pca = pca_reducer.transform(X_train_normalised)\nX_test_pca = pca_reducer.transform(X_test_normalised)", "KNN algorithm\nNow we finally apply the K-neighbours algorithm, using the related module from SKlearn.\nFor K-Neighbours, generally the higher your \"K\" value, the smoother and less jittery your decision surface becomes. Higher K values also result in your model providing probabilistic information about the ratio of samples per each class. There is a tradeoff though, as higher K values mean the algorithm is less sensitive to local fluctuations since farther samples are taken into account. This causes it to only model the overall classification function without much attention to detail, and increases the computational complexity of the classification.\nWe use here K = 9", "from sklearn.neighbors import KNeighborsClassifier\n\n#\n# Create and train a KNeighborsClassifier. Start with K=9 neighbors.\n# NOTE: Be sure to train the classifier against the pre-processed, PCA-\n# transformed training data above! \n#\nknn = KNeighborsClassifier(n_neighbors=9)\nknn.fit(X_train_pca, y_train) ", "Decision Boundaries\nA unique feature of supervised classification algorithms are their decision boundaries, or more generally, their n-dimensional decision surface: a threshold or region where if superseded, will result in your sample being assigned that class.\nThe decision surface isn't always spherical. 
In fact, it can take many different types of shapes depending on the algorithm that generated it.\nLet's prepare a function to plot the decision boundaries, that we can use for other examples, later", "import matplotlib.pyplot as plt\nimport matplotlib\n\nmatplotlib.style.use('ggplot') # Look Pretty\n\n\nimport numpy as np\n\n\ndef plotDecisionBoundary(model, X, y, colors, padding=0.6, resolution = 0.0025):\n \n fig = plt.figure(figsize=(8,6))\n ax = fig.add_subplot(111)\n\n\n # Calculate the boundaries\n x_min, x_max = X[:, 0].min(), X[:, 0].max()\n y_min, y_max = X[:, 1].min(), X[:, 1].max()\n x_range = x_max - x_min\n y_range = y_max - y_min\n x_min -= x_range * padding\n y_min -= y_range * padding\n x_max += x_range * padding\n y_max += y_range * padding\n\n\n # Create a 2D Grid Matrix. The values stored in the matrix\n # are the predictions of the class at at said location\n xx, yy = np.meshgrid(np.arange(x_min, x_max, resolution),\n np.arange(y_min, y_max, resolution))\n\n # What class does the classifier say?\n Z = model.predict(np.c_[xx.ravel(), yy.ravel()])\n Z = Z.reshape(xx.shape)\n\n # Plot the contour map using the rainbow colourmap\n #cs = plt.contourf(xx, yy, Z, cmap=plt.cm.rainbow)\n ax.contourf(xx, yy, Z, cmap=plt.cm.rainbow)\n fig.tight_layout(pad=2)\n\n # Plot the testing original points as well...\n for label in np.unique(y):\n indices = np.where(y == label)\n ax.scatter(X[indices, 0], X[indices, 1], c=colors[label], alpha=0.8)\n\n # print the title\n p = model.get_params()\n fig.suptitle('Decision boundaries, K = ' + str(p['n_neighbors']))", "Just a reminder: these are the wheat labels:", "for label in np.unique(y_original):\n print (label)\n\nmyColours = ['royalblue','forestgreen','ghostwhite']\n\nplotDecisionBoundary(knn, X_train_pca, y_train, colors = myColours)", "The KNN (with K=9) algorithm divided the space into three clusters, one for each wheat type.\nThe clusters fit quite well the testing data but not perfectly, some data points are mis-classified.", "#\n# Display the accuracy score of the test data/labels, computed by\n# the KNeighbors model.\n#\n# NOTE: You do NOT have to run .predict before calling .score, since\n# .score will take care of running the predictions automatically.\n#\nprint(knn.score(X_test_pca, y_test))\n", "K-Neighbours is particularly useful when no other model fits your data well, as it is a parameter free approach to classification. So for example, you don't have to worry about things like your data being linearly separable or not.\nSome of the caution-points to keep in mind while using K-Neighbours is that your data needs to be measurable. If there is no metric for discerning distance between your features, K-Neighbours cannot help you. As with all algorithms dependent on distance measures, it is also sensitive to feature scaling, to perturbations and the local structure of your dataset, particularly at lower \"K\" values.\nKNN with hyper-parameters\nWe now explore deeper how the algorithm's parameters impact it, using as an example a dataset to classify a breast tumor as benign or malign. \nBreast cancer doesn't develop over night and, like any other cancer, can be treated extremely effectively if detected in its earlier stages. Part of the understanding cancer is knowing that not all irregular cell growths are malignant; some are benign, or non-dangerous, non-cancerous growths. 
\nBeing able to properly assess if a tumor is actually benign and ignorable, or malignant and alarming is therefore of importance, and also is a problem that might be solvable through data and machine learning.\nUsing the Breast Cancer Wisconsin Original data set, provided courtesy of UCI's Machine Learning Repository\nRead the data", "# \n# Load in the dataset, identify nans, and set proper headers.\n#\nX = pd.read_csv(\"../Datasets/breast-cancer-wisconsin.data\", header=None,\n names=['sample', 'thickness', 'size', 'shape', 'adhesion', \n 'epithelial', 'nuclei', 'chromatin', 'nucleoli', \n 'mitoses', 'status'], index_col='sample', na_values='?')\n\n\nX.head()", "Data Pre-processing\nExtract the target values, remove all NaN values and split into testing and training data", "# \n# Copy out the status column into a slice, then drop it from the main\n# dataframe. \n#\n#\ny = X.status.copy()\nX.drop(['status'], axis=1, inplace=True)\n\n#\n# With the labels safely extracted from the dataset, replace any nan values\n# with the mean feature / column value\n#\nif X.isnull().values.any() == True:\n print(\"Preprocessing data: substituted all NaN with mean value\")\n X.fillna(X.mean(), inplace=True)\nelse:\n print(\"Preprocessing data: No NaN found!\")\n\n#\n# Do train_test_split. set the random_state=7 for reproduceability, and keep\n# the test_size at 0.5 (50%).\n#\nX_train, X_test, y_train, y_test = train_test_split(X, y, \n test_size=0.5, random_state=7)\n", "Define hyper-parameters.\nWe will loop the KNN algorithm with different parameters, specifically: \n\ndifferent scalers for normalisation\nreduced or not reduced (here PCA but can also use isomap for reduction)\ndifferent weight function\nand different values of K", "# automate the tuning of hyper-parameters using for-loops to traverse the search space. \nreducers = [False, True]\nweights = ['uniform', 'distance']\n\n# Experiment with the basic SKLearn preprocessing scalers. We know that\n# the features consist of different units mixed in together, so it might be\n# reasonable to assume feature scaling is necessary.\nscalers = [preprocessing.Normalizer, preprocessing.StandardScaler,\n preprocessing.MinMaxScaler, preprocessing.RobustScaler]\n\n\nfrom sklearn.decomposition import PCA\nfrom sklearn import manifold\n", "Hyper-parameters tuning\nLoop through all the parameters: fit the model and print the result every time", "# the f print function works from Python 3.6, you can use print otherwise\nseparator = \"--------------------------------------\"\nprint('*** Starting K-neighbours classifier')\nprint(separator)\n\nbestScore = 0.0\n\n# outer loop: the scalers\nfor scaler in scalers:\n print(\"* Scaler = \", scaler)\n\n scalerTrained = scaler().fit(X_train)\n \n X_train_scaled = scalerTrained.transform(X_train)\n X_test_scaled = scalerTrained.transform(X_test)\n \n\n print(\"PCA? | K | Weight | Score\")\n print(separator)\n \n # next loop though PCA reduction or not\n \n reducer = None\n\n for isPCA in reducers:\n if isPCA:\n #\n # Implement PCA here: reduce down to two dimensions.\n #\n reducer = PCA(n_components=2).fit(X_train_scaled)\n\n else:\n #\n # Implement Isomap here. 
K values can be from 5 to 10.\n # Reduce down to two dimensions.\n #\n reducer = manifold.Isomap(n_neighbors=10, n_components=2).fit(X_train_scaled)\n\n # 2D transformation on both datasets\n X_train_reduced = reducer.transform(X_train_scaled)\n X_test_reduced = reducer.transform(X_test_scaled)\n\n # \n # Implement and train KNeighborsClassifier on the projected 2D\n # training data. You can use any K value from 1 - 15, so play around\n # with it and see what results you can come up. Your goal is to find a\n # good balance where you aren't too specific (low-K), nor are you too\n # general (high-K). You should also experiment with how changing the weights\n # parameter affects the results.\n #\n for k in range(1,16):\n for weight in weights:\n\n #\n # Train the model against data_train. \n #\n knmodel = KNeighborsClassifier(n_neighbors = k, weights = weight)\n knmodel.fit(X_train_reduced, y_train) \n \n \n# INFO: Be sure to always keep the domain of the problem in mind! It's\n# WAY more important to errantly classify a benign tumor as malignant,\n# and have it removed, than to incorrectly leave a malignant tumor, believing\n# it to be benign, and then having the patient progress in cancer. Since the UDF\n# weights don't give you any class information, the only way to introduce this\n# data into SKLearn's KNN Classifier is by \"baking\" it into your data. For\n# example, randomly reducing the ratio of benign samples compared to malignant\n# samples from the training set.\n\n#\n# Calculate + Print the accuracy of the testing set\n#\n currentScore = knmodel.score(X_test_reduced, y_test)\n \n print(f\"{isPCA} | {k} | {weight} | {currentScore}\")\n \n # save the best model for plotting it later\n if (currentScore > bestScore):\n bestScore = currentScore\n bestPCA = isPCA\n bestK = k\n bestWeight = weight\n bestScaler = scaler", "Re-apply the best parameters to the model", "print(\"These are the best parameters for the model:\")\nprint(\"PCA? | K | Weight | Scaler | Score\")\nprint(f\"{bestPCA} | {bestK} | {bestWeight} | {bestScaler} | {bestScore}\")\n\nBestScalerTrained = bestScaler().fit(X_train)\n \nX_train_scaled = BestScalerTrained.transform(X_train)\nX_test_scaled = BestScalerTrained.transform(X_test)\n\n\nif isPCA:\n #\n # Implement PCA here. \n # You should reduce down to two dimensions.\n #\n reducer = PCA(n_components=2).fit(X_train_scaled)\n\nelse:\n #\n # Implement Isomap here. K values from 5-10.\n # You should reduce down to two dimensions.\n #\n reducer = manifold.Isomap(n_neighbors=10, n_components=2).fit(X_train_scaled)\n\n #\n # Train your model against data_train, then transform both\n # data_train and data_test using your model. You can save the results right\n # back into the variables themselves.\n #\nX_train_reduced = reducer.transform(X_train_scaled)\nX_test_reduced = reducer.transform(X_test_scaled)\n\n # \n # Implement and train KNeighborsClassifier on your projected 2D\n # training data here. You can use any K value from 1 - 15, so play around\n # with it and see what results you can come up. Your goal is to find a\n # good balance where you aren't too specific (low-K), nor are you too\n # general (high-K). 
You should also experiment with how changing the weights\n # parameter affects the results.\n #\nbestKnmodel = KNeighborsClassifier(n_neighbors = bestK, weights = bestWeight)\nbestKnmodel.fit(X_train_reduced, y_train) ", "Plotting the decision boundaries", "# 2 for benign (blue colour), 4 for malignant (red colour)\nmyColours = {2:'royalblue',4:'lightsalmon'} \n\nplotDecisionBoundary(bestKnmodel, X_test_reduced, y_test, colors = myColours, padding = 0.1, resolution = 0.1)", "Another example for KNN with reduction", "import scipy.io\nimport math\n\n# Same datasets as in the PCA example!\n# load up the face_data.mat, calculate the\n# num_pixels value, and rotate the images to being right-side-up\n# instead of sideways.\n#\nmat = scipy.io.loadmat('../datasets/face_data.mat')\ndf = pd.DataFrame(mat['images']).T\nnum_images, num_pixels = df.shape\nnum_pixels = int(math.sqrt(num_pixels))\n\n# Rotate the pictures, so we don't have to crane our necks:\nfor i in range(num_images):\n df.loc[i,:] = df.loc[i,:].values.reshape(num_pixels, num_pixels).T.reshape(-1)\n\n\n#\n# Load up the face_labels dataset. It only has a single column, and\n# we're only interested in that single column. We have to slice the \n# column out so that we have access to it as a \"Series\" rather than as a\n# \"Dataframe\".\n#\nlabelsDF = pd.read_csv(\"../Datasets/face_labels.csv\", header=None)\nlabels = labelsDF.iloc[:,0]\n\n#\n# Do train_test_split. The labels are actually passed in as a series\n# (instead of as an NDArray) to access their underlying indices\n# later on. This is necessary to find the samples in the original\n# dataframe, which is used to plot the testing data as images rather\n# than as points:\n#\ndata_train, data_test, label_train, label_test = train_test_split(df, labels, \n test_size=0.15, random_state=7)\n\n# If you'd like to try with PCA instead of Isomap,\n# as the dimensionality reduction technique:\nTest_PCA = False\n\nif Test_PCA:\n # INFO: PCA is used *before* KNeighbors to simplify the high dimensionality\n # image samples down to just 2 principal components! A lot of information\n # (variance) is lost during the process, as I'm sure you can imagine. But\n # you have to drop the dimension down to two, otherwise you wouldn't be able\n # to visualize a 2D decision surface / boundary. In the wild, you'd probably\n # leave in a lot more dimensions, but wouldn't need to plot the boundary;\n # simply checking the results would suffice.\n #\n # The model should only be trained (fit) against the training data (data_train)\n # Once we've done this, we use the model to transform both data_train\n # and data_test from their original high-D image feature space, down to 2D\n # Finally, storing the results back into\n # data_train and data_test.\n \n pca_reducer = PCA(n_components=2).fit(data_train)\n\n data_train = pca_reducer.transform(data_train)\n data_test = pca_reducer.transform(data_test)\n \nelse:\n \n # INFO: Isomap is used *before* KNeighbors to simplify the high dimensionality\n # image samples down to just 2 components! A lot of information has been is\n # lost during the process, as I'm sure you can imagine. But if you have\n # non-linear data that can be represented on a 2D manifold, you probably will\n # be left with a far superior dataset to use for classification. Plus by\n # having the images in 2D space, you can plot them as well as visualize a 2D\n # decision surface / boundary. 
In the wild, you'd probably leave in a lot\n # more dimensions, but wouldn't need to plot the boundary; simply checking\n # the results would suffice.\n #\n # The model should only be trained (fit) against the training data (data_train)\n # Once done this, we use the model to transform both data_train\n # and data_test from their original high-D image feature space, down to 2D\n # storing the results back into\n # data_train, and data_test.\n \n iso_reducer = manifold.Isomap(n_neighbors=5, n_components=2).fit(data_train)\n data_train = iso_reducer.transform(data_train)\n data_test = iso_reducer.transform(data_test)\n\n\n#\n# Implement KNeighborsClassifier. \n#\nknn = KNeighborsClassifier(n_neighbors=5)\nknn.fit(data_train, label_train) \n\n#\n# Calculate + Print the accuracy of the testing set (data_test and\n# label_test).\n#\n#\nprint(knn.score(data_test, label_test))\n\n# isomap: 0.961904761905\n# pca: 0.571428571429\n\nmatplotlib.style.use('ggplot') # Look Pretty\n\n\ndef Plot2DBoundary(model, DTrain, LTrain, DTest, LTest):\n # The dots are training samples (img not drawn), and the pics are testing samples (images drawn)\n # Play around with the K values. This is very controlled dataset so it \n # should be able to get perfect classification on testing entries\n\n fig = plt.figure(figsize=(9,8))\n ax = fig.add_subplot(111)\n ax.set_title('Transformed Boundary, Image Space -> 2D')\n\n padding = 0.1 # Zoom out\n resolution = 1 # Don't get too detailed; smaller values (finer rez) will take longer to compute\n colors = ['blue','green','orange','red']\n \n\n # ------\n\n # Calculate the boundaries of the mesh grid. The mesh grid is\n # a standard grid (think graph paper), where each point will be\n # sent to the classifier (KNeighbors) to predict what class it\n # belongs to. This is why KNeighbors has to be trained against\n # 2D data, so we can produce this countour. Once we have the \n # label for each point on the grid, we can color it appropriately\n # and plot it.\n x_min, x_max = DTrain[:, 0].min(), DTrain[:, 0].max()\n y_min, y_max = DTrain[:, 1].min(), DTrain[:, 1].max()\n x_range = x_max - x_min\n y_range = y_max - y_min\n x_min -= x_range * padding\n y_min -= y_range * padding\n x_max += x_range * padding\n y_max += y_range * padding\n\n # Using the boundaries, actually make the 2D Grid Matrix:\n xx, yy = np.meshgrid(np.arange(x_min, x_max, resolution),\n np.arange(y_min, y_max, resolution))\n\n # What class does the classifier say about each spot on the chart?\n # The values stored in the matrix are the predictions of the model\n # at said location:\n Z = model.predict(np.c_[xx.ravel(), yy.ravel()])\n Z = Z.reshape(xx.shape)\n\n # Plot the mesh grid as a filled contour plot:\n ax.contourf(xx, yy, Z, cmap=plt.cm.terrain)\n\n\n # ------\n\n # When plotting the testing images, used to validate if the algorithm\n # is functioning correctly, size them as 5% of the overall chart size\n x_size = x_range * 0.05\n y_size = y_range * 0.05\n \n # First, plot the images in the TEST dataset\n img_num = 0\n for index in LTest.index:\n # DTest is a regular NDArray, so you'll iterate over that 1 at a time.\n x0, y0 = DTest[img_num,0]-x_size/2., DTest[img_num,1]-y_size/2.\n x1, y1 = DTest[img_num,0]+x_size/2., DTest[img_num,1]+y_size/2.\n\n # DTest = our images isomap-transformed into 2D. 
But we still want\n # to plot the original image, so we look to the original, untouched\n # dataset (at index) to get the pixels:\n img = df.iloc[index,:].values.reshape(num_pixels, num_pixels)\n ax.imshow(img, aspect='auto', cmap=plt.cm.gray, interpolation='nearest', zorder=100000, extent=(x0, x1, y0, y1), alpha=0.8)\n img_num += 1\n\n\n # Plot the TRAINING points as well... as points rather than as images\n for label in range(len(np.unique(LTrain))):\n indices = np.where(LTrain == label)\n ax.scatter(DTrain[indices, 0], DTrain[indices, 1], c=colors[label], alpha=0.8, marker='o')\n \n\n# Chart the combined decision boundary, the training data as 2D plots, and\n# the testing data as small images so we can visually validate performance.\nPlot2DBoundary(knn, data_train, label_train, data_test, label_test)\n", "That's all for KNN!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ceteri/pytextrank
explain_summ.ipynb
apache-2.0
[ "Explain PyTextRank: extractive summarization\nHow does PyTextRank perform extractive summarization on a text document?\n\nFirst we perform some basic housekeeping for Jupyter, then load spaCy with a language model for English ...", "import warnings\nwarnings.filterwarnings(\"ignore\")\n\nimport spacy\nnlp = spacy.load(\"en_core_web_sm\")", "Create some text to use....", "text = \"Compatibility of systems of linear constraints over the set of natural numbers. Criteria of compatibility of a system of linear Diophantine equations, strict inequations, and nonstrict inequations are considered. Upper bounds for components of a minimal set of solutions and algorithms of construction of minimal generating sets of solutions for all types of systems are given. These criteria and the corresponding algorithms for constructing a minimal supporting set of solutions can be used in solving all the considered types systems and systems of mixed types.\"", "Then add PyTextRank into the spaCy pipeline...", "import pytextrank\n\ntr = pytextrank.TextRank()\nnlp.add_pipe(tr.PipelineComponent, name=\"textrank\", last=True)\n\ndoc = nlp(text)", "Examine the results: a list of top-ranked phrases in the document", "for p in doc._.phrases:\n print(\"{:.4f} {:5d} {}\".format(p.rank, p.count, p.text))\n print(p.chunks)", "Construct a list of the sentence boundaries with a phrase vector (initialized to empty set) for each...", "sent_bounds = [ [s.start, s.end, set([])] for s in doc.sents ]\nsent_bounds", "Iterate through the top-ranked phrases, added them to the phrase vector for each sentence...", "limit_phrases = 4\n\nphrase_id = 0\nunit_vector = []\n\nfor p in doc._.phrases:\n print(phrase_id, p.text, p.rank)\n \n unit_vector.append(p.rank)\n \n for chunk in p.chunks:\n print(\" \", chunk.start, chunk.end)\n \n for sent_start, sent_end, sent_vector in sent_bounds:\n if chunk.start >= sent_start and chunk.start <= sent_end:\n print(\" \", sent_start, chunk.start, chunk.end, sent_end)\n sent_vector.add(phrase_id)\n break\n\n phrase_id += 1\n\n if phrase_id == limit_phrases:\n break", "Let's take a look at the results...", "sent_bounds\n\nfor sent in doc.sents:\n print(sent)", "We also construct a unit_vector for all of the phrases, up to the limit requested...", "unit_vector\n\nsum_ranks = sum(unit_vector)\nunit_vector = [ rank/sum_ranks for rank in unit_vector ]\n\nunit_vector", "Iterate through each sentence, calculating its euclidean distance from the unit vector...", "from math import sqrt\n\nsent_rank = {}\nsent_id = 0\n\nfor sent_start, sent_end, sent_vector in sent_bounds:\n print(sent_vector)\n sum_sq = 0.0\n \n for phrase_id in range(len(unit_vector)):\n print(phrase_id, unit_vector[phrase_id])\n \n if phrase_id not in sent_vector:\n sum_sq += unit_vector[phrase_id]**2.0\n\n sent_rank[sent_id] = sqrt(sum_sq)\n sent_id += 1\n\nprint(sent_rank)", "Sort the sentence indexes in descending order", "from operator import itemgetter\n\nsorted(sent_rank.items(), key=itemgetter(1)) ", "Extract the sentences with the lowest distance, up to the limite requested...", "limit_sentences = 2\n\nsent_text = {}\nsent_id = 0\n\nfor sent in doc.sents:\n sent_text[sent_id] = sent.text\n sent_id += 1\n\nnum_sent = 0\n\nfor sent_id, rank in sorted(sent_rank.items(), key=itemgetter(1)):\n print(sent_id, sent_text[sent_id])\n num_sent += 1\n \n if num_sent == limit_sentences:\n break" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/docs-l10n
site/ko/addons/tutorials/layers_normalizations.ipynb
apache-2.0
[ "Copyright 2020 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "정규화\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td><a target=\"_blank\" href=\"https://www.tensorflow.org/addons/tutorials/layers_normalizations\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\">TensorFlow.org에서 보기</a></td>\n <td><a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/addons/tutorials/layers_normalizations.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\">Google Colab에서 실행하기</a></td>\n <td><a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/ko/addons/tutorials/layers_normalizations.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\">GitHub에서 소스 보기</a></td>\n <td><a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/addons/tutorials/layers_normalizations.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\">노트북 다운로드하기</a></td>\n</table>\n\n개요\n이 노트북은 TensorFlow의 정규화 레이어에 대한 간략한 소개를 제공합니다. 현재 지원되는 레이어는 다음과 같습니다.\n\n그룹 정규화(TensorFlow Addons)\n인스턴스 정규화(TensorFlow Addons)\n레이어 정규화(TensorFlow Core)\n\n레이어의 기본 아이디어는 활성 레이어의 출력을 정규화하여 훈련 중 수렴을 개선하는 것입니다. 배치 정규화와 달리, 이러한 정규화는 배치에서 동작하지 않고 대신 단일 샘플의 활성화를 정규화하여 순환 신경망에도 적합합니다.\n일반적으로 정규화는 입력 텐서에서 하위 그룹의 평균과 표준 편차를 계산하여 수행됩니다. 여기에 스케일과 오프셋 인자를 적용하는 것도 가능합니다.\n$y_{i} = \\frac{\\gamma ( x_{i} - \\mu )}{\\sigma }+ \\beta$\n$ y$ : 출력\n$x$ : 입력\n$\\gamma$ : 스케일 인자\n$\\mu$: 평균\n$\\sigma$: 표준 편차\n$\\beta$: 오프셋 인자\n다음 이미지는 기술 간의 차이점을 보여줍니다. 각 하위 플롯은 N이 배치 축, C가 채널 축, (H, W)가 공간 축(예: 그림의 높이 및 너비)인 입력 텐서를 보여줍니다. 파란색 픽셀은 이들 픽셀의 값을 집계하여 계산된, 동일한 평균 및 분산으로 정규화됩니다.\n\n출처: (https://arxiv.org/pdf/1803.08494.pdf)\n가중치 감마 및 베타는 모든 정규화 레이어에서 훈련 가능하며 표현 능력의 손실 가능성을 보상합니다. center 또는 scale 플래그를 True로 설정하여 이들 요소를 활성화할 수 있습니다. 물론 beta 및 gamma에 initializers, constraints 및 regularizer를 사용하여 훈련 프로세스 중에 이들 값을 조정할 수 있습니다. \n설정\nTensorflow 2.0 및 Tensorflow-Addons 설치", "!pip install -U tensorflow-addons\n\nimport tensorflow as tf\nimport tensorflow_addons as tfa", "데이터세트 준비하기", "mnist = tf.keras.datasets.mnist\n\n(x_train, y_train),(x_test, y_test) = mnist.load_data()\nx_train, x_test = x_train / 255.0, x_test / 255.0", "그룹 정규화 튜토리얼\n소개\n그룹 정규화(GN)는 입력 채널을 더 작은 하위 그룹으로 나누고 평균과 분산을 기반으로 값을 정규화합니다. GN은 단일 예제에서 동작하므로 이 기술은 배치 크기와 독립적입니다.\nGN은 실험적으로 이미지 분류 작업에서 배치 정규화와 비슷한 점수를 기록했습니다. 
전체 batch_size가 낮은 경우 이때 배치 정규화의 성능이 저하될 수 있으며, 배치 정규화 대신 GN을 사용하는 것이 유용할 수 있습니다.\n표준 \"channels last\" 설정에서 Conv2D 레이어 이후 10개의 채널을 5개의 하위 그룹으로 분할하는 예제", "model = tf.keras.models.Sequential([\n # Reshape into \"channels last\" setup.\n tf.keras.layers.Reshape((28,28,1), input_shape=(28,28)),\n tf.keras.layers.Conv2D(filters=10, kernel_size=(3,3),data_format=\"channels_last\"),\n # Groupnorm Layer\n tfa.layers.GroupNormalization(groups=5, axis=3),\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(128, activation='relu'),\n tf.keras.layers.Dropout(0.2),\n tf.keras.layers.Dense(10, activation='softmax')\n])\n\nmodel.compile(optimizer='adam',\n loss='sparse_categorical_crossentropy',\n metrics=['accuracy'])\nmodel.fit(x_test, y_test)", "인스턴스 정규화 튜토리얼\n소개\n인스턴스 정규화는 그룹 크기가 채널 크기(또는 축 크기)와 같은 그룹 정규화의 특수한 경우입니다.\n실험 결과는 배치 정규화를 대체할 때 인스턴스 정규화가 스타일 전송에서 잘 수행됨을 보여줍니다. 최근에는 인스턴스 정규화가 GAN에서 배치 정규화를 대체하는 용도로도 사용되었습니다.\n예제\nConv2D 레이어 다음에 InstanceNormalization을 적용하고 균일하게 초기화된 스케일 및 오프셋 인자를 사용합니다.", "model = tf.keras.models.Sequential([\n # Reshape into \"channels last\" setup.\n tf.keras.layers.Reshape((28,28,1), input_shape=(28,28)),\n tf.keras.layers.Conv2D(filters=10, kernel_size=(3,3),data_format=\"channels_last\"),\n # LayerNorm Layer\n tfa.layers.InstanceNormalization(axis=3, \n center=True, \n scale=True,\n beta_initializer=\"random_uniform\",\n gamma_initializer=\"random_uniform\"),\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(128, activation='relu'),\n tf.keras.layers.Dropout(0.2),\n tf.keras.layers.Dense(10, activation='softmax')\n])\n\nmodel.compile(optimizer='adam',\n loss='sparse_categorical_crossentropy',\n metrics=['accuracy'])\nmodel.fit(x_test, y_test)", "레이어 정규화 튜토리얼\n소개\n레이어 정규화는 그룹 크기가 1인 그룹 정규화의 특수한 경우입니다. 평균과 표준 편차는 단일 샘플의 모든 활성화에서 계산됩니다.\n실험 결과는 레이어 정규화가 배치 크기와는 독립적으로 동작하기 때문에 순환 신경망에 적합하다는 것을 보여줍니다.\nExample\nConv2D 레이어 다음에 레이어 정규화를 적용하고 스케일 및 오프셋 인자를 사용합니다.", "model = tf.keras.models.Sequential([\n # Reshape into \"channels last\" setup.\n tf.keras.layers.Reshape((28,28,1), input_shape=(28,28)),\n tf.keras.layers.Conv2D(filters=10, kernel_size=(3,3),data_format=\"channels_last\"),\n # LayerNorm Layer\n tf.keras.layers.LayerNormalization(axis=3 , center=True , scale=True),\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(128, activation='relu'),\n tf.keras.layers.Dropout(0.2),\n tf.keras.layers.Dense(10, activation='softmax')\n])\n\nmodel.compile(optimizer='adam',\n loss='sparse_categorical_crossentropy',\n metrics=['accuracy'])\nmodel.fit(x_test, y_test)", "문헌\nLayer norm\nInstance norm\nGroup Norm" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Hasil-Sharma/Neural-Networks-CS231n
assignment1/features.ipynb
gpl-3.0
[ "Image features exercise\nComplete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.\nWe have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.\nAll of your work for this exercise will be done in this notebook.", "import random\nimport numpy as np\nfrom cs231n.data_utils import load_CIFAR10\nimport matplotlib.pyplot as plt\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading extenrnal modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2", "Load data\nSimilar to previous exercises, we will load CIFAR-10 data from disk.", "from cs231n.features import color_histogram_hsv, hog_feature\n\ndef get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):\n # Load the raw CIFAR-10 data\n cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'\n X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)\n \n # Subsample the data\n mask = range(num_training, num_training + num_validation)\n X_val = X_train[mask]\n y_val = y_train[mask]\n mask = range(num_training)\n X_train = X_train[mask]\n y_train = y_train[mask]\n mask = range(num_test)\n X_test = X_test[mask]\n y_test = y_test[mask]\n\n return X_train, y_train, X_val, y_val, X_test, y_test\n\nX_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()", "Extract Features\nFor each image we will compute a Histogram of Oriented\nGradients (HOG) as well as a color histogram using the hue channel in HSV\ncolor space. We form our final feature vector for each image by concatenating\nthe HOG and color histogram feature vectors.\nRoughly speaking, HOG should capture the texture of the image while ignoring\ncolor information, and the color histogram represents the color of the input\nimage while ignoring texture. As a result, we expect that using both together\nought to work better than using either alone. Verifying this assumption would\nbe a good thing to try for the bonus section.\nThe hog_feature and color_histogram_hsv functions both operate on a single\nimage and return a feature vector for that image. The extract_features\nfunction takes a set of images and a list of feature functions and evaluates\neach feature function on each image, storing the results in a matrix where\neach column is the concatenation of all feature vectors for a single image.", "from cs231n.features import *\n\nnum_color_bins = 10 # Number of bins in the color histogram\nfeature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]\nX_train_feats = extract_features(X_train, feature_fns, verbose=True)\nX_val_feats = extract_features(X_val, feature_fns)\nX_test_feats = extract_features(X_test, feature_fns)\n\n# Preprocessing: Subtract the mean feature\nmean_feat = np.mean(X_train_feats, axis=0, keepdims=True)\nX_train_feats -= mean_feat\nX_val_feats -= mean_feat\nX_test_feats -= mean_feat\n\n# Preprocessing: Divide by standard deviation. 
This ensures that each feature\n# has roughly the same scale.\nstd_feat = np.std(X_train_feats, axis=0, keepdims=True)\nX_train_feats /= std_feat\nX_val_feats /= std_feat\nX_test_feats /= std_feat\n\n# Preprocessing: Add a bias dimension\nX_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])\nX_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])\nX_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])", "Train SVM on features\nUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.\n|Number of Bins|Validation Accuracy|Learning Rate|Regularization Strength|Test Accuracy|\n|------|------|--|\n|10|0.426000|8.000000e-07|5.000000e+04||\n|50|0.440000|8.000000e-07|5.000000e+04|0.428|\n|50|0.441000|3.000000e-07|1.000000e+05|0.428|\n|100|0.440000|2.000000e-07|8.000000e+04|0.414|\n|150|0.428000|8.000000e-07|2.000000e+04|0.388|\nlr 3.000000e-07 reg 1.000000e+05 train accuracy: 0.426041 val accuracy: 0.441000", "# Use the validation set to tune the learning rate and regularization strength\n\nfrom cs231n.classifiers.linear_classifier import LinearSVM\n\nlearning_rates = [1e-7, 2e-7, 3e-7, 5e-5, 8e-7]\nregularization_strengths = [1e4, 2e4, 3e4, 4e4, 5e4, 6e4, 7e4, 8e4, 7e5]\n\nresults = {}\nbest_val = -1\nbest_svm = None\n\n################################################################################\n# TODO: #\n# Use the validation set to set the learning rate and regularization strength. #\n# This should be identical to the validation that you did for the SVM; save #\n# the best trained classifer in best_svm. You might also want to play #\n# with different numbers of bins in the color histogram. If you are careful #\n# you should be able to get accuracy of near 0.44 on the validation set. #\n################################################################################\nfor lr in learning_rates:\n for rs in regularization_strengths:\n svm = LinearSVM()\n svm.train(X_train_feats, y_train, learning_rate = lr, reg = rs, num_iters = 2000)\n train_accuracy = np.mean(y_train == svm.predict(X_train_feats))\n val_accuracy = np.mean(y_val == svm.predict(X_val_feats))\n results[(lr, rs)] = (train_accuracy, val_accuracy)\n if val_accuracy > best_val:\n best_val = val_accuracy\n best_svm = svm\n################################################################################\n# END OF YOUR CODE #\n################################################################################\n\n# Print out results.\nfor lr, reg in sorted(results):\n train_accuracy, val_accuracy = results[(lr, reg)]\n print 'lr %e reg %e train accuracy: %f val accuracy: %f' % (\n lr, reg, train_accuracy, val_accuracy)\n \nprint 'best validation accuracy achieved during cross-validation: %f' % best_val\n\n# Evaluate your trained SVM on the test set\ny_test_pred = best_svm.predict(X_test_feats)\ntest_accuracy = np.mean(y_test == y_test_pred)\nprint test_accuracy\n\n# An important way to gain intuition about how an algorithm works is to\n# visualize the mistakes that it makes. In this visualization, we show examples\n# of images that are misclassified by our current system. 
The first column\n# shows images that our system labeled as \"plane\" but whose true label is\n# something other than \"plane\".\n\nexamples_per_class = 8\nclasses = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']\nfor cls, cls_name in enumerate(classes):\n idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]\n idxs = np.random.choice(idxs, examples_per_class, replace=False)\n for i, idx in enumerate(idxs):\n plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)\n plt.imshow(X_test[idx].astype('uint8'))\n plt.axis('off')\n if i == 0:\n plt.title(cls_name)\nplt.show()", "Inline question 1:\nDescribe the misclassification results that you see. Do they make sense?\nNeural Network on image features\nEarlier in this assigment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. \nFor completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.", "print X_train_feats.shape", "| Learning Rate| Regularization Rate | Validation Accuracy | Test Accuracy |\n| --- | --- | --- | --- |\n| 0.1 | 0.0001 | 0.544 | 0.534 |\n| 0.1 | 0.000215443469003 | 0.544 | 0.538 |\n| 0.1 | 0.000464158883361 | 0.542 | 0.534 |\n| 0.1 | 0.001 | 0.537 | 0.535 |\n| 0.1 | 0.00215443469003 | 0.536 | 0.533 |\n| 0.1 | 0.00464158883361 | 0.529 | 0.533 |\n| 0.1 | 0.01 | 0.524 | 0.522 |\n| 0.1 | 0.0215443469003 | 0.508 | 0.508 |\n| 0.1 | 0.0464158883361 | 0.51 | 0.489 |\n| 0.1 | 0.1 | 0.434 | 0.446 |\n| 0.215443469003 | 0.0001 | 0.594 | 0.58 |\n| 0.215443469003 | 0.000215443469003 | 0.604 | 0.578 |\n| 0.215443469003 | 0.000464158883361 | 0.601 | 0.58 |\n| 0.215443469003 | 0.001 | 0.593 | 0.586 |\n| 0.215443469003 | 0.00215443469003 | 0.597 | 0.569 |\n| 0.215443469003 | 0.00464158883361 | 0.579 | 0.56 |\n| 0.215443469003 | 0.01 | 0.554 | 0.539 |\n| 0.215443469003 | 0.0215443469003 | 0.515 | 0.517 |\n| 0.215443469003 | 0.0464158883361 | 0.508 | 0.491 |\n| 0.215443469003 | 0.1 | 0.441 | 0.446 |\n| 0.464158883361 | 0.0001 | 0.595 | 0.599 |\n| 0.464158883361 | 0.000215443469003 | 0.601 | 0.597 |\n| 0.464158883361 | 0.000464158883361 | 0.594 | 0.6 |\n| 0.464158883361 | 0.001 | 0.616 | 0.596 |\n| 0.464158883361 | 0.00215443469003 | 0.609 | 0.601 |\n| 0.464158883361 | 0.00464158883361 | 0.603 | 0.575 |\n| 0.464158883361 | 0.01 | 0.573 | 0.551 |\n| 0.464158883361 | 0.0215443469003 | 0.525 | 0.517 |\n| 0.464158883361 | 0.0464158883361 | 0.502 | 0.503 |\n| 0.464158883361 | 0.1 | 0.44 | 0.447 |\n| 1.0 | 0.0001 | 0.568 | 0.566 |\n| 1.0 | 0.000215443469003 | 0.588 | 0.589 |\n| 1.0 | 0.000464158883361 | 0.591 | 0.571 |\n| 1.0 | 0.001 | 0.61 | 0.587 |\n| 1.0 | 0.00215443469003 | 0.614 | 0.603 |\n| 1.0 | 0.00464158883361 | 0.62 | 0.587 |\n| 1.0 | 0.01 | 0.574 | 0.557 |\n| 1.0 | 0.0215443469003 | 0.521 | 0.517 |\n| 1.0 | 0.0464158883361 | 0.498 | 0.492 |\n| 1.0 | 0.1 | 0.433 | 0.441 |\n| 2.15443469003 | 0.0001 | 0.547 | 0.559 |\n| 2.15443469003 | 0.000215443469003 | 0.571 | 0.564 |\n| 2.15443469003 | 0.000464158883361 | 0.563 | 0.578 |\n| 2.15443469003 | 0.001 | 0.6 | 0.592 |\n| 2.15443469003 | 0.00215443469003 | 0.615 | 0.613 |\n| 2.15443469003 | 0.00464158883361 | 0.611 
| 0.6 |\n| 2.15443469003 | 0.01 | 0.578 | 0.558 |\n| 2.15443469003 | 0.0215443469003 | 0.525 | 0.511 |\n| 2.15443469003 | 0.0464158883361 | 0.491 | 0.485 |\n| 2.15443469003 | 0.1 | 0.449 | 0.454 |\n| 4.64158883361 | 0.0001 | 0.087 | 0.103 |\n| 4.64158883361 | 0.000215443469003 | 0.087 | 0.103 |\n| 4.64158883361 | 0.000464158883361 | 0.087 | 0.103 |\n| 4.64158883361 | 0.001 | 0.087 | 0.103 |\n| 4.64158883361 | 0.00215443469003 | 0.087 | 0.103 |\n| 4.64158883361 | 0.00464158883361 | 0.087 | 0.103 |\n| 4.64158883361 | 0.01 | 0.087 | 0.103 |\n| 4.64158883361 | 0.0215443469003 | 0.087 | 0.103 |\n| 4.64158883361 | 0.0464158883361 | 0.087 | 0.103 |\n| 4.64158883361 | 0.1 | 0.087 | 0.103 |\n| 10.0 | 0.0001 | 0.087 | 0.103 |\n| 10.0 | 0.000215443469003 | 0.087 | 0.103 |\n| 10.0 | 0.000464158883361 | 0.087 | 0.103 |\n| 10.0 | 0.001 | 0.087 | 0.103 |\n| 10.0 | 0.00215443469003 | 0.087 | 0.103 |\n| 10.0 | 0.00464158883361 | 0.087 | 0.103 |\n| 10.0 | 0.01 | 0.087 | 0.103 |\n| 10.0 | 0.0215443469003 | 0.087 | 0.103 |\n| 10.0 | 0.0464158883361 | 0.087 | 0.103 |\n| 10.0 | 0.1 | 0.087 | 0.103 |\n| 21.5443469003 | 0.0001 | 0.087 | 0.103 |\n| 21.5443469003 | 0.000215443469003 | 0.087 | 0.103 |\n| 21.5443469003 | 0.000464158883361 | 0.087 | 0.103 |\n| 21.5443469003 | 0.001 | 0.087 | 0.103 |\n| 21.5443469003 | 0.00215443469003 | 0.087 | 0.103 |\n| 21.5443469003 | 0.00464158883361 | 0.087 | 0.103 |\n| 21.5443469003 | 0.01 | 0.087 | 0.103 |\n| 21.5443469003 | 0.0215443469003 | 0.087 | 0.103 |\n| 21.5443469003 | 0.0464158883361 | 0.087 | 0.103 |\n| 21.5443469003 | 0.1 | 0.087 | 0.103 |\n| 46.4158883361 | 0.0001 | 0.087 | 0.103 |\n| 46.4158883361 | 0.000215443469003 | 0.087 | 0.103 |\n| 46.4158883361 | 0.000464158883361 | 0.087 | 0.103 |\n| 46.4158883361 | 0.001 | 0.087 | 0.103 |\n| 46.4158883361 | 0.00215443469003 | 0.087 | 0.103 |\n| 46.4158883361 | 0.00464158883361 | 0.087 | 0.103 |\n| 46.4158883361 | 0.01 | 0.087 | 0.103 |\n| 46.4158883361 | 0.0215443469003 | 0.087 | 0.103 |\n| 46.4158883361 | 0.0464158883361 | 0.087 | 0.103 |\n| 46.4158883361 | 0.1 | 0.087 | 0.103 |\n| 100.0 | 0.0001 | 0.087 | 0.103 |\n| 100.0 | 0.000215443469003 | 0.087 | 0.103 |\n| 100.0 | 0.000464158883361 | 0.087 | 0.103 |\n| 100.0 | 0.001 | 0.087 | 0.103 |\n| 100.0 | 0.00215443469003 | 0.087 | 0.103 |\n| 100.0 | 0.00464158883361 | 0.087 | 0.103 |\n| 100.0 | 0.01 | 0.087 | 0.103 |\n| 100.0 | 0.0215443469003 | 0.087 | 0.103 |\n| 100.0 | 0.0464158883361 | 0.087 | 0.103 |\n| 100.0 | 0.1 | 0.087 | 0.103 |", "from cs231n.classifiers.neural_net import TwoLayerNet\n\ninput_dim = X_train_feats.shape[1]\nhidden_dim = 500\nnum_classes = 10\n\nbest_net = None\nbest_val_acc = 0.0\nbest_hidden_size = None\nbest_learning_rate = None\nbest_regularization_strength = None\n################################################################################\n# TODO: Train a two-layer neural network on image features. You may want to #\n# cross-validate various parameters as in previous sections. Store your best #\n# model in the best_net variable. 
#\n################################################################################\nlearning_rates = np.logspace(-1, 2, 10)\nregularization_strengths = np.logspace(-4, -1, 10)\n\nprint '| Learning Rate| Regularization Rate | Validation Accuracy | Test Accuracy |'\nprint '| --- | --- | --- | --- |'\nfor learning_rate in learning_rates:\n for regularization_strength in regularization_strengths:\n net = TwoLayerNet(input_dim, hidden_dim, num_classes)\n\n # Train the network\n stats = net.train(X_train_feats, y_train, X_val_feats, y_val,\n num_iters=5000, batch_size=500,\n learning_rate=learning_rate, learning_rate_decay=0.95,\n reg=regularization_strength, verbose=False)\n\n # Predict on the validation set\n val_acc = (net.predict(X_val_feats) == y_val).mean()\n test_acc = (net.predict(X_test_feats) == y_test).mean()\n if best_val_acc < val_acc:\n best_val_acc = val_acc\n best_net = net\n best_learning_rate = learning_rate\n best_regularization_strength = regularization_strength\n print '|', learning_rate, '|', regularization_strength,'|', val_acc,'|',test_acc, '|'\n################################################################################\n# END OF YOUR CODE #\n################################################################################\n\n# Run your neural net classifier on the test set. You should be able to\n# get more than 55% accuracy.\n\ntest_acc = (best_net.predict(X_test_feats) == y_test).mean()\nprint test_acc", "Bonus: Design your own features!\nYou have seen that simple image features can improve classification performance. So far we have tried HOG and color histograms, but other types of features may be able to achieve even better classification performance.\nFor bonus points, design and implement a new type of feature and use it for image classification on CIFAR-10. Explain how your feature works and why you expect it to be useful for image classification. Implement it in this notebook, cross-validate any hyperparameters, and compare its performance to the HOG + Color histogram baseline.\nBonus: Do something extra!\nUse the material and code we have presented in this assignment to do something interesting. Was there another question we should have asked? Did any cool ideas pop into your head as you were working on the assignment? This is your chance to show off!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tjwei/HackNTU_Data_2017
Week05/From NumPy to Logistic Regression.ipynb
mit
[ "起手式,導入 numpy, matplotlib", "from PIL import Image\nimport numpy as np\n%matplotlib inline\nimport matplotlib\nimport matplotlib.pyplot as plt\nmatplotlib.style.use('bmh')\nmatplotlib.rcParams['figure.figsize']=(8,5)", "使用之前下載的 mnist 資料,載入訓練資料 train_set 和測試資料 test_set", "import gzip\nimport pickle\nwith gzip.open('../Week02/mnist.pkl.gz', 'rb') as f:\n train_set, validation_set, test_set = pickle.load(f, encoding='latin1')\n \ntrain_X, train_y = train_set\nvalidation_X, validation_y = validation_set\ntest_X, test_y = test_set", "之前的看圖片函數", "from IPython.display import display\ndef showX(X):\n int_X = (X*255).clip(0,255).astype('uint8')\n # N*784 -> N*28*28 -> 28*N*28 -> 28 * 28N\n int_X_reshape = int_X.reshape(-1,28,28).swapaxes(0,1).reshape(28,-1)\n display(Image.fromarray(int_X_reshape))\n# 訓練資料, X 的前 20 筆\nshowX(train_X[:20])", "train_set 是用來訓練我們的模型用的\n我們的模型是很簡單的 logistic regression 模型,用到的參數只有一個 784x10 的矩陣 W 和一個長度 10 的向量 b。\n我們先用均勻隨機亂數來設定 W 和 b 。", "W = np.random.uniform(low=-1, high=1, size=(28*28,10))\nb = np.random.uniform(low=-1, high=1, size=10)\n", "完整的模型如下\n將圖片看成是長度 784 的向量 x\n計算 $Wx+b$, 然後再取 $exp$。 最後得到的十個數值。將這些數值除以他們的總和。\n我們希望出來的數字會符合這張圖片是這個數字的機率。\n$ \\Pr(Y=i|x, W, b) = \\frac {e^{W_i x + b_i}} {\\sum_j e^{W_j x + b_j}}$\n先拿第一筆資料試試看, x 是輸入。 y 是這張圖片對應到的數字(以這個例子來說 y=5)。", "x = train_X[0]\ny = train_y[0]\nshowX(x)\ny", "先計算 $e^{Wx+b} $", "Pr = np.exp(x @ W + b)\nPr.shape", "然後 normalize,讓總和變成 1 (符合機率的意義)", "Pr = Pr/Pr.sum()\nPr", "由於 $W$ 和 $b$ 都是隨機設定的,所以上面我們算出的機率也是隨機的。\n正確解是 $y=5$, 運氣好有可能猜中\n為了要評斷我們的預測的品質,要設計一個評斷誤差的方式,我們用的方法如下(不是常見的方差,而是用熵的方式來算,好處是容易微分,效果好)\n$ loss = - \\log(\\Pr(Y=y|x, W,b)) $\n上述的誤差評分方式,常常稱作 error 或者 loss,數學式可能有點費解。實際計算其實很簡單,就是下面的式子", "loss = -np.log(Pr[y])\nloss", "想辦法改進。\n我們用一種被稱作是 gradient descent 的方式來改善我們的誤差。\n因為我們知道 gradient 是讓函數上升最快的方向。所以我們如果朝 gradient 的反方向走一點點(也就是下降最快的方向),那麼得到的函數值應該會小一點。\n記得我們的變數是 $W$ 和 $b$ (裡面總共有 28*20+10 個變數),所以我們要把 $loss$ 對 $W$ 和 $b$ 裡面的每一個參數來偏微分。\n還好這個偏微分是可以用手算出他的形式,而最後偏微分的式子也不會很複雜。\n$loss$ 展開後可以寫成\n$loss = \\log(\\sum_j e^{W_j x + b_j}) - W_i x - b_i$\n對 $k \\neq i$ 時, $loss$ 對 $b_k$ 的偏微分是 \n $$ \\frac{e^{W_k x + b_k}}{\\sum_j e^{W_j x + b_j}} = \\Pr(Y=k | x, W, b)$$\n對 $k = i$ 時, $loss$ 對 $b_k$ 的偏微分是 \n$$ \\Pr(Y=k | x, W, b) - 1$$", "gradb = Pr.copy()\ngradb[y] -= 1\nprint(gradb)", "對 $W$ 的偏微分也不難\n對 $k \\neq i$ 時, $loss$ 對 $W_{k,t}$ 的偏微分是 \n $$ \\frac{e^{W_k x + b_k} W_{k,t} x_t}{\\sum_j e^{W_j x + b_j}} = \\Pr(Y=k | x, W, b) x_t$$\n對 $k = i$ 時, $loss$ 對 $W_{k,t}$ 的偏微分是 \n$$ \\Pr(Y=k | x, W, b) x_t - x_t$$", "print(Pr.shape, x.shape, W.shape)\ngradW = x.reshape(784,1) @ Pr.reshape(1,10)\ngradW[:, y] -= x", "算好 gradient 後,讓 W 和 b 分別往 gradient 反方向走一點點,得到新的 W 和 b", "W -= 0.1 * gradW\nb -= 0.1 * gradb", "再一次計算 $\\Pr$ 以及 $loss$", "Pr = np.exp(x @ W + b)\nPr = Pr/Pr.sum()\nloss = -np.log(Pr[y])\nloss", "Q\n\n看看 Pr , 然後找出機率最大者, predict y 值\n再跑一遍上面程序,看看誤差是否變小?\n拿其他的測試資料來看看,我們的 W, b 學到了什麼?\n\n我們將同樣的方式輪流對五萬筆訓練資料來做,看看情形會如何", "W = np.random.uniform(low=-1, high=1, size=(28*28,10))\nb = np.random.uniform(low=-1, high=1, size=10)\nscore = 0\nN=50000*20\nd = 0.001\nlearning_rate = 1e-2\nfor i in range(N):\n if i%50000==0:\n print(i, \"%5.3f%%\"%(score*100))\n x = train_X[i%50000]\n y = train_y[i%50000]\n Pr = np.exp( x @ W +b)\n Pr = Pr/Pr.sum()\n loss = -np.log(Pr[y])\n score *=(1-d)\n if Pr.argmax() == y:\n score += d\n gradb = Pr.copy()\n gradb[y] -= 1\n gradW = x.reshape(784,1) @ Pr.reshape(1,10)\n gradW[:, y] -= x\n W -= learning_rate * gradW\n b -= learning_rate * gradb\n ", "結果發現正確率大約是 92%, 
但這是對訓練資料而不是對測試資料\n而且,一筆一筆的訓練資也有點慢,線性代數的特點就是能夠向量運算。如果把很多筆 $x$ 當成列向量組合成一個矩陣(然後叫做 $X$),由於矩陣乘法的原理,我們還是一樣計算 $WX+b$ , 就可以同時得到多筆結果。\n下面的函數,可以一次輸入多筆 $x$, 同時一次計算多筆 $x$ 的結果和準確率。", "def compute_Pr(X):\n Pr = np.exp(X @ W + b)\n return Pr/Pr.sum(axis=1, keepdims=True)\ndef compute_accuracy(Pr, y):\n return (Pr.argmax(axis=1)==y).mean()", "下面是更新過得訓練過程, 當 i%100000 時,順便計算一下 test accuracy 和 valid accuracy。", "%%timeit -r 1 -n 1\ndef compute_Pr(X):\n Pr = np.exp(X @ W + b)\n return Pr/Pr.sum(axis=1, keepdims=True)\ndef compute_accuracy(Pr, y):\n return (Pr.argmax(axis=1)==y).mean()\n\nW = np.random.uniform(low=-1, high=1, size=(28*28,10))\nb = np.random.uniform(low=-1, high=1, size=10)\nscore = 0\nN=20000\nbatch_size = 128\nlearning_rate = 0.5\nfor i in range(0, N):\n if (i+1)%2000==0: \n test_score = compute_accuracy(compute_Pr(test_X), test_y)*100 \n train_score = compute_accuracy(compute_Pr(train_X), train_y)*100\n print(i+1, \"%5.2f%%\"%test_score, \"%5.2f%%\"%train_score)\n # 隨機選出一些訓練資料出來\n rndidx = np.random.choice(train_X.shape[0], batch_size, replace=False)\n X, y = train_X[rndidx], train_y[rndidx]\n # 一次計算所有的 Pr\n Pr = compute_Pr(X)\n # 計算平均 gradient \n Pr_one_y = Pr-np.eye(10)[y]\n gradb = Pr_one_y.mean(axis=0)\n gradW = X.T @ (Pr_one_y) / batch_size\n # 更新 W 和 ba\n W -= learning_rate * gradW\n b -= learning_rate * gradb", "最後得到的準確率是 92%-93%\n不算完美,不過畢竟這只有一個矩陣而已。\n光看數據沒感覺,我們來看看前十筆測試資料跑起來的情形\n可以看到前十筆只有錯一個", "Pr = compute_Pr(test_X[:10])\npred_y =Pr.argmax(axis=1)\nfor i in range(10):\n print(pred_y[i], test_y[i])\n showX(test_X[i])", "看看前一百筆資料中,是哪些情況算錯", "Pr = compute_Pr(test_X[:100])\npred_y = Pr.argmax(axis=1)\nfor i in range(100):\n if pred_y[i] != test_y[i]:\n print(pred_y[i], test_y[i])\n showX(test_X[i])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jarrison/trEFM-learn
Examples/demo.ipynb
mit
[ "Welcome!\nLet's start by assuming you have downloaded the code, and ran the setup.py . This demonstration will show the user how predict the time constant of their trEFM data using the methods of statistical learning. Let's start by importing the data simulation module trEFMlearn package. This package contains methods to numerically simulate some experimental data.", "import numpy as np\nimport matplotlib.pyplot as plt\nfrom trEFMlearn import data_sim\n%matplotlib inline", "Simulation\nYou can create an array of time constants that you would like to simulate the data for. This array can then be input into the simulation function which simulates the data as well as fits it using standard vector regression. This function can take a few minutes dpending on the number of time constants you provide. Run this cell and wait for the function to complete. There may be an error that occurs, don't fret as this has no effect.", "tau_array = np.logspace(-8, -5, 100)\n\nfit_object, fit_tau = data_sim.sim_fit(tau_array)", "Neato!\nLooks like that function is all done. We now have an SVR Object called \"fit_object\" as well as a result of the fit called \"fit_tau\". Let's take a look at the result of the fit by comparing it to the actual input tau.", "plt.figure()\nplt.title('Fit Time Constant vs. Actual')\nplt.plot(fit_tau, 'bo')\nplt.plot(tau_array,'g')\nplt.ylabel('Tau (s)')\nplt.yscale('log')\nplt.show()\n\n# Calculate the error at each measurement.\nerror = (tau_array - fit_tau) / tau_array\n\nplt.figure()\nplt.title('Error Signal')\nplt.plot(tau_array, error)\nplt.ylabel('Error (%)')\nplt.xlabel('Time Constant (s)')\nplt.xscale('log')\nplt.show()", "Clearly the SVR method is quite capable of reproducing the time constants simulated data using very simple to calculate features. We observe some lower limit to the model's ability to calculate time constants, which is quite interesting. However, this lower limit appears below 100 nanoseconds, a time-scale that is seldom seen in the real world. This could be quite useful for extracting time constant data!\nAnalyzing a Real Image\nThe Data\nIn order to assess the ability of the model to apply to real images, I have taken a trEFM image of an MDMO photovoltaic material. There are large aggregates of acceptor material that should show a nice contrast in the way that they generate and hold charge. Each pixel of this image has been pre-averaged before being saved with this demo program. Each pixel is a measurement of the AFM cantilever position as a function of time. \nThe Process\nOur mission is to extract the time constant out of this signal using the SVR fit of our simulated data. We accomplish this by importing and calling the \"process_image\" function.", "from trEFMlearn import process_image", "The image processing function needs two inputs. First we show the function the path to the provided image data. We then provide the function with the SVR object that was previously generated using the simulated cantilever data. Processing this image should only take 15 to 30 seconds.", "tau_img, real_sum_img, fft_sum_img, amp_diff_img = process_image.analyze_image('.\\\\image data\\\\', fit_object)", "Awesome. That was pretty quick huh? Without this machine learning method, the exact same image we just analyzed takes over 8 minutes to run. Yes! Now let's take a look at what we get.", "# Something went wrong in the data on the first line. 
Let's skip it.\ntau_img = tau_img[1:]\nreal_sum_img = real_sum_img[1:]\nfft_sum_img = fft_sum_img[1:]\namp_diff_img = amp_diff_img[1:]\n\nplt.figure() \nupper_lim = (tau_img.mean() + 2*tau_img.std())\nlower_lim = (tau_img.mean() - 2*tau_img.std())\nplt.imshow(tau_img,vmin=lower_lim, vmax=upper_lim,cmap = 'cubehelix')\nplt.show()", "You can definitely begin to make out some of the structure that is occuring in the photovoltaic performance of this device. This image looks great, but there are still many areas of improvement. For example, I will need to extensively prove that this image is not purely a result of topographical cross-talk. If this image is correct, this is a significant improvement on our current imaging technique.\nThe Features\nIn the next cell we show an image of the various features that were calculated from the raw deflection signal. Some features more clearly matter than others and indicate that the search for better and more representative features is desirable. However, I think this is a great start to a project I hope to continue developing in the future.", "fig, axs = plt.subplots(nrows=3)\naxs[0].imshow(real_sum_img ,'hot')\naxs[0].set_title('Total Signal Sum')\n\naxs[1].imshow(fft_sum_img, cmap='hot')\naxs[1].set_title('Sum of the FFT Power Spectrum')\n\naxs[2].imshow(amp_diff_img, cmap='hot')\naxs[2].set_title('Difference in Amplitude After Trigger')\nplt.tight_layout()\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Timonzimm/CS-401
HW02/Homework 2.ipynb
mit
[ "02 - Data from the Web\nDeadline\nWednesday October 25, 2017 at 11:59PM\nImportant Notes\n\nMake sure you push on GitHub your Notebook with all the cells already evaluated (i.e., you don't want your colleagues to generate unnecessary Web traffic during the peer review)\nDon't forget to add a textual description of your thought process, the assumptions you made, and the solution you plan to implement!\nPlease write all your comments in English, and use meaningful variable names in your code.\n\nBackground\nIn this homework we will extract interesting information from www.toptop_universities.com and www.timeshighereducation.com, two platforms that maintain a global ranking of worldwide top_universities. This ranking is not offered as a downloadable dataset, so you will have to find a way to scrape the information we need!\nYou are not allowed to download manually the entire ranking -- rather you have to understand how the server loads it in your browser. For this task, Postman with the Interceptor extension can help you greatly. We recommend that you watch this brief tutorial to understand quickly how to use it.\nAssignment\n\nObtain the 200 top-ranking top_universities in www.toptop_universities.com (ranking 2018). In particular, extract the following fields for each university: name, rank, country and region, number of faculty members (international and total) and number of students (international and total). Some information is not available in the main list and you have to find them in the details page.\nStore the resulting dataset in a pandas DataFrame and answer the following questions:\nWhich are the best top_universities in term of: (a) ratio between faculty members and students, (b) ratio of international students?\nAnswer the previous question aggregating the data by (c) country and (d) region.\n\nPlot your data using bar charts and describe briefly what you observed.\n\n\nObtain the 200 top-ranking top_universities in www.timeshighereducation.com (ranking 2018). Repeat the analysis of the previous point and discuss briefly what you observed.\n\n\nMerge the two DataFrames created in questions 1 and 2 using university names. Match top_universities' names as well as you can, and explain your strategy. Keep track of the original position in both rankings.\n\n\nFind useful insights in the data by performing an exploratory analysis. Can you find a strong correlation between any pair of variables in the dataset you just created? Example: when a university is strong in its international dimension, can you observe a consistency both for students and faculty members?\n\n\nCan you find the best university taking in consideration both rankings? 
Explain your approach.\n\n\nHints:\n- Keep your Notebook clean and don't print the verbose output of the requests if this does not add useful information for the reader.\n- In case of tie, use the order defined in the webpage.", "import requests, re, html\nimport pandas as pd\nimport numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nfrom bs4 import BeautifulSoup\n\nfrom tqdm import tqdm_notebook\nimport warnings\nwarnings.filterwarnings('ignore')\n\nNUM_OBS = 200", "www.topuniversities.com", "root_url_1 = 'https://www.topuniversities.com'\n# we use the link to the API from where the website fetches its data instead of BeautifulSoup\n# much much cleaner\nlist_url_1 = root_url_1 + '/sites/default/files/qs-rankings-data/357051_indicators.txt'\n\nr = requests.get(list_url_1)\ntop_universities = pd.DataFrame()\ntop_universities = top_universities.from_dict(r.json()['data'])[['uni', 'overall_rank', 'location', 'region']]\n# get the university name and details URL with a regex\ntop_universities['name'] = top_universities['uni'].apply(lambda name: html.unescape(re.findall('<a[^>]+href=\\\"(.*?)\\\"[^>]*>(.*)?</a>', name)[0][1]))\ntop_universities['url'] = top_universities['uni'].apply(lambda name: html.unescape(re.findall('<a[^>]+href=\\\"(.*?)\\\"[^>]*>(.*)?</a>', name)[0][0]))\ntop_universities.drop('uni', axis=1, inplace=True)\ntop_universities['overall_rank'] = top_universities['overall_rank'].astype(int)\n\n# selects the top N rows based on the colum_name of the dataframe df \ndef select_top_N(df, column_name, N):\n df = df.sort_values(by=column_name)\n df = df[df[column_name] <= N]\n return df\n\n# get only the first top-200 universities by overall rank\ntop_universities = select_top_N(top_universities, 'overall_rank', NUM_OBS)\ntop_universities.head()\n\nstudents_total = []\nstudents_inter = []\nfaculty_total = []\nfaculty_inter = []\n\ndef get_num(soup, selector):\n scraped = soup.select(selector)\n # Some top_universities don't have stats, return NaN for these case\n if scraped:\n return int(scraped[0].contents[0].replace(',', ''))\n else:\n return np.NaN\n\n\nfor details_url in tqdm_notebook(top_universities['url']):\n soup = BeautifulSoup(requests.get(root_url_1 + details_url).text, 'html.parser')\n \n students_total.append(get_num(soup, 'div.total.student div.number'))\n students_inter.append(get_num(soup, 'div.total.inter div.number'))\n faculty_total.append(get_num(soup, 'div.total.faculty div.number'))\n faculty_inter.append(get_num(soup, 'div.inter.faculty div.number'))\n\n\ntop_universities['students_total'] = students_total\ntop_universities['students_international'] = students_inter\ntop_universities['students_national'] = top_universities['students_total'] - top_universities['students_international']\ntop_universities['faculty_total'] = faculty_total\ntop_universities['faculty_international'] = faculty_inter\ntop_universities['faculty_national'] = top_universities['faculty_total'] - top_universities['faculty_international']\n\ntop_universities.head()\n\n#defining colors for each type of plot\ncolors_1 = ['#FF9F9A', '#D0BBFF']\ncolors_2 = ['#92C6FF', '#97F0AA']\nplt.style.use('ggplot')", "Best universities in term of:\nWe selected the top 10 universities in point (a) and (b). 
For point (c) and (d), the top 200 universities were used in order to have more data.\n(a) ratio between faculty members and students", "top = 10\ntop_universities_ratio = select_top_N(top_universities, 'overall_rank', top)\n\ntop_universities_ratio_sf = top_universities_ratio[['name', 'students_total', 'faculty_total']]\ntop_universities_ratio_sf = top_universities_ratio_sf.set_index(['name'])\ntop_universities_ratio_sf.index.name = None\n\nfig, axes = plt.subplots(1, 1, figsize=(10,5), sharey=True)\ntop_universities_ratio_sf.plot.bar(stacked=True, color=colors_1, ax=axes)\naxes.set_title('Top 10 ratio\\'s between students and faculty members among universities')\naxes.legend(labels=['students', 'faculty members'], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)\nfig.autofmt_xdate()\nplt.show()", "Comments: We see that it is rather difficult to compare the ratios of the different universities. This is due to the different sizes of the population. In order to draw more precise information about it, we need to normalize the data with repect to each university.", "# normalize the data to be able to make a good comparison\ntop_universities_ratio_normed = top_universities_ratio_sf.div(top_universities_ratio_sf.sum(1), axis=0).sort_values(by='faculty_total', ascending=False)\ntop_universities_ratio_normed.index.name = None\n\nfig, axes = plt.subplots(1, 1, figsize=(10,5), sharey=True)\ntop_universities_ratio_normed.plot.bar(stacked=True, color=colors_1, ax=axes)\naxes.set_title('Top 10 ratio\\'s between students and faculty members among universities')\naxes.legend(labels=['students', 'faculty members'], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)\n# we can restrict the range on the y axis to avoid displaying unnecessary content\naxes.set_ylim([0.7,1])\nfig.autofmt_xdate()\nplt.show()", "Comments: You noticed that the y-axis ranges from 0.7 to 1. We limited the visualization to this interval because the complete interval does not add meaningful insight about the data. Analyzing the results, we see that the Caltech university is the university in the top 10 offering more faculty members to its students. ETHZ is in the last position.\n(b) ratio of international students", "top_universities_ratio_s = top_universities_ratio[['name', 'students_international', 'students_national']]\ntop_universities_ratio_s = top_universities_ratio_s.set_index(['name'])\ntop_universities_ratio_s_normed = top_universities_ratio_s.div(top_universities_ratio_s.sum(1), axis=0).sort_values(by='students_international', ascending=False)\ntop_universities_ratio_s_normed.index.name = None\n\nfig, axes = plt.subplots(1, 1, figsize=(10, 5))\ntop_universities_ratio_s_normed.plot.bar(stacked=True, color=colors_2, ax=axes)\naxes.set_title('Top 10 ratio\\'s of international and national students among universities')\naxes.legend(labels=['international students', 'national students'], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)\naxes.set_ylim([0, 0.6])\nfig.autofmt_xdate()\nplt.show()", "Comments: The most international university, by its students, among the top 10 universities is the Imperial College London. 
Notice that ETHZ is in the third position.\n(c) same comparisons by country", "ratio_country_sf = top_universities.groupby(['location'])['students_total', 'faculty_total'].sum()\nratio_country_sf_normed = ratio_country_sf.div(ratio_country_sf.sum(1), axis=0).sort_values(by='faculty_total', ascending=False)\nratio_country_sf_normed.index.name = None\n\nfig, axes = plt.subplots(1, 1, figsize=(15, 5))\nratio_country_sf_normed.plot.bar(stacked=True, color=colors_1, ax=axes)\naxes.set_title('Ratio of students and faculty members by country')\naxes.legend(labels=['students', 'faculty members'], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)\naxes.set_ylim([0.8,1])\nfig.autofmt_xdate()\nplt.show()\n\nratio_country_s = top_universities.groupby(['location'])['students_international', 'students_national'].sum()\nratio_country_s_normed = ratio_country_s.div(ratio_country_s.sum(1), axis=0).sort_values(by='students_international', ascending=False)\nratio_country_s_normed.index.name = None\n\nfig, axes = plt.subplots(1, 1, figsize=(15, 5))\nratio_country_s_normed.plot.bar(stacked=True, color=colors_2, ax=axes)\naxes.set_title('Ratio of international and national students by country')\naxes.legend(labels=['international students', 'national students'], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)\naxes.set_ylim([0, 0.4])\nfig.autofmt_xdate()\nplt.show()", "Comments: Aggregating the data by country, we see that Russia is the country offering more faculty members for its student, followed by Danemark and Saudi Arabia. The most international university in terms of students is Australia, followed by United Kingdom and Hong Kong. Switzerland is in the fifth position and India is the country with the lowest ratio of international students. \n(d) same comparisons by region", "ratio_region_s = top_universities.groupby(['region'])['students_total', 'faculty_total'].sum()\nratio_region_s_normed = ratio_region_s.div(ratio_region_s.sum(1), axis=0).sort_values(by='faculty_total', ascending=False)\nratio_region_s_normed.index.name = None\n\nfig, axes = plt.subplots(1, 1, figsize=(10,5), sharey=True)\nratio_region_s_normed.plot.bar(stacked=True, color=colors_1, ax=axes)\naxes.set_title('Ratio of students and faculty members by region')\naxes.legend(labels=['students', 'faculty members'], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)\naxes.set_ylim([0.8,1])\naxes.yaxis.grid(True)\nfig.autofmt_xdate()\nplt.show()\n\nratio_region_s = top_universities.groupby(['region'])['students_international', 'students_national'].sum()\nratio_region_s_normed = ratio_region_s.div(ratio_region_s.sum(1), axis=0).sort_values(by='students_international', ascending=False)\nratio_region_s_normed.index.name = None\n\nfig, axes = plt.subplots(1, 1, figsize=(10,5), sharey=True)\nratio_region_s_normed.plot.bar(stacked=True, color=colors_2, ax=axes)\naxes.set_title('Ratio of international and national students by region')\naxes.legend(labels=['international students', 'national students'], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)\naxes.set_ylim([0,0.4])\naxes.yaxis.grid(True)\nfig.autofmt_xdate()\nplt.show()", "Comments: Asia is the region offering more faculty members to its students. It is followed by North America and Europe. The most international university in terms of students is Oceania. Europe is second.\nAnalysis of the two methods\nWe get consistent results comparing the results obtained by region or by country about the ratio of international students. By country, we get Australia and by region, Oceania. 
This makes sense as Australia owns nine of the eleven top_universities of Oceania. \nwww.timeshighereducation.com", "# we repeat the same procedure as for www.topuniversities.com\nroot_url_2 = 'https://www.timeshighereducation.com'\nlist_url_2 = root_url_2 + '/sites/default/files/the_data_rankings/world_university_rankings_2018_limit0_369a9045a203e176392b9fb8f8c1cb2a.json'\n\nr = requests.get(list_url_2)\ntimes_higher_education = pd.DataFrame()\ntimes_higher_education = times_higher_education.from_dict(r.json()['data'])[['rank', 'location', 'location', 'name', 'url', 'stats_number_students', 'stats_pc_intl_students', 'stats_student_staff_ratio']]\n\n# rename columns as is the first dataframe\ntimes_higher_education.columns = ['overall_rank', 'location', 'region', 'name', 'url', 'students_total', 'ratio_inter', 'student_staff_ratio']\n\n\n# as the ranks have different represetation we had to delete the '=' in front of universities that have the same rank,\n# rewrite the rank when it is represented as an interval (ex: 201-250) and finally delete the '+' in the end for the last ones \ntimes_higher_education['overall_rank'] = times_higher_education['overall_rank'].apply(lambda rank: re.sub('[=]', '', rank))\ntimes_higher_education['overall_rank'] = times_higher_education['overall_rank'].apply(lambda rank: rank.split('–')[0])\ntimes_higher_education['overall_rank'] = times_higher_education['overall_rank'].apply(lambda rank: re.sub('[+]', '', rank)).astype(int)\n\n# remaps a ranking in order to make selection by ranking easier\n# ex: 1,2,3,3,5,6,7 -> 1,2,3,3,4,5,6\ndef remap_ranking(rank):\n last=0\n for i in range(len(rank)):\n if last == rank[i]-1:\n #no problem\n last = rank[i]\n elif last != rank[i]:\n last = last+1\n rank[i] = last\n rank[(i+1):] = rank[(i+1):]-1\n return rank\n\ntimes_higher_education['overall_rank'] = remap_ranking(times_higher_education['overall_rank'].copy())\n\n\n# in the following lines we make the necessary transformation in order to get the right type or numbers for each column\ntimes_higher_education['students_total'] = times_higher_education['students_total'].apply(lambda x: re.sub('[^0-9]','', x)).astype(int) \ntimes_higher_education['ratio_inter'] = times_higher_education['ratio_inter'].apply(lambda x: re.sub('[^0-9]','', x)).astype(float) \ntimes_higher_education['student_staff_ratio'] = times_higher_education['student_staff_ratio'].astype(float) \n\ntimes_higher_education['students_international'] = (times_higher_education['students_total'] * (times_higher_education['ratio_inter']/100)).astype(int)\ntimes_higher_education['students_national'] = times_higher_education['students_total'] - times_higher_education['students_international']\n\ntimes_higher_education['faculty_total'] = (times_higher_education['students_total'] / times_higher_education['student_staff_ratio']).astype(int)\ntimes_higher_education['faculty_international'] = np.NaN\ntimes_higher_education['faculty_national'] = np.NaN \n\ntimes_higher_education['region'] = np.NaN\n\n# resolve ties\ntimes_higher_education['overall_rank'] = np.arange(1, times_higher_education.shape[0]+1)\n\n\n# resolve N/A region\nloc_to_reg = top_universities[['location', 'region']]\nloc_to_reg = set(loc_to_reg.apply(lambda x: '{}_{}'.format(x['location'], x['region']), axis=1).values)\nloc_to_reg = {x.split('_')[0]: x.split('_')[1] for x in loc_to_reg}\nfrom collections import defaultdict\nloc_to_reg = defaultdict(lambda: 'N/A', loc_to_reg)\ndef resolve_uni(x):\n x['region'] = loc_to_reg[x['location']]\n return 
x\n\ntimes_higher_education = times_higher_education.apply(resolve_uni, axis=1)\n\ndel times_higher_education['ratio_inter']\ndel times_higher_education['student_staff_ratio']\n\n# get only the first top-200 universities by overall rank\ntimes_higher_education = select_top_N(times_higher_education, 'overall_rank', NUM_OBS)\ntimes_higher_education.head()", "Best universities in term of:\nWe selected the top 10 universities in point (a) and (b). For point (c) and (d), the top 200 universities were used in order to have more data.\n(a) ratio between faculty members and students", "top = 10\ntimes_higher_education_ratio = select_top_N(times_higher_education, 'overall_rank', top)\n\ntimes_higher_education_ratio_sf = times_higher_education_ratio[['name', 'students_total', 'faculty_total']]\ntimes_higher_education_ratio_sf = times_higher_education_ratio_sf.set_index(['name'])\ntimes_higher_education_ratio_normed = times_higher_education_ratio_sf.div(times_higher_education_ratio_sf.sum(1), axis=0).sort_values(by='faculty_total', ascending=False)\ntimes_higher_education_ratio_normed.index.name = None\n\nfig, axes = plt.subplots(1, 1, figsize=(10,5), sharey=True)\ntimes_higher_education_ratio_normed.plot.bar(stacked=True, color=colors_1, ax=axes)\naxes.set_title('Top 10 ratio\\'s between students and faculty members among universities')\naxes.legend(labels=['students', 'faculty members'], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)\naxes.set_ylim([0.8,1])\nfig.autofmt_xdate()\nplt.show()", "Comments: The university of Chicago is the faculty with the more faculty members by students. It is closely followed by the California Institute of Technology.\n(b) ratio of international students", "times_higher_education_ratio_s = times_higher_education_ratio[['name', 'students_international', 'students_national']]\ntimes_higher_education_ratio_s = times_higher_education_ratio_s.set_index(['name'])\ntimes_higher_education_ratio_s_normed = times_higher_education_ratio_s.div(times_higher_education_ratio_s.sum(1), axis=0).sort_values(by='students_international', ascending=False)\ntimes_higher_education_ratio_s_normed.index.name = None\n\nfig, axes = plt.subplots(1, 1, figsize=(10, 5))\ntimes_higher_education_ratio_s_normed.plot.bar(stacked=True, color=colors_2, ax=axes)\naxes.set_title('Top 10 ratio\\'s of international and national students among universities')\naxes.legend(labels=['international students', 'national students'], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)\naxes.set_ylim([0.2, 0.6])\nfig.autofmt_xdate()\nplt.show()", "Comments: The Imperial College Longon university has a strong lead in the internationalization of its student. Oxford and ETHZ are following bunched together.\n(c) same comparisons by country", "ratio_country_sf = times_higher_education.groupby(['location'])['students_total', 'faculty_total'].sum()\nratio_country_sf_normed = ratio_country_sf.div(ratio_country_sf.sum(1), axis=0).sort_values(by='faculty_total', ascending=False)\nratio_country_sf_normed.index.name = None\n\nfig, axes = plt.subplots(1, 1, figsize=(15, 5))\nratio_country_sf_normed.plot.bar(stacked=True, color=colors_1, ax=axes)\naxes.set_title('Ratio of students and faculty members by country')\naxes.legend(labels=['students', 'faculty members'], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)\naxes.set_ylim([0.8,1])\nfig.autofmt_xdate()\nplt.show()", "Comments: Denmark is in the first position. We find the Russian Federation in the second place. 
This is the same result obtained with the top universities website the other way around. This shows that either the universities of each country have different ranking in each website or each website has different information about each university.", "ratio_country_s = times_higher_education.groupby(['location'])['students_international', 'students_national'].sum()\nratio_country_s_normed = ratio_country_s.div(ratio_country_s.sum(1), axis=0).sort_values(by='students_international', ascending=False)\nratio_country_s_normed.index.name = None\n\nfig, axes = plt.subplots(1, 1, figsize=(15, 5))\nratio_country_s_normed.plot.bar(stacked=True, color=colors_2, ax=axes)\naxes.set_title('Ratio of international and national students by country')\naxes.legend(labels=['international students', 'national students'], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)\naxes.set_ylim([0, 0.6])\nfig.autofmt_xdate()\nplt.show()", "Comments: Luxembourg has more international than national students which allows it to be in first position without difficulty. Switzerland is in the sixth position (versus fifth for top university website).\n(d) same comparisons by region", "# Some countries have their field 'region' filled with 'N/A': this is due to the technique we used to write the\n# correct region for each university. In the sample we are considering, let's see how many universities are concerned:\ntimes_higher_education[times_higher_education['region'] == 'N/A']\n\n# As there is only two universities concerned, we can rapidly write it by hand. Of course we should have develop a\n# more generalized manner to do it, if we had a much larger sample.\ntimes_higher_education.set_value(178, 'region', 'Europe')\ntimes_higher_education.set_value(193, 'region', 'Europe')\n\nratio_region_s = times_higher_education.groupby(['region'])['students_total', 'faculty_total'].sum()\nratio_region_s_normed = ratio_region_s.div(ratio_region_s.sum(1), axis=0).sort_values(by='faculty_total', ascending=False)\nratio_region_s_normed.index.name = None\n\nfig, axes = plt.subplots(1, 1, figsize=(10,5), sharey=True)\nratio_region_s_normed.plot.bar(stacked=True, color=colors_1, ax=axes)\naxes.set_title('Ratio of students and faculty members by region')\naxes.legend(labels=['students', 'faculty members'], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)\naxes.set_ylim([0.8,1])\naxes.yaxis.grid(True)\nfig.autofmt_xdate()\nplt.show()\n\nratio_region_s = times_higher_education.groupby(['region'])['students_international', 'students_national'].sum()\nratio_region_s_normed = ratio_region_s.div(ratio_region_s.sum(1), axis=0).sort_values(by='students_international', ascending=False)\nratio_region_s_normed.index.name = None\n\nfig, axes = plt.subplots(1, 1, figsize=(10,5), sharey=True)\nratio_region_s_normed.plot.bar(stacked=True, color=colors_2, ax=axes)\naxes.set_title('Ratio of international and national students by region')\naxes.legend(labels=['international students', 'national students'], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)\naxes.set_ylim([0,0.4])\naxes.yaxis.grid(True)\nfig.autofmt_xdate()\nplt.show()", "Comments: In the first plot, we see that Africa is the region where there is more faculty members by students. The two following regions are very close to each other. In the second plot, Oceania is the more internationalized school in terms of its students and Europe is second. 
We had similar results by the other website concerning this last outcome.", "# Detects same universities with different names in the two dataframes before merging\n# using Jaccard similarity and same location rule (seems to keep matching entry)\ndef t(x):\n # Compute Jaccard score (intersection over union)\n def jaccard(a, b):\n u = set(a.split(' '))\n v = set(b.split(' '))\n return len(u.intersection(v)) / len(u.union(v))\n \n names = top_universities['name'].tolist()\n locations = top_universities['location'].tolist()\n scores = np.array([jaccard(x['name'], n) for n in names])\n m = scores.max()\n i = scores.argmax()\n # Jaccard score for similarity and location match to filter out name with different locations\n if m > 0.5 and x['location'] == locations[i]:\n x['name'] = names[i]\n return x\n \n# Match universities name in both dataframes \ntimes_higher_education = times_higher_education.apply(t, axis=1)\n# Intersection on the name column of the two datasets \nmerged = pd.merge(top_universities, times_higher_education, on='name', how='inner')\n\nmerged.head()", "Insights\nHere we first proceed by creating the correlation matrix (since it's a symetric matrix we only kept the lower triangle). We then plot it using a heatmap to see correlation between columns of the dataframe. We also made another heatmap with only correlation whose absolute value is greater than 0.5. Finally we averaged the features when they were available for the two websites (except rankings).\nCorrelations analysis\nSome correlations bring interesting information:\n- $Corr(overall_rank_x, overall_rank_y) = 0.7$ <br />\nWe get a strong correlation between the ranking of the first website and the second one. It shows us that the two website ranking methods lead on similar results (since the correlation is positive). It's insightfull since even if the features are approximately the same for the two websites, their methodology to attribute a rank could be really different. This important positive correlation reveals that the methodologies doesn't differ so much between the two websites.\n- $Corr(students_international_avg, faculty_international_avg) = 0.59$ <br />\nHere we have an interesting correlation between the number of international students and the number of international staff members.\n- We have strong correlation between same features but coming from different websites. It's not really interesting since difference in same features from the two websites are likely to be small. Also we have important correlation between \"total\" features and their sub-categories like \"international\" and \"national\". These are not interesting too because they follow a simple relation: when the total is higher, the sub-categories are likely to be higher numbers too (i.e. 
if we have more students, we are likely to have more national or international students).", "merged_num = merged.select_dtypes(include=[np.number])\nmerged_num.dropna(how='all', axis=1)\nmerged_num.dropna(how='any', axis=0)\n\ndef avg_feature(x):\n cols = set([x for x in x.index if 'overall' not in x])\n cols_common = set([x[0:-2] for x in cols])\n for cc in cols_common:\n cc_x = '{}_x'.format(cc)\n cc_y = '{}_y'.format(cc)\n if cc_y in cols:\n x['{}_avg'.format(cc)] = (x[cc_x] + x[cc_y]) / 2\n else:\n x['{}_avg'.format(cc)] = x[cc_x] / 2\n for c in cols:\n del x[c]\n return x\n \nmerged_num_avg = merged_num.apply(avg_feature, axis=1)\n\nmerged_num.head()\ncorr = merged_num.corr()\nmask = np.zeros_like(corr, dtype=np.bool)\nmask[np.triu_indices_from(mask)] = True\n\nfig, ax = plt.subplots(figsize=(10,10))\nsns.heatmap(corr, ax=ax, mask=mask, annot=True, square=True)\nplt.show()\n\n# Keep only correlation with and absolute value superior to 0.5 \ncorr[(corr < 0.5) & (corr > -0.5)] = 0\nfig, ax = plt.subplots(figsize=(10,10))\nsns.heatmap(corr, ax=ax, mask=mask, annot=True, square=True)\nplt.show()\n\n# Keep only correlation with and absolute value superior to 0.5 for averaged features\ncorr = merged_num_avg.corr()\nmask = np.zeros_like(corr, dtype=np.bool)\nmask[np.triu_indices_from(mask)] = True\ncorr[(corr < 0.5) & (corr > -0.5)] = 0\nfig, ax = plt.subplots(figsize=(10,10))\nsns.heatmap(corr, ax=ax, mask=mask, annot=True, square=True)\nplt.show()", "Best university\nFirst we have to transform ranking in some score. Here we assume a linear relation for the score given the ranking,\nso we gave a score of 1 for the best ranking and 0 for the worst ranking with linear mapping between these two. We did it for each of the ranking (the two websites).\nAlso we don't really know if a website is most trustworthy than the other, so a good merging for the ranking would be to take the average of the two scores with equal weights for each score.\nFinally we also took into account the ratio of staff member per students:\n$finalScore = mean(score1, score2, staff per studiants)$\nAfter computing these values, we found that Caltech is the best university (according to our assumptions).\nPer Website ranking:\nCaltech: top_universities -> 4 | times_higher_education = -> 3 | staff per student ratio -> 0.15 | => final score: 0.71", "r = merged[['name', 'overall_rank_x', 'overall_rank_y']]\nr.head()\n\ndef lin(df):\n best_rank = df.min() \n worst_rank = df.max() \n a = 1 / (best_rank - worst_rank)\n b = 1 - a*best_rank\n return df.apply(lambda x: a*x + b)\n\nr['stud_staff_ratio'] = merged[['faculty_international_x', 'faculty_international_y']].mean(axis=1) / \\\n merged[['students_total_x', 'students_total_y']].mean(axis=1)\nr['score_x'] = lin(r['overall_rank_x'])\nr['score_y'] = lin(r['overall_rank_y'])\nr['overall_score'] = r[['score_x', 'score_y', 'stud_staff_ratio']].mean(axis=1) \nr = r.dropna()\n\nr[r['overall_score'] == r['overall_score'].max()]" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
dwhswenson/contact_map
examples/changing_defaults.ipynb
lgpl-2.1
[ "Changing defaults\nUp until now, we've only used the default parameters. Contact Map Explorer always determines contacts based on atom-atom distances, i.e., residues are considered in contact if a pair of an atom from one residue is within a cutoff distance of an atom from another residue. But which atoms? And what is that cutoff distance? These decisions can be customized, and you can get improved performance by customizing them.", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport mdtraj as md\ntraj = md.load(\"5550217/kras.xtc\", top=\"5550217/kras.pdb\")\ntopology = traj.topology\n\nfrom contact_map import ContactFrequency", "Customizing the atoms involved in the contact\nContactFrequency takes the parameters query and haystack, which are lists of atom indices. It will then search for all contacts between atoms in query and atoms in haystack. This allows you to, for example, focus on the contacts between two distinct parts of a protein. By only including some atoms in the search, the contacts are calculated more quickly.\nThis also allows you to fundamentally change the definition of a contact by making it about C$_\\alpha$ or about all atoms, instead of heavy atoms as is the default (though if you change that, you should also change the cutoff value). \nIn general, it is easiest to get the list of atom indices from MDTraj using its atom selection language. The default behavior is to look for contacts between all heavy (i.e., non-hydrogen), non-water atoms.", "# the default selection is\ndefault_selection = topology.select(\"not water and symbol != 'H'\")\nprint(len(default_selection))", "Note that the general assumption is that the query is no larger than the haystack. If this isn't obeyed, you'll still get correct answers, but some algorithms may be less efficient, and visualizations have also been designed with this in mind.\nChanging the query\nNow let's focus in on contacts involving specific regions of KRas. In your work, this might be contacts between different parts of one molecule, or contacts between two different molecules, such as in drug binding or DNA-protein interactions. First, let's look at the contacts between the switch 1 region and all other atoms in our default selection. So switch 1 will be our query.\nMDTraj allows queries based on different numbering systems: resid and resSeq. The resid is the internally-used residue number, and starts from 0. On the other hand, resSeq is the residue number given in the PDB, which usually starts from 1 (and is the number we usually refer to in literature).", "switch1 = topology.select(\"resSeq 32 to 38 and symbol != 'H'\")\n\n%%time\nsw1_contacts = ContactFrequency(trajectory=traj, query=switch1)\n\nsw1_contacts.residue_contacts.plot();", "This shows all contacts of switch 1 with anything else in the system. Here, we automatically zoom in to have query on the x axis and the rest on the y axis. The boxes are long rectangles instead of squares as in the default selection. The box represents the residue number (in the resid numbering system) that is to its left and under it. Let's also zoom out to see the complete symmetric plot instead:", "fig, ax = sw1_contacts.residue_contacts.plot()\nax.set_xlim(0, sw1_contacts.residue_contacts.max_size)\nax.set_ylim(0, sw1_contacts.residue_contacts.max_size);", "Changing query and haystack\nWhat if we wanted to zoom in even more, and only look at the contacts between the switch 1 and cations in the system? We make one of the the query and the other the haystack. 
Since switch1 contains more atoms than cations, we'll use switch1 as the haystack.", "cations = topology.select(\"resname NA or resname MG\")\n\n%%time\ncations_switch1 = ContactFrequency(trajectory=traj, query=cations, haystack=switch1)\n\ncations_switch1.residue_contacts.plot();", "Now we'll plot again, but we'll change the x and y axes so that we now can see switch 1 along x and cations (the query) along y:", "fig, ax = cations_switch1.residue_contacts.plot()\nax.set_xlim(*cations_switch1.haystack_residue_range) \nax.set_ylim(*cations_switch1.query_residue_range);", "Here you can see that the most significant contacts here are between residue 36 and the ion listed as residue 167. Let's see just how frequently that contact is made:", "print(cations_switch1.residue_contacts.counter[frozenset([36, 167])])", "So about half the time. Now, which residue/ion are these? Remember, these indices start at 0, even though the tradition in science (and the PDB) is to count from 1. Furthermore, the PDB residue numbers for the ions skip the section of the protein that has been removed. But we can easily obtain the relevant residues:", "print(topology.residue(36))\nprint(topology.residue(167))", "So this is a contact between the Glu37 and the magnesium ion (which is listed as residue 202 in the PDB).\nChanging the cutoff\nDepending on the atoms you use to select contacts, you might choose different cutoff distances. The default cutoff of 0.45 nm is reasonable for heavy atom contacts. However, if you use all atoms (including hydrogens), you'll probably want a smaller cutoff distance. If you use $\\textrm{C}_\\alpha$ distances, you'll want a larger cutoff distance.\nThe cutoff distance in controlled using the cutoff parameter to ContactFrequency. The performance of Contact Map Explorer largely depends on how many atoms are within the cutoff distance, so making the cutoff distance larger while keeping the same number of atoms will have a significant effect.", "%%time\nlarge_cutoff = ContactFrequency(trajectory=traj[0], cutoff=1.5)\n\n%%time\nlarge_cutoff.residue_contacts.plot();", "The larger cutoff leads to a more dense contact matrix. The performance of plotting depends on how dense the contact matrix is -- for tricks to plot dense matrices more quickly, see the documentation on customizing plotting.\nChanging the number of ignored neighbors\nBy default, Contact Map Explorer ignore atoms from 2 residues on either side of the given residue (and in the same chain). This is easily changed. However, even when you say to ignore no neighbors, you still ignore the residue's interactions with itself.\nNote: for non-protein contacts, the chain is often poorly defined. In this example, the GTP and the Mg are listed sequentially in residue order, and therefore they are considered \"neighbors\" and their contacts are ignored.", "%%time\nignore_none = ContactFrequency(trajectory=traj, n_neighbors_ignored=0)\n\nignore_none.residue_contacts.plot();", "Refresher: The default parameters\n\nWhich atoms are involved in the contact? The default value is non-hydrogen, non-water atoms.\nWhat is the cutoff distance? The default value is 0.45 nm.\nHow many neighboring residues are ignored? The default value is to ignore 2 residues on either side (i±2)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ethen8181/machine-learning
projects/kaggle_rossman_store_sales/rossman_data_prep.ipynb
mit
[ "<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#Rossman-Data-Preparation\" data-toc-modified-id=\"Rossman-Data-Preparation-1\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>Rossman Data Preparation</a></span><ul class=\"toc-item\"><li><span><a href=\"#Individual-Data-Source\" data-toc-modified-id=\"Individual-Data-Source-1.1\"><span class=\"toc-item-num\">1.1&nbsp;&nbsp;</span>Individual Data Source</a></span></li><li><span><a href=\"#Merging-Various-Data-Source\" data-toc-modified-id=\"Merging-Various-Data-Source-1.2\"><span class=\"toc-item-num\">1.2&nbsp;&nbsp;</span>Merging Various Data Source</a></span></li><li><span><a href=\"#Final-Data\" data-toc-modified-id=\"Final-Data-1.3\"><span class=\"toc-item-num\">1.3&nbsp;&nbsp;</span>Final Data</a></span><ul class=\"toc-item\"><li><span><a href=\"#Durations\" data-toc-modified-id=\"Durations-1.3.1\"><span class=\"toc-item-num\">1.3.1&nbsp;&nbsp;</span>Durations</a></span></li></ul></li></ul></li><li><span><a href=\"#Reference\" data-toc-modified-id=\"Reference-2\"><span class=\"toc-item-num\">2&nbsp;&nbsp;</span>Reference</a></span></li></ul></div>", "from jupyterthemes import get_themes\nfrom jupyterthemes.stylefx import set_nb_theme\nthemes = get_themes()\nset_nb_theme(themes[3])\n\n# 1. magic for inline plot\n# 2. magic to print version\n# 3. magic so that the notebook will reload external python modules\n# 4. magic to enable retina (high resolution) plots\n# https://gist.github.com/minrk/3301035\n%matplotlib inline\n%load_ext watermark\n%load_ext autoreload\n%autoreload 2\n%config InlineBackend.figure_format='retina'\n\nimport os\nimport time\nimport numba\nimport numpy as np\nimport pandas as pd\n\n%watermark -a 'Ethen' -d -t -v -p numpy,pandas,numba", "Rossman Data Preparation\nIndividual Data Source\nIn addition to the data provided by the competition, we will be using external datasets put together by participants in the Kaggle competition. We can download all of them here. Then we should untar them in the directory to which data_dir is pointing to.", "data_dir = 'rossmann'\nprint('available files: ', os.listdir(data_dir))\n\nfile_names = ['train', 'store', 'store_states', 'state_names',\n 'googletrend', 'weather', 'test']\npath_names = {file_name: os.path.join(data_dir, file_name + '.csv')\n for file_name in file_names}\n\ndf_train = pd.read_csv(path_names['train'], low_memory=False)\ndf_test = pd.read_csv(path_names['test'], low_memory=False)\nprint('training data dimension: ', df_train.shape)\nprint('testing data dimension: ', df_test.shape)\ndf_train.head()", "We turn state Holidays to booleans, to make them more convenient for modeling.", "df_train['StateHoliday'] = df_train['StateHoliday'] != '0'\ndf_test['StateHoliday'] = df_test['StateHoliday'] != '0'", "For the weather and state names data, we perform a join on a state name field and create a single dataframe.", "df_weather = pd.read_csv(path_names['weather'])\nprint('weather data dimension: ', df_weather.shape)\ndf_weather.head()\n\ndf_state_names = pd.read_csv(path_names['state_names'])\nprint('state names data dimension: ', df_state_names.shape)\ndf_state_names.head()\n\ndf_weather = df_weather.rename(columns={'file': 'StateName'})\ndf_weather = df_weather.merge(df_state_names, on=\"StateName\", how='left')\ndf_weather.head()", "For the google trend data. 
We're going to extract the state and date information from the raw dataset, also replace all instances of state name 'NI' to match the usage in the rest of the data: 'HB,NI'.", "df_googletrend = pd.read_csv(path_names['googletrend'])\nprint('google trend data dimension: ', df_googletrend.shape)\ndf_googletrend.head()\n\ndf_googletrend['Date'] = df_googletrend['week'].str.split(' - ', expand=True)[0]\ndf_googletrend['State'] = df_googletrend['file'].str.split('_', expand=True)[2]\ndf_googletrend.loc[df_googletrend['State'] == 'NI', 'State'] = 'HB,NI'\ndf_googletrend.head()", "The following code chunks extracts particular date fields from a complete datetime for the purpose of constructing categoricals.\nWe should always consider this feature extraction step when working with date-time. Without expanding our date-time into these additional fields, we can't capture any trend/cyclical behavior as a function of time at any of these granularities. We'll add to every table with a date field.", "DEFAULT_DT_ATTRIBUTES = [\n 'Year', 'Month', 'Week', 'Day', 'Dayofweek', 'Dayofyear',\n 'Is_month_end', 'Is_month_start', 'Is_quarter_end',\n 'Is_quarter_start', 'Is_year_end', 'Is_year_start'\n]\n\ndef add_datepart(df, colname, drop_original_col=False,\n dt_attributes=DEFAULT_DT_ATTRIBUTES,\n add_elapse_col=True):\n \"\"\"\n Extract various date time components out of a date column, this modifies\n the dataframe inplace.\n\n References\n ----------\n - https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#time-date-components\n \"\"\"\n df[colname] = pd.to_datetime(df[colname], infer_datetime_format=True)\n \n if dt_attributes:\n for attr in dt_attributes:\n df[attr] = getattr(df[colname].dt, attr.lower())\n\n # representing the number of seconds elapsed from 1970-01-01 00:00:00\n # https://stackoverflow.com/questions/15203623/convert-pandas-datetimeindex-to-unix-time\n if add_elapse_col:\n df['Elapsed'] = df[colname].astype(np.int64) // 10 ** 9\n if drop_original_col:\n df = df.drop(colname, axis=1)\n\n return df\n\ndf_weather.head()\n\ndf_weather = add_datepart(\n df_weather, 'Date',\n dt_attributes=None, add_elapse_col=False)\ndf_googletrend = add_datepart(\n df_googletrend, 'Date', drop_original_col=True,\n dt_attributes=['Year', 'Week'], add_elapse_col=False)\ndf_train = add_datepart(df_train, 'Date')\ndf_test = add_datepart(df_test, 'Date')\n\nprint('training data dimension: ', df_train.shape)\ndf_train.head()", "The Google trends data has a special category for the whole of the Germany - we'll pull that out so we can use it explicitly.", "df_trend_de = df_googletrend.loc[df_googletrend['file'] == 'Rossmann_DE',\n ['Year', 'Week', 'trend']]\ndf_trend_de.head()", "Merging Various Data Source\nNow we can outer join all of our data into a single dataframe. Recall that in outer joins everytime a value in the joining field on the left table does not have a corresponding value on the right table, the corresponding row in the new table has Null values for all right table fields. One way to check that all records are consistent and complete is to check for Null values post-join, as we do here.\nAside: Why not just do an inner join? If we are assuming that all records are complete and match on the field we desire, an inner join will do the same thing as an outer join. However, in the event we are not sure, an outer join followed by a null-check will catch it. 
(Comparing before/after # of rows for inner join is an equivalent approach).\nDuring the merging process, we'll print out the first few rows of the dataframe and the column names so we can keep track of how the dataframe evolves as we join with a new data source.", "df_store = pd.read_csv(path_names['store'])\nprint('store data dimension: ', df_store.shape)\ndf_store.head()\n\ndf_store_states = pd.read_csv(path_names['store_states'])\nprint('store states data dimension: ', df_store_states.shape)\ndf_store_states.head()\n\ndf_store = df_store.merge(df_store_states, on='Store', how='left')\nprint('null count: ', len(df_store[df_store['State'].isnull()]))\ndf_store.head()\n\ndf_joined_train = df_train.merge(df_store, on='Store', how='left')\ndf_joined_test = df_test.merge(df_store, on='Store', how='left')\n\nnull_count_train = len(df_joined_train[df_joined_train['StoreType'].isnull()])\nnull_count_test = len(df_joined_test[df_joined_test['StoreType'].isnull()])\nprint('null count: ', null_count_train, null_count_test)\nprint('dimension: ', df_joined_train.shape)\ndf_joined_train.head()\n\ndf_joined_train.columns\n\ndf_joined_train = df_joined_train.merge(df_weather, on=['State', 'Date'], how='left')\ndf_joined_test = df_joined_test.merge(df_weather, on=['State', 'Date'], how='left')\n\nnull_count_train = len(df_joined_train[df_joined_train['Mean_TemperatureC'].isnull()])\nnull_count_test = len(df_joined_test[df_joined_test['Mean_TemperatureC'].isnull()])\nprint('null count: ', null_count_train, null_count_test)\nprint('dimension: ', df_joined_train.shape)\ndf_joined_train.head()\n\ndf_joined_train.columns\n\ndf_joined_train = df_joined_train.merge(df_googletrend,\n on=['State', 'Year', 'Week'],\n how='left')\ndf_joined_test = df_joined_test.merge(df_googletrend,\n on=['State', 'Year', 'Week'],\n how='left')\n\nnull_count_train = len(df_joined_train[df_joined_train['trend'].isnull()])\nnull_count_test = len(df_joined_test[df_joined_test['trend'].isnull()])\nprint('null count: ', null_count_train, null_count_test)\nprint('dimension: ', df_joined_train.shape)\ndf_joined_train.head()\n\ndf_joined_train.columns\n\ndf_joined_train = df_joined_train.merge(df_trend_de,\n on=['Year', 'Week'],\n suffixes=('', '_DE'),\n how='left')\ndf_joined_test = df_joined_test.merge(df_trend_de,\n on=['Year', 'Week'],\n suffixes=('', '_DE'),\n how='left')\n\nnull_count_train = len(df_joined_train[df_joined_train['trend_DE'].isnull()])\nnull_count_test = len(df_joined_test[df_joined_test['trend_DE'].isnull()])\nprint('null count: ', null_count_train, null_count_test)\nprint('dimension: ', df_joined_train.shape)\ndf_joined_train.head()\n\ndf_joined_train.columns", "Final Data\nAfter merging all the various data source to create our master dataframe, we'll still perform some additional feature engineering steps including:\n\nSome of the rows contain missing values for some columns, we'll impute them here. What values to impute is pretty subjective then we don't really know the root cause of why it is missing, we won't spend too much time on it here. One common strategy for imputing missing categorical features is to pick an arbitrary signal value that otherwise doesn't appear in the data, e.g. -1, -999. 
Or impute it with the mean, majority value and create another column that takes on a binary value indicating whether or not that value is missing in the first place.\nCreate some duration features with Competition and Promo column.", "for df in (df_joined_train, df_joined_test):\n df['CompetitionOpenSinceYear'] = (df['CompetitionOpenSinceYear']\n .fillna(1900)\n .astype(np.int32))\n df['CompetitionOpenSinceMonth'] = (df['CompetitionOpenSinceMonth']\n .fillna(1)\n .astype(np.int32))\n df['Promo2SinceYear'] = df['Promo2SinceYear'].fillna(1900).astype(np.int32)\n df['Promo2SinceWeek'] = df['Promo2SinceWeek'].fillna(1).astype(np.int32)\n\nfor df in (df_joined_train, df_joined_test):\n df['CompetitionOpenSince'] = pd.to_datetime(dict(\n year=df['CompetitionOpenSinceYear'], \n month=df['CompetitionOpenSinceMonth'],\n day=15\n ))\n df['CompetitionDaysOpen'] = df['Date'].subtract(df['CompetitionOpenSince']).dt.days", "For the CompetitionMonthsOpen field, we limit the maximum to 2 years to limit the number of unique categories.", "for df in (df_joined_train, df_joined_test):\n df['CompetitionMonthsOpen'] = df['CompetitionDaysOpen'] // 30\n df.loc[df['CompetitionMonthsOpen'] > 24, 'CompetitionMonthsOpen'] = 24\n df.loc[df['CompetitionMonthsOpen'] < -24, 'CompetitionMonthsOpen'] = -24\n\ndf_joined_train['CompetitionMonthsOpen'].unique()", "Repeat the same process for Promo", "from isoweek import Week\n\nfor df in (df_joined_train, df_joined_test):\n df['Promo2Since'] = pd.to_datetime(df.apply(lambda x: Week(\n x.Promo2SinceYear, x.Promo2SinceWeek).monday(), axis=1))\n df['Promo2Days'] = df['Date'].subtract(df['Promo2Since']).dt.days\n\nfor df in (df_joined_train, df_joined_test):\n df['Promo2Weeks'] = df['Promo2Days'] // 7\n df.loc[df['Promo2Weeks'] < 0, 'Promo2Weeks'] = 0\n df.loc[df['Promo2Weeks'] > 25, 'Promo2Weeks'] = 25\n\ndf_joined_train['Promo2Weeks'].unique()\n\ndf_joined_train.columns", "Durations\nIt is common when working with time series data to extract features that captures relationships across rows instead of between columns. e.g. time until next event, time since last event.\nHere, we would like to compute features such as days until next promotion or days before next promotion. And the same process can be repeated for state/school holiday.", "columns = ['Date', 'Store', 'Promo', 'StateHoliday', 'SchoolHoliday']\ndf = df_joined_train[columns].append(df_joined_test[columns])\ndf['DateUnixSeconds'] = df['Date'].astype(np.int64) // 10 ** 9\ndf.head()\n\[email protected]\ndef compute_duration(store_arr, date_unix_seconds_arr, field_arr):\n \"\"\"\n For each store, track the day since/before the occurrence of a field.\n The store and date are assumed to be already sorted.\n \n Parameters\n ----------\n store_arr : 1d ndarray[int]\n\n date_unix_seconds_arr : 1d ndarray[int]\n The date should be represented in unix timestamp (seconds).\n\n field_arr : 1d ndarray[bool]/ndarray[int]\n The field that we're interested in. 
If int, it should take value\n of 1/0 indicating whether the field/event occurred or not.\n\n Returns\n -------\n result : list[int]\n Days since/before the occurrence of a field.\n \"\"\"\n result = []\n last_store = 0\n\n zipped = zip(store_arr, date_unix_seconds_arr, field_arr)\n for store, date_unix_seconds, field in zipped:\n if store != last_store:\n last_store = store\n last_date = date_unix_seconds\n\n if field:\n last_date = date_unix_seconds\n\n diff_day = (date_unix_seconds - last_date) // 86400\n result.append(diff_day)\n\n return result\n\ndf = df.sort_values(['Store', 'Date'])\n\nstart = time.time()\n\nfor col in ('SchoolHoliday', 'StateHoliday', 'Promo'):\n result = compute_duration(df['Store'].values,\n df['DateUnixSeconds'].values,\n df[col].values)\n df['After' + col] = result\n \nend = time.time()\nprint('elapsed: ', end - start)\n\ndf.head(10)", "If we look at the values in the AfterStateHoliday column, we can see that the first row of the StateHoliday column is True, therefore, the corresponding AfterStateHoliday is therefore 0 indicating it's a state holiday that day, after encountering a state holiday, the AfterStateHoliday column will start incrementing until it sees the next StateHoliday, which will then reset this counter.\nNote that for Promo, it starts out with a 0, but the AfterPromo starts accumulating until it sees the next Promo. Here, we're not exactly sure when was the last promo before 2013-01-01 since we don't have the data for it. Nonetheless we'll still start incrementing the counter. Another approach is to fill it all with 0.", "df = df.sort_values(['Store', 'Date'], ascending=[True, False])\n\nstart = time.time()\n\nfor col in ('SchoolHoliday', 'StateHoliday', 'Promo'):\n result = compute_duration(df['Store'].values,\n df['DateUnixSeconds'].values,\n df[col].values)\n df['Before' + col] = result\n \nend = time.time()\nprint('elapsed: ', end - start)\n\ndf.head(10)", "After creating these new features, we join it back to the original dataframe.", "df = df.drop(['Promo', 'StateHoliday', 'SchoolHoliday', 'DateUnixSeconds'], axis=1)\ndf_joined_train = df_joined_train.merge(df, on=['Date', 'Store'], how='inner')\ndf_joined_test = df_joined_test.merge(df, on=['Date', 'Store'], how='inner')\n\nprint('dimension: ', df_joined_train.shape)\ndf_joined_train.head()\n\ndf_joined_train.columns", "We save the cleaned data so we won't have to repeat this data preparation step again.", "output_dir = 'cleaned_data'\nif not os.path.isdir(output_dir):\n os.makedirs(output_dir, exist_ok=True)\n\nengine = 'pyarrow'\noutput_path_train = os.path.join(output_dir, 'train_clean.parquet')\noutput_path_test = os.path.join(output_dir, 'test_clean.parquet')\ndf_joined_train.to_parquet(output_path_train, engine=engine)\ndf_joined_test.to_parquet(output_path_test, engine=engine)", "Reference\n\nJupyter Notebook: Fastai Course v3 - Rossman Data Clean" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mne-tools/mne-tools.github.io
0.14/_downloads/plot_resample.ipynb
bsd-3-clause
[ "%matplotlib inline", "Resampling data\nWhen performing experiments where timing is critical, a signal with a high\nsampling rate is desired. However, having a signal with a much higher sampling\nrate than is necessary needlessly consumes memory and slows down computations\noperating on the data.\nThis example downsamples from 600 Hz to 100 Hz. This achieves a 6-fold\nreduction in data size, at the cost of an equal loss of temporal resolution.", "# Authors: Marijn van Vliet <[email protected]>\n#\n# License: BSD (3-clause)\n#\nfrom __future__ import print_function\n\nfrom matplotlib import pyplot as plt\n\nimport mne\nfrom mne.datasets import sample", "Setting up data paths and loading raw data (skip some data for speed)", "data_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'\nraw = mne.io.read_raw_fif(raw_fname).crop(120, 240).load_data()", "Since downsampling reduces the timing precision of events, we recommend\nfirst extracting epochs and downsampling the Epochs object:", "events = mne.find_events(raw)\nepochs = mne.Epochs(raw, events, event_id=2, tmin=-0.1, tmax=0.8, preload=True)\n\n# Downsample to 100 Hz\nprint('Original sampling rate:', epochs.info['sfreq'], 'Hz')\nepochs_resampled = epochs.copy().resample(100, npad='auto')\nprint('New sampling rate:', epochs_resampled.info['sfreq'], 'Hz')\n\n# Plot a piece of data to see the effects of downsampling\nplt.figure(figsize=(7, 3))\n\nn_samples_to_plot = int(0.5 * epochs.info['sfreq']) # plot 0.5 seconds of data\nplt.plot(epochs.times[:n_samples_to_plot],\n epochs.get_data()[0, 0, :n_samples_to_plot], color='black')\n\nn_samples_to_plot = int(0.5 * epochs_resampled.info['sfreq'])\nplt.plot(epochs_resampled.times[:n_samples_to_plot],\n epochs_resampled.get_data()[0, 0, :n_samples_to_plot],\n '-o', color='red')\n\nplt.xlabel('time (s)')\nplt.legend(['original', 'downsampled'], loc='best')\nplt.title('Effect of downsampling')\nmne.viz.tight_layout()", "When resampling epochs is unwanted or impossible, for example when the data\ndoesn't fit into memory or your analysis pipeline doesn't involve epochs at\nall, the alternative approach is to resample the continuous data. This\ncan also be done on non-preloaded data.", "# Resample to 300 Hz\nraw_resampled = raw.copy().resample(300, npad='auto')", "Because resampling also affects the stim channels, some trigger onsets might\nbe lost in this case. While MNE attempts to downsample the stim channels in\nan intelligent manner to avoid this, the recommended approach is to find\nevents on the original data before downsampling.", "print('Number of events before resampling:', len(mne.find_events(raw)))\n\n# Resample to 100 Hz (generates warning)\nraw_resampled = raw.copy().resample(100, npad='auto')\nprint('Number of events after resampling:',\n len(mne.find_events(raw_resampled)))\n\n# To avoid losing events, jointly resample the data and event matrix\nevents = mne.find_events(raw)\nraw_resampled, events_resampled = raw.copy().resample(\n 100, npad='auto', events=events)\nprint('Number of events after resampling:', len(events_resampled))" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
neerajdixit/car-lane-detection
car-lane-detection.ipynb
apache-2.0
[ "Import required packages", "import os\nimport math\nimport glob\nimport cv2\nfrom collections import deque\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nfrom moviepy.editor import VideoFileClip\n%matplotlib inline", "Create a utility class for camera calibration\n\nThis is used for calibrating camera and undistorting the images", "class cam_util():\n \"\"\"\n util class for camera operations\n \"\"\"\n ret = None\n mtx = None\n dist = None\n rvecs = None\n tvecs = None\n \n # Arrays to store object points and image points from all the images.\n objpoints = [] # 3d points in real world space\n imgpoints = [] # 2d points in image plane.\n \n def gen_camera_points(self):\n \"\"\"\n generate objpoints and impoints from calibration images\n \"\"\"\n objp = np.zeros((6*9,3), np.float32)\n objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)\n # Make a list of calibration images\n images = glob.glob('camera_cal/calibration*.jpg')\n # Step through the list and search for chessboard corners\n for fname in images:\n img = cv2.imread(fname)\n gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)\n # Find the chessboard corners\n ret, corners = cv2.findChessboardCorners(gray, (9,6),None)\n # If found, add object points, image points\n if ret == True:\n self.objpoints.append(objp)\n self.imgpoints.append(corners)\n \n def undistort(self, img):\n \"\"\"\n undistort an image with camera matrix\n \"\"\"\n if self.mtx is None:\n self.ret, self.mtx, self.dist, self.rvecs, self.tvecs = cv2.calibrateCamera(self.objpoints, self.imgpoints,\n img.shape[:2],None,None)\n h, w = img.shape[:2]\n newcameramtx, roi=cv2.getOptimalNewCameraMatrix(self.mtx, self.dist, (w,h), 1, (w,h))\n dst = cv2.undistort(img, self.mtx, self.dist, None, newcameramtx)\n x,y,w,h = roi\n return dst[y:y+h, x:x+w]\n \n def clean_mat(self):\n \"\"\"\n Reset camera calibration\n \"\"\"\n self.ret = None\n self.mtx = None\n self.dist = None\n self.rvecs = None\n self.tvecs = None", "Create a class to keep track of lane detections\n\nHere we use the average of last maxSamples to identify the lane", "class Line():\n \"\"\"\n class to store detected lane stats\n \"\"\"\n def __init__(self, maxSamples=15):\n \n self.maxSamples = maxSamples \n # x values of the last n fits of the line\n self.recent_xfitted = deque(maxlen=self.maxSamples)\n #polynomial coefficients for the most recent fit\n self.current_fit = [np.array([False])] \n #polynomial coefficients averaged over the last n iterations\n self.best_fit = None \n #difference in fit coefficients between last and new fits\n self.diffs = np.array([0,0,0], dtype='float')\n #average x values of the fitted line over the last n iterations\n self.bestx = None\n # was the line detected in the last iteration?\n self.detected = False \n #radius of curvature of the line in some units\n self.radius_of_curvature = None \n #distance in meters of vehicle center from the line\n self.line_base_pos = None \n \n def update_lane(self, ally, allx):\n \"\"\"\n Function to update the stats\n \"\"\"\n # get the mean as the best x \n self.bestx = np.mean(allx, axis=0)\n # fit a 2 order polynomial\n new_fit = np.polyfit(ally, allx, 2)\n # calculate the difference between last fit and new fit\n self.diffs = np.subtract(self.current_fit, new_fit)\n # update current fit\n self.current_fit = new_fit\n # add the new fit to the queue\n self.recent_xfitted.append(self.current_fit)\n # Use the queue mean as the best fit\n self.best_fit = np.mean(self.recent_xfitted, axis=0)\n # meters per pixel in y 
dimension\n ym_per_pix = 30/720\n # meters per pixel in x dimension\n xm_per_pix = 3.7/700\n # Calculate radius of curvature\n fit_cr = np.polyfit(ally*ym_per_pix, allx*xm_per_pix, 2)\n y_eval = np.max(ally)\n self.radius_of_curvature = ((1 + (2*fit_cr[0]*y_eval*ym_per_pix + fit_cr[1])**2)**1.5) / np.absolute(2*fit_cr[0])\n\n# Utility Functions\n\ndef get_roi(img, vertices):\n \"\"\"\n Apply mask and get region of interest within the mask\n \"\"\"\n mask = np.zeros_like(img)\n if len(img.shape) > 2:\n channel_count = img.shape[2]\n ignore_mask_color = (255,) * channel_count\n else:\n ignore_mask_color = 255 \n cv2.fillPoly(mask, vertices, ignore_mask_color)\n masked_image = cv2.bitwise_and(img, mask)\n return masked_image\n\ndef hide_roi(img, vertices):\n \"\"\"\n Apply mask and get region of interest outside the mask\n \"\"\"\n mask = np.zeros_like(img)\n mask=mask+255\n if len(img.shape) > 2:\n channel_count = img.shape[2]\n ignore_mask_color = (0,) * channel_count\n else:\n ignore_mask_color = 0 \n cv2.fillPoly(mask, vertices, ignore_mask_color)\n masked_image = cv2.bitwise_and(img, mask)\n return masked_image\n\ndef drow_on_images(img, vertices):\n \"\"\"\n Draw ploygon on image\n \"\"\"\n cv2.polylines(img, [vertices], True, (255,255,255), 2)\n plot_img(img, 'img drawing', True)\n\ndef plot_img(img, step, show_stages=False):\n \"\"\"\n plot image\n \"\"\"\n if show_stages:\n print('######################## '+step+' ########################')\n plt.imshow(img, cmap='gray')\n plt.show()\n \ndef plot_hist(histogram, show_stages=False):\n \"\"\"\n plot histogram\n \"\"\"\n if show_stages:\n print('######################## histogram ########################')\n plt.plot(histogram)\n plt.show()", "Use the lane pixals identified to fit a ploygon and draw it back on the original image", "def write_stats(img):\n \"\"\"\n Write lane stats on image\n \"\"\"\n font = cv2.FONT_HERSHEY_SIMPLEX\n size = 1\n weight = 2\n color = (255,70,0)\n cv2.putText(img,'Left Curve : '+ '{0:.2f}'.format(left_line.radius_of_curvature)+' m',(10,30), font, size, color, weight)\n cv2.putText(img,'Right Curve : '+ '{0:.2f}'.format(right_line.radius_of_curvature)+' m',(10,60), font, size, color, weight)\n cv2.putText(img,'Left Lane Pos: '+ '{0:.2f}'.format(left_line.bestx),(10,100), font, size, color, weight)\n cv2.putText(img,'Right Lane Pos: '+ '{0:.2f}'.format(right_line.bestx),(10,130), font, size, color, weight)\n cv2.putText(img,'Distance from center: '+ \"{0:.2f}\".format(left_line.line_base_pos)+' m',(10,180), font, size, color, weight)\n \ndef draw_lane(undist, img, Minv):\n \"\"\"\n Draw the detected lane bak on the image\n \"\"\"\n # Generate x and y values for plotting\n ploty = np.linspace(300, 700)\n # Create an image to draw the lines on\n warp_zero = np.zeros_like(img).astype(np.uint8)\n color_warp = np.dstack((warp_zero, warp_zero, warp_zero))\n \n left_fit = left_line.best_fit\n right_fit = right_line.best_fit\n \n if left_fit is not None and right_fit is not None:\n left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]\n # Recast the x and y points into usable format for cv2.fillPoly()\n pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))])\n right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]\n pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))])\n pts = np.hstack((pts_left, pts_right))\n # Draw the lane onto the warped blank image\n cv2.fillPoly(color_warp, np.int_([pts]), (20,120, 80))\n # Warp the blank back to original image 
space using inverse perspective matrix (Minv)\n newwarp = cv2.warpPerspective(color_warp, Minv, (img.shape[1], img.shape[0])) \n # Combine the result with the original image\n result = cv2.addWeighted(undist, 1, newwarp, 0.6, 0)\n write_stats(result)\n return result\n return undist", "Here we validate the detected lines and add them to the lane class\nA valid detection satisfies below rules\n\nMinmum number of pixals must be greater than 2000\nLeft lane mean should be more than a minimum\nRight lane mean should be less then a minimum\nLane width whoud be atlest 300 and atmost 800\nNew detections must be within 100px of the average of last n detections", "def validate_Update_lane(img, nonzero, nonzerox, nonzeroy, left_lane_inds, right_lane_inds, show_stages=False):\n \"\"\"\n Validate the detected lane ids and update the lane stats if valid.\n \"\"\"\n # Extract left and right line pixel positions\n left_line_allx = nonzerox[left_lane_inds]\n left_line_ally = nonzeroy[left_lane_inds] \n right_line_allx = nonzerox[right_lane_inds]\n right_line_ally = nonzeroy[right_lane_inds]\n \n # Discard the detections if any of the detected lane is less than 2000 pixals. \n # This is done because for very small size the poly fit function gives unpredictable results.\n # A better approch would be to use the largest lane curvature to extend the other one\n if len(left_line_allx) <= 2000 or len(right_line_allx) <= 2000:\n left_line.detected = False\n right_line.detected = False\n return\n \n left_x_mean = np.mean(left_line_allx, axis=0)\n right_x_mean = np.mean(right_line_allx, axis=0)\n lane_width = np.subtract(right_x_mean, left_x_mean)\n \n # Discard the detections if the lane with is too large or too small\n if left_x_mean > 450 or right_x_mean < 850:\n left_line.detected = False\n right_line.detected = False\n return\n if lane_width < 300 or lane_width > 800:\n left_line.detected = False\n right_line.detected = False\n return \n \n # Update the lane stats if the current detection is the first one or\n # the detection is within 100 pixals of the last n detection mean\n if left_line.bestx is None or np.abs(np.subtract(left_line.bestx, np.mean(left_line_allx, axis=0))) < 100:\n left_line.update_lane(left_line_ally, left_line_allx)\n left_line.detected = True\n else:\n left_line.detected = False\n if right_line.bestx is None or np.abs(np.subtract(right_line.bestx, np.mean(right_line_allx, axis=0))) < 100:\n right_line.update_lane(right_line_ally, right_line_allx)\n right_line.detected = True\n else:\n right_line.detected = False\n \n # Calculate the distance of car from center of lane\n lane_center = right_line.bestx - left_line.bestx\n left_line.line_base_pos = ((img.shape[1]*0.5 - lane_center)*3.7)/700\n right_line.line_base_pos = left_line.line_base_pos", "Find the lane using sliding window technique\n\nUse the minimum of bottom 1/4 of the histogram to find the initial left and right base\nUse the base points to find more points within a margin and min number of pixals\nUsing \nwindows size = 9 \nmargin = 80\nmin pixals = 30", "def window_search(img, nonzero, nonzerox, nonzeroy, show_stages=False):\n \"\"\"\n Perform a sliding window search to detect lane pixals.\n \"\"\"\n # Temp image to draw detections on\n out_img = np.dstack((img, img, img))*255\n # Calculate histogram\n histogram = np.sum(img[img.shape[0]*.75:,:], axis=0)\n plot_hist(histogram, show_stages)\n # Take the midpoint and use the max on each side as starting point\n midpoint = np.int(histogram.shape[0]/2)\n leftx_base = 
np.argmax(histogram[0:midpoint])\n rightx_base = np.argmax(histogram[midpoint:histogram.shape[0]]) + midpoint\n # Choose the number of sliding windows\n nwindows = 9\n # Set height of windows\n window_height = np.int(img.shape[0]/nwindows)\n # Current positions to be updated for each window\n leftx_current = leftx_base\n rightx_current = rightx_base\n # Set the width of the windows +/- margin\n margin = 80\n # Set minimum number of pixels found to recenter window\n minpix = 30\n # Create empty lists to receive left and right lane pixel indices\n left_lane_inds = []\n right_lane_inds = []\n # Step through the windows one by one\n for window in range(nwindows):\n # Identify window boundaries in x and y (and right and left)\n win_y_low = img.shape[0] - (window+1)*window_height\n win_y_high = img.shape[0] - window*window_height\n \n win_xleft_low = leftx_current - margin\n win_xleft_high = leftx_current + margin\n win_xright_low = rightx_current - margin\n win_xright_high = rightx_current + margin\n # Draw the windows on the visualization image\n cv2.rectangle(out_img,(win_xleft_low,win_y_low),(win_xleft_high,win_y_high),(0,255,0), 2) \n cv2.rectangle(out_img,(win_xright_low,win_y_low),(win_xright_high,win_y_high),(0,255,0), 2) \n \n # Identify the nonzero pixels in x and y within the window\n good_left_inds = ((nonzeroy >= win_y_low)\n & (nonzeroy < win_y_high)\n & (nonzerox >= win_xleft_low)\n & (nonzerox < win_xleft_high)).nonzero()[0]\n good_right_inds = ((nonzeroy >= win_y_low)\n & (nonzeroy < win_y_high)\n & (nonzerox >= win_xright_low)\n & (nonzerox < win_xright_high)).nonzero()[0]\n\n left_lane_inds.append(good_left_inds)\n right_lane_inds.append(good_right_inds)\n # If found > minpix pixels, recenter next window on their mean position\n if len(good_left_inds) > minpix:\n leftx_current = np.int(np.mean(nonzerox[good_left_inds])) \n if len(good_right_inds) > minpix:\n rightx_current = np.int(np.mean(nonzerox[good_right_inds]))\n \n # Concatenate the arrays of indices\n left_lane_inds = np.concatenate(left_lane_inds)\n right_lane_inds = np.concatenate(right_lane_inds)\n plot_img(out_img, 'sliding window marked', show_stages)\n return left_lane_inds, right_lane_inds", "Find Lanes Wrapper\n\n\nIf left or right lane found in the last iteration. 
Get the pixals in a margin of 30 and validate\n\n\nIf the validation fails or this is the first iteration use the sliding window technique to find lanes and then validate.", "def find_lanes(img, show_stages=False):\n \"\"\"\n Lane finding wrapper function\n \"\"\"\n # Get the foreground pixals\n nonzero = img.nonzero()\n nonzeroy = np.array(nonzero[0])\n nonzerox = np.array(nonzero[1])\n # If the last detection was successful take the non zero pixals within the 30 pixal margin as the new detections\n if left_line.detected and right_line.detected:\n margin = 30\n left_lane_inds = ((nonzerox > (left_line.current_fit[0]*(nonzeroy**2) + left_line.current_fit[1]*nonzeroy + left_line.current_fit[2] - margin))\n & (nonzerox < (left_line.current_fit[0]*(nonzeroy**2) + left_line.current_fit[1]*nonzeroy + left_line.current_fit[2] + margin))) \n right_lane_inds = ((nonzerox > (right_line.current_fit[0]*(nonzeroy**2) + right_line.current_fit[1]*nonzeroy + right_line.current_fit[2] - margin))\n & (nonzerox < (right_line.current_fit[0]*(nonzeroy**2) + right_line.current_fit[1]*nonzeroy + right_line.current_fit[2] + margin)))\n # Update the lane detections\n validate_Update_lane(img, nonzero, nonzerox, nonzeroy, left_lane_inds, right_lane_inds)\n # If first detection or the last detection was unsuccessful perform a sliding window search\n else:\n #print('doing window search')\n left_lane_inds, right_lane_inds = window_search(img, nonzero, nonzerox, nonzeroy, show_stages)\n # Update the lane detections\n validate_Update_lane(img, nonzero, nonzerox, nonzeroy, left_lane_inds, right_lane_inds)", "Warp the image to get birds' eye view\n\n\nUse source points\n\nbounding_top_right = [img_shape[1]0.5 + 90,img_shape[0]0.70]\nbounding_btm_right = [img_shape[1]*0.5 + 450,img_shape[0]]\nbounding_btm_left = [img_shape[1]*0.5 - 400,img_shape[0]]\nbounding_top_left = [img_shape[1]0.5 - 60,img_shape[0]0.70]\n\n\n\nDestinations points\n\nbounding_top_right = [img_shape[1]0.5 + 250,img_shape[0]0.60]\nbounding_btm_right = [img_shape[1]*0.5 + 390,img_shape[0]]\nbounding_btm_left = [img_shape[1]*0.5 - 345,img_shape[0]]\nbounding_top_left = [img_shape[1]0.5 - 205,img_shape[0]0.60]\n\n\n\nGet perpective transform\n\nGet inverse perpective transform\nwarp the image using perspective transform", "def warp(img):\n \"\"\"\n Warp the image to get birds eye view.\n \"\"\"\n img_shape = img.shape\n bounding_top_right = [img_shape[1]*0.5 + 90,img_shape[0]*0.70]\n bounding_btm_right = [img_shape[1]*0.5 + 450,img_shape[0]]\n bounding_btm_left = [img_shape[1]*0.5 - 400,img_shape[0]]\n bounding_top_left = [img_shape[1]*0.5 - 60,img_shape[0]*0.70]\n # Select source points\n pts1 = np.float32([bounding_top_right,bounding_btm_right,bounding_btm_left,bounding_top_left])\n # Select destination points\n pts2 = np.float32([[img_shape[1]*0.5 + 250,img_shape[0]*0.60],\n [img_shape[1]*0.5 + 390,img_shape[0]],\n [img_shape[1]*0.5 - 345,img_shape[0]],\n [img_shape[1]*0.5 - 205,img_shape[0]*0.60]])\n # Get Perspective Transform \n M = cv2.getPerspectiveTransform(pts1, pts2)\n # Get inverse Perspective Transform \n Minv = cv2.getPerspectiveTransform(pts2, pts1)\n # Apply warp transform on source image\n dst = cv2.warpPerspective(img, M, (img.shape[1], img.shape[0]), flags=cv2.INTER_LINEAR)\n return dst, Minv", "Threshold\n\nUse color threshold\nThe number of lane pixals must be considerably less than the background pixals and have a minimum value.\nWe use this to recursively increase or decrease the minimum threshold value to find the optimal 
value.\n\n\nUse Sobel operator to find gradients\nCombine the two to get the result", "def rec_threshold(img, roi, t_min=140, t_max=255):\n \"\"\"\n Funtion to apply recursive threshold with increasing/decreasing boundries\n based on the area of lane within a region of interest.\n \"\"\"\n binary = np.zeros_like(img)\n binary[(img >= t_min) & (img <= t_max)] = 1\n # retrun last val if the threshold levels reach minimum or maximum.\n if t_min <= 40 or t_min >= 220:\n return binary\n binary_1 = get_roi(binary, roi)\n #print(np.sum(binary_1.nonzero()))\n if np.sum(binary_1.nonzero()) > 9800000:\n binary = rec_threshold(img, roi, t_min+10)\n elif np.sum(binary_1.nonzero()) < 100000:\n binary = rec_threshold(img, roi, t_min-10) \n return binary\n\ndef threshold(img, roi, show_stages=False):\n \"\"\"\n Apply threshold\n \"\"\"\n # Convert image the HSV\n hsv = cv2.cvtColor(img, cv2.COLOR_RGB2HSV)\n # Take v channel\n v_channel = hsv[:,:,2]\n plot_img(v_channel, 'v channel', show_stages)\n # Apply threshold to find lane\n v_binary = rec_threshold(v_channel, roi)\n plot_img(v_binary, 'color threshold', show_stages)\n\n # Convert image to grayscale\n gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)\n # Take the derivative in x\n sobelx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)\n #sobelx = cv2.Sobel(sobelx, cv2.CV_32F, 0, 1) # Take the derivative \n #plot_img(sobelx, show_stages)\n # Absolute x derivative to \n abs_sobelx = np.absolute(sobelx)\n #accentuate lines away from horizontal\n scaled_sobel = np.uint8(255*abs_sobelx/np.max(abs_sobelx))\n #plot_img(sobelx, show_stages)\n sxbinary = np.zeros_like(scaled_sobel)\n # perform threshold\n sxbinary[(scaled_sobel >= 100) & (scaled_sobel <= 255)] = 1\n plot_img(sobelx, 'sobel', show_stages)\n color_binary = np.dstack(( np.zeros_like(sxbinary), sxbinary, v_binary))\n combined_binary = np.zeros_like(sxbinary)\n # conbine color and sobel threshold\n combined_binary[(v_binary == 1) | (sxbinary == 1)] = 1\n plot_img(combined_binary, 'combined threshold', show_stages)\n return combined_binary", "Apply all the steps\n\nUndistort the image\nApply perspective transform\nApply threshold\nFind lanes\nDraw the result back on image", "def process_image(image, show_stages=False):\n \"\"\"\n Wrapper function for all image processing\n \"\"\"\n # Undistort the image\n undistorted = cam.undistort(image)\n plot_img(undistorted, 'undistorted', show_stages)\n # Apply perpective transform\n img, Minv = warp(undistorted)\n plot_img(img, 'warped', show_stages)\n # Get points for region of interst\n vertices = np.array([[(image.shape[1]*0.1,image.shape[0]-50),\n (image.shape[1]*0.5-100,image.shape[0]*0.60),\n (image.shape[1]*0.5+100,image.shape[0]*0.60),\n (image.shape[1]*0.95,image.shape[0]-50)]], \n dtype=np.int32)\n # Apply threshold\n img = threshold(img, vertices, show_stages)\n vertices = np.array([[(200,img.shape[0]),\n (200,0),\n (1050,0),\n (1050,img.shape[0])]], dtype=np.int32)\n # Get roi\n img = get_roi(img, vertices)\n # Find Lanes\n find_lanes(img, show_stages)\n # Draw lanes on image\n res = draw_lane(undistorted, img, Minv); \n #plot_img(res, show_stages)\n return res", "Generate obj points and img points", "# init camera\ncam = cam_util()\ncam.gen_camera_points()", "Calibrate camera and undistort the chessbaord images", "# Undistort a sample calibration image\ncal_dir = \"camera_cal/\"\ncal_images = glob.glob(cal_dir+'*.jpg')\n\nfor cal_image in cal_images:\n cimg = mpimg.imread(cal_image)\n cimg_undistort = cam.undistort(cimg)\n 
cv2.imwrite('output_images/undistort_'+cal_image.split('/')[1],cimg_undistort)\nprint('calibration done')\n\n# Clean camera matrix\ncam.clean_mat()", "Test on images", "# Test on images\ntest_dir = \"test_images/\"\ntest_images = glob.glob(test_dir+'test*.jpg')\n#test_images = glob.glob(test_dir+'straight_lines*.jpg')\n#test_images = glob.glob(test_dir+'*.jpg')\nfor test_image in test_images:\n left_line = Line()\n right_line = Line()\n image = mpimg.imread(test_image)\n res = process_image(image, False)\n #plot_img(res, True)\n\nprint('######################## Sample Stages ########################')\nprint()\n# display stages for a sample image\nleft_line = Line()\nright_line = Line()\nimage = mpimg.imread('test_images/test3.jpg')\nplot_img(image, 'Initial', True)\nres = process_image(image, True)\nplot_img(res, 'Final', True)", "Test on videos", "# Test on Videos\n\n# Clean data for video\n#\"\"\"\nleft_line = Line()\nright_line = Line()\ncam.clean_mat()\nproject_video_res = 'project_video_res.mp4'\nclip1 = VideoFileClip(\"project_video.mp4\")\nproject_video_clip = clip1.fl_image(process_image)\nproject_video_clip.write_videofile(project_video_res, audio=False)\n#\"\"\"\n\n# Clean data for video\n#\"\"\"\nleft_line = Line()\nright_line = Line()\ncam.clean_mat()\nchallenge_video_res = 'challenge_video_res.mp4'\nclip2 = VideoFileClip('challenge_video.mp4')\nchallenge_video_clip = clip2.fl_image(process_image)\nchallenge_video_clip.write_videofile(challenge_video_res, audio=False)\n#\"\"\"\n\n# Clean data for video\n#\"\"\"\nleft_line = Line()\nright_line = Line()\ncam.clean_mat()\nharder_challenge_video_res = 'harder_challenge_video_res.mp4'\nclip2 = VideoFileClip('harder_challenge_video.mp4')\nharder_challenge_video_clip = clip2.fl_image(process_image)\nharder_challenge_video_clip.write_videofile(harder_challenge_video_res, audio=False)\n#\"\"\"" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
splicemachine/splice-community-sample-code
twimlcon-workshop-materials/5 - Model Governance.ipynb
apache-2.0
[ "<img src=\"Images/Splice_logo.jpeg\" width=\"250\" height=\"200\" align=\"left\" >\nUsing the Feature Store, and Database Deployment, for governance\nStart Spark Session", "#Begin spark session \nfrom pyspark.sql import SparkSession\nspark = SparkSession.builder.getOrCreate()\n\n#Create pysplice context. Allows you to create a Spark dataframe using our Native Spark DataSource \nfrom splicemachine.spark import PySpliceContext\nsplice = PySpliceContext(spark)\n\n#Initialize our Feature Store API\nfrom splicemachine.features import FeatureStore\nfrom splicemachine.features.constants import FeatureType\nfs = FeatureStore(splice)\n\n#Initialize MLFlow\nfrom splicemachine.mlflow_support import *\nmlflow.register_feature_store(fs)\nmlflow.register_splice_context(splice)", "See the distribution of predictions over time", "fs.display_model_drift('deployed_models','twimlcon_regression', 5)", "See the distribution of features at the time a model was trained, and the distribution seen by the deployed model", "fs.display_model_feature_drift('deployed_models','twimlcon_regression')", "Investigate individual predictions", "%%sql\nselect * \nfrom deployed_models.twimlcon_regression\nwhere customerid = 12526 and (eval_time >= '2020-11-01' and eval_time <= '2020-11-07')\n\nfrom splicemachine.notebook import get_mlflow_ui\nget_mlflow_ui()\n\n#tags.\"Run ID\" = {runid}\n\nspark.stop()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
makism/dyfunconn
tutorials/fMRI - 1 - Graph Analysis (Group).ipynb
bsd-3-clause
[ "%matplotlib inline", "If this tutorial we are going to use estimate the connectivity and subsequently filter them.\n\nLoad data", "import sys\nimport tqdm\n\nimport warnings\nwarnings.simplefilter(action='ignore', category=FutureWarning)\n\nimport numpy as np\nnp.set_printoptions(threshold=sys.maxsize)\n\nfmri = np.load('data/fmri_autism_ts.npy', allow_pickle=True)\nlabels = np.load('data/fmri_autism_labels.npy')\n\nnum_subjects = len(fmri)\nnum_samples, num_rois = np.shape(fmri[0])", "Compute the connectivity", "conn_mtx = np.zeros((num_subjects, num_rois, num_rois))\n\nfor subj in tqdm.tqdm(range(num_subjects)):\n fmri_ts = fmri[subj]\n conn_mtx[subj, ...] = np.corrcoef(fmri_ts.T)\n\nnp.save('data/fmri_autism_conn_mtx.npy', conn_mtx)", "Filter connectivity matrices", "thres_conn_mtx = np.zeros_like(conn_mtx)\n\nfrom dyconnmap.graphs import threshold_eco\n\nfor subj in tqdm.tqdm(range(num_subjects)):\n subj_conn_mtx = np.abs(conn_mtx[subj])\n _, CIJtree, _ = threshold_eco(subj_conn_mtx)\n \n thres_conn_mtx[subj] = CIJtree\n\nnp.save('data/fmri_autism_thres_conn_mtx.npy', thres_conn_mtx)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
PyladiesMx/Empezando-con-Python
UsoJupyter/CuadernoJupyter.ipynb
mit
[ "Bienvenid@s a Jupyter\n\nLos cuadernos de Jupyter son una herramienta interactiva que te permite preparar documentos con código ejecutable, ecuaciones, texto, imágenes, videos, entre otros, que te ayuda a enriquecer o explicar la lógica detallada de tu código. \nLos cuadernos de jupyter son comúnmente usados en: ciencia, análisis de datos y educación. Eso no significa que son de uso exclusivo para esos casos, en mi experiencia los cuadernos me han ayudado a organizar mis pensamientos y visualizar mejor los datos que analizo.\nTal vez te preguntes ¿qué puede hacer Jupyter por mí? Bueno, aquí hay unas cuántas ventajas de usar Jupyter:\n\n\nTrabajar en tu lenguaje preferido.\n\nJupyter tiene soporte para más de 40 lenguajes de programación, incluyendo los que son populares para el análisis de datos como Python (obviamente), R, Julia y Scala. Así que si por algún extraño motivo decides que no te gustó python (lo dudo :P) puedes seguir disfrutando las libretas de jupyter.\n\n\n\nCompartir tus libretas con quien quieras.\n\nEn donde quieras y como quieras. Esto significa que puedes escribir tareas y reportes en tus libretas y mandarlos a alguien específico o subirlos a tu repositorio de Github y que todo el mundo sepa lo que hiciste. Esto es como reproducibilidad al máximo...\n\n\n\nConvierte tus libretas a archivos de diferentas formatos.\n\n\nPuedes convertir tus libretas a documentos estáticos como HTML, LaTeX, PDF, Markdown, reStructuredText,etc. La documentación está en este vínculo y la forma fácil de hacerlo es:\n\n\nClick en \"File\"\n\n\nPon el cursos sobre \"Download as\"\n\n\nSelecciona la opción que prefieras.\n\n\n\n\n\n\nWidgets interactivos\n\nPuedes crear salidas con videos, barras para cambiar valores... Exploraremos esto más adelante.\n\n\n\nAhora que ya sabes lo que Jupyter es, vamos a usarlo!!\nFormato y ejecución de celdas\nlos cuadernos de Jupyter se manejan con celdas y es importante saber que cada celda puede ser de distintos tipos, unas van a ser de código (ya sea python, r, julia o el kernel deseado) y otras de markdown.\nLo primero que vas a hacer es dar click en la celda anterior y en esta para que notes que ambas son celdas de markdown.\nPuedes notar que hay formas para que en el documento aparezcan enlaces, listas, palabras resaltadas en negritas, encabezados. Aunado a esto puedes insertar fórmulas en LaTex, código en HTML, imágenes, tablas y código no ejecutable pero resaltado con la sintaxis apropiada.\nEjemplos con estilos usados normalmente\nEncabezados\nSi escribes esto:\n# Encabezado 1\n## Encabezado 2\n### Encabezado 3\n... \n###### Encabezado 6\n\nObtienes esto:\nEncabezado 1\nEncabezado 2\nEncabezado 3\n...\nEncabezado 6\nModificaciones en las letras\nPuedes hacer que las letras se vean en negritas, cursivas o tachadas de la siguiente forma:\n**Negritas** o __Negritas__\n*Cursivas* o _Cursivas_\n~~Tachado~~\n\nNegrita, Negrita,\nCursiva, Cursiva,\n~~Tachada~~\nListas\nPuedes listar objetos ya sea con puntos o con números de la siguiente forma\n- Objeto 1\n - Sub objeto\n - Otro sub objeto\n- Objeto 2\n\n1. Objeto 1\n2. Objeto 2\n\n\nObjeto 1\nsub objeto\notro sub objeto\n\n\n\nObjeto 2\n\n\nObjeto\n\nObjeto\n\nEnlaces a páginas\nPara insertar enlaces a páginas relevantes puedes hacerlo directamente copiando y pegando el url o puedes hacer que una palabra esté ligada a un vínculo de a siguiente forma\n[palabra](dirección)\n\nCheatsheet de Markdown. 
En esta página puedes encontrar la forma de hacer muchas cosas más\nEjercicio 1\nCrea una tabla que tenga como encabezados:\n\nMiembro\nEdad\nGénero\n\nDe todos los miembros de tu familia\nLas celdas por default van a estar en modo de código pero puedes cambiarla de dos formas, una en la barra que está llena de íconos hay una parte que dice Code, esta nada más tienes que hacer un click y cambiarlo a Markdown. La seguna es usando el teclado y lo que debes de hacer es seleccionar tu celda, presionar la tecla Esc, presionar la letra m y presionar Enter para empezar a escribir...\nUna vez que terminaste de escribir, puedes ejecutar cada celda ya sea presionando Shift + Enter o con el boton de \"play\" en la barra de herramientas\nYa que vimos la parte de Markdown veamos la parte de código.\nCada vez que creas una libreta nueva tu puedes decir en qué lenguaje quieres tu kernel, en nuestro caso sólo veremos python 3 porque no hemos instalado ningún otro kernel para jupyter pero si te interesa puedes ver cómo hacerlo aquí Así que el código que veremos a continuación es en python.", "# Lo primero que ejecutarás será 'Hola Jupyter'\nprint('Hola a Todos')", "Cada celda la puedes usar para escribir el código que tu quieras y si de repente se te olvida alguna función o tienes duda de si el nombre es correcto IPython es muy amable en ese sentido. \nPara saber acerca de una función, es decir cuál es su salida o los parámetros que necesita puedes usar el signo de interrogación al final del nombre de la función.\nEjercicio 2\nEn la siguiente celda busca las siguientes funciones: sum, max, round, mean. No olvides ejecutar la celda después de haber escrito las funciones.", "sum?\n\nmax?\n\nround?\n\nmean?", "Como te pudiste dar cuenta, cuando no encuentra la función te da un error...\nEn IPython, y por lo tanto en Jupyter, hay una utilidad de completar con Tab. Esto quiere decir que si tu empiezas a escribir el nombre de una variable, función o atributo, no tienes que escribirlo todo, puedes empezar con unas cuantas letras y automáticamente (si es lo único que empieza de esa forma) lo va a completar por ti. Todos los flojos y/o olvidadizos amamos esta herramienta. En caso de que haya varias opciones no se va a completar, pero si lo vuelves a oprimir te va a mostrar en la celda todas las opciones que tienes...", "variable = 50\nsaludo = 'Hola'", "Ejercicio 3\nEmpieza a escribir las primeras tres letras de cada elemento de la celda anterior y presiona tab para ver si se puede autocompletar", "vars?", "También hay funciones mágicas que nos permitirán hacer diversas tareas como mostrar las gráficas que se produzcan en el código dentro de una celda, medir el tiempo de ejecución del código y cambiar del directorio de trabajo, entre otras.\npara ver qué funciones mágicas hay en Jupyter sólo tienes que escribir\npython\n%magic\nTodas las funciones \"mágicas\" empiezan con el signo de porcentaje %", "%magic", "Gráficas\nAhora veremos unos ejemplos de gráficas y cómo hacerlas interactivas. Estos ejemplos fueron tomados de la libreta para demostración de nature", "# Importa matplotlib (paquete para graficar) y numpy (paquete para arreglos).\n# Fíjate en el la función mágica para que aparezca nuestra gráfica en la celda.\n%matplotlib inline\nimport matplotlib.pyplot as plt \nimport numpy as np\n\n# Crea un arreglo de 30 valores para x que va de 0 a 5. 
\nx = np.linspace(0, 5, 30)\ny = np.sin(x)\n\n# grafica y versus x\nfig, ax = plt.subplots(nrows=1, ncols=1)\nax.plot(x, y, color='red')\nax.set_xlabel('x')\nax.set_ylabel('y')\nax.set_title('A simple graph of $y=x^2$')", "La gráfica que estás viendo sigue la siguiente ecuación $$y=x^2$$\nEjercicio 4\nEdita el código de arriba y vuélvelo a correr pero ahora intenta reemplazar la expresión: \ny = x**2 \ncon:\ny=np.sin(x)\nGráficas interactivas", "# Importa matplotlib y numpy \n# con la misma \"magia\".\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n# Importa la función interactiva de IPython usada\n# para construir los widgets interactivos\nfrom IPython.html.widgets import interact\n\ndef plot_sine(frequency=4.0, grid_points=12, plot_original=True):\n \"\"\"\n Grafica muestras discretas de una curva sinoidal en ``[0, 1]``.\n \"\"\"\n x = np.linspace(0, 1, grid_points + 2)\n y = np.sin(2 * frequency * np.pi * x)\n\n xf = np.linspace(0, 1, 1000)\n yf = np.sin(2 * frequency * np.pi * xf)\n\n fig, ax = plt.subplots(figsize=(8, 6))\n ax.set_xlabel('x')\n ax.set_ylabel('signal')\n ax.set_title('Aliasing in discretely sampled periodic signal')\n\n if plot_original:\n ax.plot(xf, yf, color='red', linestyle='solid', linewidth=2)\n\n ax.plot(x, y, marker='o', linewidth=2)\n\n# la función interactiva construye automáticamente una interfase de usuario para explorar\n# la gráfica de la función de seno.\ninteract(plot_sine, frequency=(1.0, 22.0, 0.5), grid_points=(10, 16, 1), plot_original=True)", "Esto apenas es una probadita de todo lo que se puede hacer con Jupyter y python. Espero que les haya gustado y los exhorto a conocer más acerca de estas funciones.\nGracias por su atención" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mne-tools/mne-tools.github.io
0.24/_downloads/9a8829bce25fe2d64489ddf54437cd3c/30_info.ipynb
bsd-3-clause
[ "%matplotlib inline", "The Info data structure\nThis tutorial describes the :class:mne.Info data structure, which keeps track\nof various recording details, and is attached to :class:~mne.io.Raw,\n:class:~mne.Epochs, and :class:~mne.Evoked objects.\nWe will begin by loading the Python modules we need, and loading the same\nexample data &lt;sample-dataset&gt; we used in the introductory tutorial\n&lt;tut-overview&gt;:", "import os\nimport mne\n\nsample_data_folder = mne.datasets.sample.data_path()\nsample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',\n 'sample_audvis_filt-0-40_raw.fif')\nraw = mne.io.read_raw_fif(sample_data_raw_file)", "As seen in the introductory tutorial &lt;tut-overview&gt;, when a\n:class:~mne.io.Raw object is loaded, an :class:~mne.Info object is\ncreated automatically, and stored in the raw.info attribute:", "print(raw.info)", "However, it is not strictly necessary to load the :class:~mne.io.Raw object\nin order to view or edit the :class:~mne.Info object; you can extract all\nthe relevant information into a stand-alone :class:~mne.Info object using\n:func:mne.io.read_info:", "info = mne.io.read_info(sample_data_raw_file)\nprint(info)", "As you can see, the :class:~mne.Info object keeps track of a lot of\ninformation about:\n\nthe recording system (gantry angle, HPI details, sensor digitizations,\n channel names, ...)\nthe experiment (project name and ID, subject information, recording date,\n experimenter name or ID, ...)\nthe data (sampling frequency, applied filter frequencies, bad channels,\n projectors, ...)\n\nThe complete list of fields is given in :class:the API documentation\n&lt;mne.Info&gt;.\nQuerying the Info object\nThe fields in a :class:~mne.Info object act like Python :class:dictionary\n&lt;dict&gt; keys, using square brackets and strings to access the contents of a\nfield:", "print(info.keys())\nprint() # insert a blank line\nprint(info['ch_names'])", "Most of the fields contain :class:int, :class:float, or :class:list\ndata, but the chs field bears special mention: it contains a list of\ndictionaries (one :class:dict per channel) containing everything there is\nto know about a channel other than the data it recorded. Normally it is not\nnecessary to dig into the details of the chs field — various MNE-Python\nfunctions can extract the information more cleanly than iterating over the\nlist of dicts yourself — but it can be helpful to know what is in there. Here\nwe show the keys for the first channel's :class:dict:", "print(info['chs'][0].keys())", "Obtaining subsets of channels\nIt is often useful to convert between channel names and the integer indices\nidentifying rows of the data array where those channels' measurements are\nstored. The :class:~mne.Info object is useful for this task; two\nconvenience functions that rely on the :class:mne.Info object for picking\nchannels are :func:mne.pick_channels and :func:mne.pick_types.\n:func:~mne.pick_channels minimally takes a list of all channel names and a\nlist of channel names to include; it is also possible to provide an empty\nlist to include and specify which channels to exclude instead:", "print(mne.pick_channels(info['ch_names'], include=['MEG 0312', 'EEG 005']))\n\nprint(mne.pick_channels(info['ch_names'], include=[],\n exclude=['MEG 0312', 'EEG 005']))", ":func:~mne.pick_types works differently, since channel type cannot always\nbe reliably determined from channel name alone. 
Consequently,\n:func:~mne.pick_types needs an :class:~mne.Info object instead of just a\nlist of channel names, and has boolean keyword arguments for each channel\ntype. Default behavior is to pick only MEG channels (and MEG reference\nchannels if present) and exclude any channels already marked as \"bad\" in the\nbads field of the :class:~mne.Info object. Therefore, to get all and\nonly the EEG channel indices (including the \"bad\" EEG channels) we must\npass meg=False and exclude=[]:", "print(mne.pick_types(info, meg=False, eeg=True, exclude=[]))", "Note that the meg and fnirs parameters of :func:~mne.pick_types\naccept strings as well as boolean values, to allow selecting only\nmagnetometer or gradiometer channels (via meg='mag' or meg='grad') or\nto pick only oxyhemoglobin or deoxyhemoglobin channels (via fnirs='hbo'\nor fnirs='hbr', respectively).\nA third way to pick channels from an :class:~mne.Info object is to apply\nregular expression_ matching to the channel names using\n:func:mne.pick_channels_regexp. Here the ^ represents the beginning of\nthe string and . character matches any single character, so both EEG and\nEOG channels will be selected:", "print(mne.pick_channels_regexp(info['ch_names'], '^E.G'))", ":func:~mne.pick_channels_regexp can be especially useful for channels named\naccording to the 10-20 &lt;ten-twenty_&gt;_ system (e.g., to select all channels\nending in \"z\" to get the midline, or all channels beginning with \"O\" to get\nthe occipital channels). Note that :func:~mne.pick_channels_regexp uses the\nPython standard module :mod:re to perform regular expression matching; see\nthe documentation of the :mod:re module for implementation details.\n<div class=\"alert alert-danger\"><h4>Warning</h4><p>Both :func:`~mne.pick_channels` and :func:`~mne.pick_channels_regexp`\n operate on lists of channel names, so they are unaware of which channels\n (if any) have been marked as \"bad\" in ``info['bads']``. Use caution to\n avoid accidentally selecting bad channels.</p></div>\n\nObtaining channel type information\nSometimes it can be useful to know channel type based on its index in the\ndata array. 
For this case, use :func:mne.channel_type, which takes\nan :class:~mne.Info object and a single integer channel index:", "print(mne.channel_type(info, 25))", "To obtain several channel types at once, you could embed\n:func:~mne.channel_type in a :term:list comprehension, or use the\n:meth:~mne.io.Raw.get_channel_types method of a :class:~mne.io.Raw,\n:class:~mne.Epochs, or :class:~mne.Evoked instance:", "picks = (25, 76, 77, 319)\nprint([mne.channel_type(info, x) for x in picks])\nprint(raw.get_channel_types(picks=picks))", "Alternatively, you can get the indices of all channels of all channel types\npresent in the data, using :func:~mne.channel_indices_by_type,\nwhich returns a :class:dict with channel types as keys, and lists of\nchannel indices as values:", "ch_idx_by_type = mne.channel_indices_by_type(info)\nprint(ch_idx_by_type.keys())\nprint(ch_idx_by_type['eog'])", "Dropping channels from an Info object\nIf you want to modify an :class:~mne.Info object by eliminating some of the\nchannels in it, you can use the :func:mne.pick_info function to pick the\nchannels you want to keep and omit the rest:", "print(info['nchan'])\neeg_indices = mne.pick_types(info, meg=False, eeg=True)\nprint(mne.pick_info(info, eeg_indices)['nchan'])", "We can also get a nice HTML representation in IPython like:", "info", "By default, :func:~mne.pick_info will make a copy of the original\n:class:~mne.Info object before modifying it; if you want to modify it\nin-place, include the parameter copy=False.\n.. LINKS" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
louisdorard/bml-base
credentials/Test.ipynb
mit
[ "Credentials\nMake sure to go through Setup first!\nLet's check that the environment variables have been set... We'll just try one:", "GPRED_PROJECT_ID = %env GPRED_PROJECT_ID", "Google Cloud Storage\nLet's see if we can create a bucket with boto (using credentials, project ID, etc. specified in boto config file)...", "import datetime\nnow = datetime.datetime.now()\nBUCKET_NAME = 'test_' + GPRED_PROJECT_ID + now.strftime(\"%Y-%m-%d\") # lower case letters required, no upper case allowed\n\nimport boto\nimport gcs_oauth2_boto_plugin\nproject_id = %env GPRED_PROJECT_ID\nheader_values = {\"x-goog-project-id\": project_id}\nboto.storage_uri(BUCKET_NAME, 'gs').create_bucket(headers=header_values)", "Listing existing buckets...", "uri = boto.storage_uri('', 'gs')\n# If the default project is defined, call get_all_buckets() without arguments.\nfor bucket in uri.get_all_buckets(headers=header_values):\n print bucket.name", "Upload a file to the new bucket", "import os\nos.system(\"echo 'hello!' > newfile\")\nfilename = 'newfile'\nboto.storage_uri(BUCKET_NAME + '/' + filename, 'gs').new_key().set_contents_from_file(open(filename))", "See contents of the bucket on the web interface (URL will be outputted below)", "print \"https://console.developers.google.com/project/\" + project_id + \"/storage/browser/\" + BUCKET_NAME + \"/?authuser=0\"", "Google Prediction\nInitialize API wrapper", "import googleapiclient.gpred as gpred\noauth_file = %env GPRED_OAUTH_FILE\napi = gpred.api(oauth_file)", "Making predictions against a hosted model\nLet's use the sample.sentiment hosted model (made publicly available by Google)", "# projectname has to be 414649711441\nprediction_request = api.hostedmodels().predict(project='414649711441',\n hostedModelName='sample.sentiment',\n body={'input': {'csvInstance': ['I hate that stuff is so stupid']}})\n\nresult = prediction_request.execute()\n# We can print the raw result\nprint result" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Naereen/notebooks
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
mit
[ "<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#TP-2---Programmation-pour-la-préparation-à-l'agrégation-maths-option-info\" data-toc-modified-id=\"TP-2---Programmation-pour-la-préparation-à-l'agrégation-maths-option-info-1\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>TP 2 - Programmation pour la préparation à l'agrégation maths option info</a></span></li><li><span><a href=\"#Listes\" data-toc-modified-id=\"Listes-2\"><span class=\"toc-item-num\">2&nbsp;&nbsp;</span>Listes</a></span><ul class=\"toc-item\"><li><span><a href=\"#Exercice-1-:-taille\" data-toc-modified-id=\"Exercice-1-:-taille-2.1\"><span class=\"toc-item-num\">2.1&nbsp;&nbsp;</span>Exercice 1 : <code>taille</code></a></span></li><li><span><a href=\"#Exercice-2-:-concat\" data-toc-modified-id=\"Exercice-2-:-concat-2.2\"><span class=\"toc-item-num\">2.2&nbsp;&nbsp;</span>Exercice 2 : <code>concat</code></a></span></li><li><span><a href=\"#Exercice-3-:-appartient\" data-toc-modified-id=\"Exercice-3-:-appartient-2.3\"><span class=\"toc-item-num\">2.3&nbsp;&nbsp;</span>Exercice 3 : <code>appartient</code></a></span></li><li><span><a href=\"#Exercice-4-:-miroir\" data-toc-modified-id=\"Exercice-4-:-miroir-2.4\"><span class=\"toc-item-num\">2.4&nbsp;&nbsp;</span>Exercice 4 : <code>miroir</code></a></span></li><li><span><a href=\"#Exercice-5-:-alterne\" data-toc-modified-id=\"Exercice-5-:-alterne-2.5\"><span class=\"toc-item-num\">2.5&nbsp;&nbsp;</span>Exercice 5 : <code>alterne</code></a></span></li><li><span><a href=\"#Exercice-6-:-nb_occurrences\" data-toc-modified-id=\"Exercice-6-:-nb_occurrences-2.6\"><span class=\"toc-item-num\">2.6&nbsp;&nbsp;</span>Exercice 6 : <code>nb_occurrences</code></a></span></li><li><span><a href=\"#Exercice-7-:-pairs\" data-toc-modified-id=\"Exercice-7-:-pairs-2.7\"><span class=\"toc-item-num\">2.7&nbsp;&nbsp;</span>Exercice 7 : <code>pairs</code></a></span></li><li><span><a href=\"#Exercice-8-:-range\" data-toc-modified-id=\"Exercice-8-:-range-2.8\"><span class=\"toc-item-num\">2.8&nbsp;&nbsp;</span>Exercice 8 : <code>range</code></a></span></li><li><span><a href=\"#Exercice-9-:-premiers\" data-toc-modified-id=\"Exercice-9-:-premiers-2.9\"><span class=\"toc-item-num\">2.9&nbsp;&nbsp;</span>Exercice 9 : <code>premiers</code></a></span></li></ul></li><li><span><a href=\"#Listes-simplement-chaînée-(manuellement-définies)\" data-toc-modified-id=\"Listes-simplement-chaînée-(manuellement-définies)-3\"><span class=\"toc-item-num\">3&nbsp;&nbsp;</span>Listes simplement chaînée (manuellement définies)</a></span><ul class=\"toc-item\"><li><span><a href=\"#La-classe-ListeChainee\" data-toc-modified-id=\"La-classe-ListeChainee-3.1\"><span class=\"toc-item-num\">3.1&nbsp;&nbsp;</span>La classe <code>ListeChainee</code></a></span></li><li><span><a href=\"#Exercice-1-:-taille\" data-toc-modified-id=\"Exercice-1-:-taille-3.2\"><span class=\"toc-item-num\">3.2&nbsp;&nbsp;</span>Exercice 1 : <code>taille</code></a></span></li><li><span><a href=\"#Exercice-2-:-concat\" data-toc-modified-id=\"Exercice-2-:-concat-3.3\"><span class=\"toc-item-num\">3.3&nbsp;&nbsp;</span>Exercice 2 : <code>concat</code></a></span></li><li><span><a href=\"#Exercice-3-:-appartient\" data-toc-modified-id=\"Exercice-3-:-appartient-3.4\"><span class=\"toc-item-num\">3.4&nbsp;&nbsp;</span>Exercice 3 : <code>appartient</code></a></span></li><li><span><a href=\"#Exercice-4-:-miroir\" data-toc-modified-id=\"Exercice-4-:-miroir-3.5\"><span 
class=\"toc-item-num\">3.5&nbsp;&nbsp;</span>Exercice 4 : <code>miroir</code></a></span></li><li><span><a href=\"#Exercice-5-:-alterne\" data-toc-modified-id=\"Exercice-5-:-alterne-3.6\"><span class=\"toc-item-num\">3.6&nbsp;&nbsp;</span>Exercice 5 : <code>alterne</code></a></span></li><li><span><a href=\"#Exercice-6-:-nb_occurrences\" data-toc-modified-id=\"Exercice-6-:-nb_occurrences-3.7\"><span class=\"toc-item-num\">3.7&nbsp;&nbsp;</span>Exercice 6 : <code>nb_occurrences</code></a></span></li><li><span><a href=\"#Exercice-7-:-pairs\" data-toc-modified-id=\"Exercice-7-:-pairs-3.8\"><span class=\"toc-item-num\">3.8&nbsp;&nbsp;</span>Exercice 7 : <code>pairs</code></a></span></li><li><span><a href=\"#Exercice-8-:-range\" data-toc-modified-id=\"Exercice-8-:-range-3.9\"><span class=\"toc-item-num\">3.9&nbsp;&nbsp;</span>Exercice 8 : <code>range</code></a></span></li><li><span><a href=\"#Exercice-9-:-premiers\" data-toc-modified-id=\"Exercice-9-:-premiers-3.10\"><span class=\"toc-item-num\">3.10&nbsp;&nbsp;</span>Exercice 9 : <code>premiers</code></a></span></li></ul></li><li><span><a href=\"#Quelques-tris-par-comparaison\" data-toc-modified-id=\"Quelques-tris-par-comparaison-4\"><span class=\"toc-item-num\">4&nbsp;&nbsp;</span>Quelques tris par comparaison</a></span><ul class=\"toc-item\"><li><span><a href=\"#Exercice-10-:-Tri-insertion\" data-toc-modified-id=\"Exercice-10-:-Tri-insertion-4.1\"><span class=\"toc-item-num\">4.1&nbsp;&nbsp;</span>Exercice 10 : Tri insertion</a></span></li><li><span><a href=\"#Exercice-11-:-Tri-insertion-générique\" data-toc-modified-id=\"Exercice-11-:-Tri-insertion-générique-4.2\"><span class=\"toc-item-num\">4.2&nbsp;&nbsp;</span>Exercice 11 : Tri insertion générique</a></span></li><li><span><a href=\"#Exercice-12-:-Tri-selection\" data-toc-modified-id=\"Exercice-12-:-Tri-selection-4.3\"><span class=\"toc-item-num\">4.3&nbsp;&nbsp;</span>Exercice 12 : Tri selection</a></span></li><li><span><a href=\"#Exercices-13,-14,-15-:-Tri-fusion\" data-toc-modified-id=\"Exercices-13,-14,-15-:-Tri-fusion-4.4\"><span class=\"toc-item-num\">4.4&nbsp;&nbsp;</span>Exercices 13, 14, 15 : Tri fusion</a></span></li><li><span><a href=\"#Comparaisons\" data-toc-modified-id=\"Comparaisons-4.5\"><span class=\"toc-item-num\">4.5&nbsp;&nbsp;</span>Comparaisons</a></span></li></ul></li><li><span><a href=\"#Listes-:-l'ordre-supérieur\" data-toc-modified-id=\"Listes-:-l'ordre-supérieur-5\"><span class=\"toc-item-num\">5&nbsp;&nbsp;</span>Listes : l'ordre supérieur</a></span><ul class=\"toc-item\"><li><span><a href=\"#Exercice-16-:-applique\" data-toc-modified-id=\"Exercice-16-:-applique-5.1\"><span class=\"toc-item-num\">5.1&nbsp;&nbsp;</span>Exercice 16 : <code>applique</code></a></span></li><li><span><a href=\"#Exercice-17\" data-toc-modified-id=\"Exercice-17-5.2\"><span class=\"toc-item-num\">5.2&nbsp;&nbsp;</span>Exercice 17</a></span></li><li><span><a href=\"#Exercice-18-:-itere\" data-toc-modified-id=\"Exercice-18-:-itere-5.3\"><span class=\"toc-item-num\">5.3&nbsp;&nbsp;</span>Exercice 18 : <code>itere</code></a></span></li><li><span><a href=\"#Exercice-19\" data-toc-modified-id=\"Exercice-19-5.4\"><span class=\"toc-item-num\">5.4&nbsp;&nbsp;</span>Exercice 19</a></span></li><li><span><a href=\"#Exercice-20-:-qqsoit-et-ilexiste\" data-toc-modified-id=\"Exercice-20-:-qqsoit-et-ilexiste-5.5\"><span class=\"toc-item-num\">5.5&nbsp;&nbsp;</span>Exercice 20 : <code>qqsoit</code> et <code>ilexiste</code></a></span></li><li><span><a href=\"#Exercice-21-:-appartient-version-2\" 
data-toc-modified-id=\"Exercice-21-:-appartient-version-2-5.6\"><span class=\"toc-item-num\">5.6&nbsp;&nbsp;</span>Exercice 21 : <code>appartient</code> version 2</a></span></li><li><span><a href=\"#Exercice-22-:-filtre\" data-toc-modified-id=\"Exercice-22-:-filtre-5.7\"><span class=\"toc-item-num\">5.7&nbsp;&nbsp;</span>Exercice 22 : <code>filtre</code></a></span></li><li><span><a href=\"#Exercice-23\" data-toc-modified-id=\"Exercice-23-5.8\"><span class=\"toc-item-num\">5.8&nbsp;&nbsp;</span>Exercice 23</a></span></li><li><span><a href=\"#Exercice-24-:-reduit\" data-toc-modified-id=\"Exercice-24-:-reduit-5.9\"><span class=\"toc-item-num\">5.9&nbsp;&nbsp;</span>Exercice 24 : <code>reduit</code></a></span></li><li><span><a href=\"#Exercice-25-:-somme,-produit\" data-toc-modified-id=\"Exercice-25-:-somme,-produit-5.10\"><span class=\"toc-item-num\">5.10&nbsp;&nbsp;</span>Exercice 25 : <code>somme</code>, <code>produit</code></a></span></li><li><span><a href=\"#Exercice-26-:-miroir-version-2\" data-toc-modified-id=\"Exercice-26-:-miroir-version-2-5.11\"><span class=\"toc-item-num\">5.11&nbsp;&nbsp;</span>Exercice 26 : <code>miroir</code> version 2</a></span></li></ul></li><li><span><a href=\"#Arbres\" data-toc-modified-id=\"Arbres-6\"><span class=\"toc-item-num\">6&nbsp;&nbsp;</span>Arbres</a></span><ul class=\"toc-item\"><li><span><a href=\"#Exercice-27\" data-toc-modified-id=\"Exercice-27-6.1\"><span class=\"toc-item-num\">6.1&nbsp;&nbsp;</span>Exercice 27</a></span></li><li><span><a href=\"#Exercice-28\" data-toc-modified-id=\"Exercice-28-6.2\"><span class=\"toc-item-num\">6.2&nbsp;&nbsp;</span>Exercice 28</a></span></li><li><span><a href=\"#Exercice-29\" data-toc-modified-id=\"Exercice-29-6.3\"><span class=\"toc-item-num\">6.3&nbsp;&nbsp;</span>Exercice 29</a></span></li><li><span><a href=\"#Exercice-30\" data-toc-modified-id=\"Exercice-30-6.4\"><span class=\"toc-item-num\">6.4&nbsp;&nbsp;</span>Exercice 30</a></span></li></ul></li><li><span><a href=\"#Parcours-d'arbres-binaires\" data-toc-modified-id=\"Parcours-d'arbres-binaires-7\"><span class=\"toc-item-num\">7&nbsp;&nbsp;</span>Parcours d'arbres binaires</a></span><ul class=\"toc-item\"><li><span><a href=\"#Exercice-31\" data-toc-modified-id=\"Exercice-31-7.1\"><span class=\"toc-item-num\">7.1&nbsp;&nbsp;</span>Exercice 31</a></span></li><li><span><a href=\"#Exercice-32-:-Parcours-naifs-(complexité-quadratique)\" data-toc-modified-id=\"Exercice-32-:-Parcours-naifs-(complexité-quadratique)-7.2\"><span class=\"toc-item-num\">7.2&nbsp;&nbsp;</span>Exercice 32 : Parcours naifs (complexité quadratique)</a></span></li><li><span><a href=\"#Exercice-33-:-Parcours-linéaires\" data-toc-modified-id=\"Exercice-33-:-Parcours-linéaires-7.3\"><span class=\"toc-item-num\">7.3&nbsp;&nbsp;</span>Exercice 33 : Parcours linéaires</a></span></li><li><span><a href=\"#Exercice-34-:-parcours-en-largeur-et-en-profondeur\" data-toc-modified-id=\"Exercice-34-:-parcours-en-largeur-et-en-profondeur-7.4\"><span class=\"toc-item-num\">7.4&nbsp;&nbsp;</span>Exercice 34 : parcours en largeur et en profondeur</a></span></li><li><span><a href=\"#Exercice-35-et-fin\" data-toc-modified-id=\"Exercice-35-et-fin-7.5\"><span class=\"toc-item-num\">7.5&nbsp;&nbsp;</span>Exercice 35 et fin</a></span><ul class=\"toc-item\"><li><span><a href=\"#Reconstruction-depuis-le-parcours-prefixe\" data-toc-modified-id=\"Reconstruction-depuis-le-parcours-prefixe-7.5.1\"><span class=\"toc-item-num\">7.5.1&nbsp;&nbsp;</span>Reconstruction depuis le parcours 
prefixe</a></span></li><li><span><a href=\"#Reconstruction-depuis-le-parcours-en-largeur\" data-toc-modified-id=\"Reconstruction-depuis-le-parcours-en-largeur-7.5.2\"><span class=\"toc-item-num\">7.5.2&nbsp;&nbsp;</span>Reconstruction depuis le parcours en largeur</a></span></li></ul></li></ul></li><li><span><a href=\"#Conclusion\" data-toc-modified-id=\"Conclusion-8\"><span class=\"toc-item-num\">8&nbsp;&nbsp;</span>Conclusion</a></span></li></ul></div>\n\nTP 2 - Programmation pour la préparation à l'agrégation maths option info\n\nEn Python, version 3.", "from sys import version\nprint(version)", "Listes\nCes exercices sont un peu foireux : les \"listes\" en Python ne sont pas des listes simplement chaînées !\nExercice 1 : taille", "from typing import TypeVar, List\n_a = TypeVar('alpha')\n\ndef taille(liste : List[_a]) -> int:\n longueur = 0\n for _ in liste:\n longueur += 1\n return longueur\n\ntaille([])\ntaille([1, 2, 3])\n\nlen([])\nlen([1, 2, 3])", "Exercice 2 : concat", "from typing import TypeVar, List\n_a = TypeVar('alpha')\n\ndef concatene(liste1 : List[_a], liste2 : List[_a]) -> List[_a]:\n # return liste1 + liste2 # easy solution\n liste = []\n for i in liste1:\n liste.append(i)\n for i in liste2:\n liste.append(i)\n return liste\n\nconcatene([1, 2], [3, 4])\n[1, 2] + [3, 4]", "Mais attention le typage est toujours optionnel en Python :", "concatene([1, 2], [\"pas\", \"entier\", \"?\"])", "Exercice 3 : appartient", "from typing import TypeVar, List\n_a = TypeVar('alpha')\n\ndef appartient(x : _a, liste : List[_a]) -> bool:\n for y in liste:\n if x == y:\n return True # on stoppe avant la fin\n return False\n\nappartient(1, [])\nappartient(1, [1])\nappartient(1, [1, 2, 3])\nappartient(4, [1, 2, 3])\n\n1 in []\n1 in [1]\n1 in [1, 2, 3]\n4 in [1, 2, 3]", "Notre implémentation est évidemment plus lente que le test x in liste de la librarie standard...\nMais pas tant :", "%timeit appartient(1000, list(range(10000)))\n%timeit 1000 in list(range(10000))", "Exercice 4 : miroir", "from typing import TypeVar, List\n_a = TypeVar('alpha')\n\ndef miroir(liste : List[_a]) -> List[_a]:\n # return liste[::-1] # version facile\n liste2 = []\n for x in liste:\n liste2.insert(0, x)\n return liste2\n\nmiroir([2, 3, 5, 7, 11])\n[2, 3, 5, 7, 11][::-1]\n\n%timeit miroir([2, 3, 5, 7, 11])\n%timeit [2, 3, 5, 7, 11][::-1]", "Exercice 5 : alterne\nLa sémantique n'était pas très claire, mais on peut imaginer quelque chose comme ça :", "from typing import TypeVar, List\n_a = TypeVar('alpha')\n\ndef alterne(liste1 : List[_a], liste2 : List[_a]) -> List[_a]:\n liste3 = []\n i, j = 0, 0\n n, m = len(liste1), len(liste2)\n while i < n and j < m: # encore deux\n liste3.append(liste1[i])\n i += 1\n liste3.append(liste2[j])\n j += 1\n while i < n: # si n > m\n liste3.append(liste1[i])\n i += 1\n while j < m: # ou si n < m\n liste3.append(liste2[j])\n j += 1\n return liste3\n\nalterne([3, 5], [2, 4, 6])\nalterne([1, 3, 5], [2, 4, 6])\nalterne([1, 3, 5], [4, 6])", "La complexité est linéaire en $\\mathcal{O}(\\max(|\\text{liste 1}|, |\\text{liste 2}|)$.\nExercice 6 : nb_occurrences", "from typing import TypeVar, List\n_a = TypeVar('alpha')\n\ndef nb_occurrences(x : _a, liste : List[_a]) -> int:\n nb = 0\n for y in liste:\n if x == y:\n nb += 1\n return nb\n\nnb_occurrences(0, [1, 2, 3, 4])\nnb_occurrences(2, [1, 2, 3, 4])\nnb_occurrences(2, [1, 2, 2, 3, 2, 4])\nnb_occurrences(5, [1, 2, 3, 4])", "Exercice 7 : pairs\nC'est un filtrage :", "filter?\n\nfrom typing import List\n\ndef pairs(liste : List[int]) -> 
List[int]:\n # return list(filter(lambda x : x % 2 == 0, liste))\n return [x for x in liste if x % 2 == 0]\n\npairs([1, 2, 3, 4, 5, 6])\npairs([1, 2, 3, 4, 5, 6, 7, 100000])\npairs([1, 2, 3, 4, 5, 6, 7, 100000000000])\npairs([1, 2, 3, 4, 5, 6, 7, 1000000000000000000])", "Exercice 8 : range", "from typing import List\n\ndef myrange(n : int) -> List[int]:\n liste = []\n i = 1\n while i <= n:\n liste.append(i)\n i += 1\n return liste\n\nmyrange(4)\n\nfrom typing import List\n\ndef intervale(a : int, b : int=None) -> List[int]:\n if b == None:\n a, b = 1, a\n liste = []\n i = a\n while i <= b:\n liste.append(i)\n i += 1\n return liste\n\nintervale(10)\nintervale(1, 4)", "Exercice 9 : premiers\nPlusieurs possibilités. Un filtre d'Erathosthène marche bien, ou une filtration.\nJe ne vais pas utiliser de tableaux donc on est un peu réduit d'utiliser une filtration (filtrage ? pattern matching)", "def racine(n : int) -> int:\n i = 1\n for i in range(n + 1):\n if i*i > n:\n return i - 1\n return i\n\nracine(1)\nracine(5)\nracine(102)\nracine(120031)\n\nfrom typing import List\n\ndef intervale2(a : int, b : int, pas : int=1) -> List[int]:\n assert pas > 0\n liste = []\n i = a\n while i <= b:\n liste.append(i)\n i += pas\n return liste\n\nintervale2(2, 12, 1)\nintervale2(2, 12, 3)", "Une version purement fonctionnelle est moins facile qu'une version impérative avec une référence booléenne.", "def estDivisible(n : int, k : int) -> bool:\n return (n % k) == 0\n\nestDivisible(10, 2)\nestDivisible(10, 3)\nestDivisible(10, 4)\nestDivisible(10, 5)\n\ndef estPremier(n : int) -> bool:\n return (n == 2) or (n == 3) or not any(map(lambda k: estDivisible(n, k), intervale2(2, racine(n), 1)))\n\nfor n in range(2, 20):\n print(n, list(map(lambda k: estDivisible(n, k), intervale2(2, racine(n), 1))))\n\nfrom typing import List\n\ndef premiers(n : int) -> List[int]:\n return [p for p in intervale2(2, n, 1) if estPremier(p)]\n\npremiers(10)\n\npremiers(100)", "Listes simplement chaînée (manuellement définies)\nComme ces exercices étaient un peu foireux à écrire avec les \"listes\" de Python, qui ne sont pas des listes simplement chaînées, je propose une autre solution où l'on va définir une petite classe qui représentera une liste simplement chaînée, et on va écrire les fonctions demandées avec cette classe.\nLa classe ListeChainee\nOn va supposer que les listes que l'on représentera ne contiendront pas la valeur None, qui est utilisée pour représenter l'absence de tête et/ou de queue de la liste.", "class ListeChainee():\n def __init__(self, hd=None, tl=None):\n self.hd = hd\n self.tl = tl\n \n def __repr__(self) -> str:\n if self.tl is None:\n if self.hd is None:\n return \"[]\"\n else:\n return f\"{self.hd} :: []\"\n else:\n return f\"{self.hd} :: {self.tl}\"\n \n def jolie(self) -> str:\n if self.tl is None:\n if self.hd is None:\n return \"[]\"\n else:\n return f\"[{self.hd}]\"\n else:\n j = self.tl.jolie()\n j = j.replace(\"[\", \"\").replace(\"]\", \"\")\n if j == \"\":\n return f\"[{self.hd}]\"\n else:\n return f\"[{self.hd}, {j}]\"\n\n# équivalent à :: en OCaml\ndef insert(hd, tl: ListeChainee) -> ListeChainee:\n \"\"\" Insère hd en début de la liste chainée tl.\"\"\"\n return ListeChainee(hd=hd, tl=tl)\n\n# liste vide, puis des listes plus grandes\nvide = ListeChainee() # []\nl_1 = insert(1, vide) # 1 :: [] ~= [1]\nl_12 = insert(2, l_1) # 2 :: 1 :: [] ~= [2, 1]\nl_123 = insert(3, l_12) # 3 :: 2 :: 1 :: []\n\nprint(vide) # []\nprint(l_1) # 1 :: []\nprint(l_12) # 2 :: 1 :: []\nprint(l_123) # 3 :: 2 :: 1 :: 
[]\n\nprint(vide.jolie()) # []\nprint(l_1.jolie()) # [1]\nprint(l_12.jolie()) # [2, 1]\nprint(l_123.jolie()) # [3, 2, 1]", "Exercice 1 : taille\nPar exemple la longueur sera bien en O(n) si n=taille(liste) avec cette approche récursive :", "from typing import Optional\n\ndef taille(liste: Optional[ListeChainee]) -> int:\n if liste is None:\n return 0\n elif liste.tl is None:\n return 0 if liste.hd is None else 1\n return 1 + taille(liste.tl)\n\nprint(taille(vide)) # 0\nprint(taille(l_1)) # 1\nprint(taille(l_12)) # 2\nprint(taille(l_123)) # 3", "Exercice 2 : concat\nJe vais déjà commencer par écrire une fonction copy qui permet de copier récursivement une liste simplement chaînée, pour être sûr que l'on ne modifie pas en place une des listes données en argument.", "def copy(liste: ListeChainee) -> ListeChainee:\n if liste.tl is None:\n return ListeChainee(hd=liste.hd, tl=None)\n else:\n return ListeChainee(hd=liste.hd, tl=copy(liste.tl))", "On peut vérifier que cela marche en regardant, par exemple, l'id de deux objets si le deuxième est une copie du premier : les id seront bien différents.", "print(id(vide))\nprint(id(copy(vide)))", "Et donc pour concaténer deux chaînes, c'est facile :", "def concat(liste1: ListeChainee, liste2: ListeChainee) -> ListeChainee:\n if taille(liste1) == 0:\n return liste2\n elif taille(liste2) == 0:\n return liste1\n # nouvelle liste : comme ça changer queue.tl ne modifie PAS liste1\n resultat = copy(liste1)\n queue = resultat\n while taille(queue.tl) > 0:\n queue = queue.tl\n assert taille(queue.tl) == 0\n queue.tl = ListeChainee(hd=liste2.hd, tl=liste2.tl)\n return resultat\n\nprint(concat(vide, l_1))\nprint(vide) # pas modifiée : []\nprint(l_1) # pas modifiée : 1 :: []\n\nconcat(l_1, l_12) # 1 :: 2 :: 1 :: []\n\nconcat(l_1, l_123) # 1 :: 3 :: 2 :: 1 :: []\n\nconcat(l_1, vide) # 1 :: []\n\nconcat(l_12, vide) # 2 :: 1 :: []\n\nconcat(l_12, l_1) # 2 :: 1 :: 1 :: []\n\nconcat(l_123, l_123) # 3 :: 2 :: 1 :: 3 :: 2 :: 1 :: []", "Exercice 3 : appartient\nC'est en complexité linéaire dans le pire des cas.", "def appartient(x, liste: ListeChainee) -> bool:\n if liste.hd is None:\n return False\n else:\n if liste.hd == x:\n return True\n else:\n return appartient(x, liste.tl)\n\nassert appartient(0, vide) == False\nassert appartient(0, l_1) == False\nassert appartient(0, l_12) == False\nassert appartient(0, l_123) == False\nassert appartient(1, l_1) == True\nassert appartient(1, l_12) == True\nassert appartient(1, l_123) == True\nassert appartient(2, l_1) == False\nassert appartient(2, l_12) == True\nassert appartient(2, l_123) == True\nassert appartient(3, l_1) == False\nassert appartient(3, l_12) == False\nassert appartient(3, l_123) == True", "Exercice 4 : miroir\nCe sera en temps quadratique, à cause de toutes les recopies :", "def miroir(liste: ListeChainee) -> ListeChainee:\n if taille(liste) <= 1:\n return copy(liste)\n else:\n hd, tl = liste.hd, copy(liste.tl) # O(n)\n juste_hd = ListeChainee(hd=hd, tl=None) # O(1)\n return concat(miroir(tl), juste_hd) # O(n^2) + O(n) à cause de concat\n\nprint(miroir(vide)) # [] => []\nprint(miroir(l_1)) # [1] => [1]\nprint(miroir(l_12)) # [2, 1] => [1, 2]\nprint(miroir(l_123)) # [3, 2, 1] => [1, 2, 3]", "Exercice 5 : alterne\nLa sémantique n'était pas très claire, mais on peut imaginer quelque chose comme ça :\n\nsi une des deux listes est vide, on prend l'autre,\nsi les deux ne sont pas vide, on prend le début de l1, de l2, puis alterne(queue l1, queue l2)", "def alterne(liste1: ListeChainee, liste2: ListeChainee) -> 
ListeChainee:\n if taille(liste1) == 0:\n return copy(liste2) # on recopie pour ne rien modifier\n if taille(liste2) == 0:\n return copy(liste1) # on recopie pour ne rien modifier\n h1, t1 = liste1.hd, liste1.tl \n h2, t2 = liste2.hd, liste2.tl\n return insert(h1, insert(h2, alterne(t1, t2)))\n\nprint(alterne(l_1, l_12)) # [1, 2, 1]\nprint(alterne(l_12, l_1)) # [2, 1, 1]\nprint(alterne(l_123, l_1)) # [3, 1, 2, 1]\nprint(alterne(l_123, l_12)) # [3, 2, 2, 1, 1]\nprint(alterne(l_123, l_123)) # [3, 3, 2, 2, 1, 1]\nprint(alterne(l_12, l_123)) # [2, 3, 1, 2, 1]\nprint(alterne(l_1, l_123)) # [1, 3, 2, 1]", "La complexité est quadratique en $\\mathcal{O}((\\max(|\\text{liste 1}|, |\\text{liste 2}|)^2)$ à cause des recopies.\nExercice 6 : nb_occurrences\nCe sera en temps linéaire, dans tous les cas.", "def nb_occurrences(x, liste: ListeChainee) -> int:\n if liste is None or liste.hd is None:\n return 0\n else:\n count = 1 if x == liste.hd else 0\n if liste.tl is None:\n return count\n else:\n return count + nb_occurrences(x, liste.tl)\n\nassert nb_occurrences(1, vide) == 0\nassert nb_occurrences(1, l_1) == 1\nassert nb_occurrences(1, l_12) == 1\nassert nb_occurrences(2, l_12) == 1\nassert nb_occurrences(1, l_123) == 1\nassert nb_occurrences(2, l_123) == 1\nassert nb_occurrences(3, l_123) == 1\nassert nb_occurrences(1, concat(l_1, l_1)) == 2\nassert nb_occurrences(2, concat(l_1, l_12)) == 1\nassert nb_occurrences(3, concat(l_12, l_1)) == 0\nassert nb_occurrences(1, concat(l_12, l_12)) == 2\nassert nb_occurrences(2, concat(l_12, l_12)) == 2\nassert nb_occurrences(1, concat(l_123, concat(l_1, l_1))) == 3\nassert nb_occurrences(2, concat(l_123, concat(l_1, l_12))) == 2\nassert nb_occurrences(3, concat(l_123, concat(l_12, l_1))) == 1\nassert nb_occurrences(3, concat(l_123, concat(l_12, l_12))) == 1", "On peut facilement écrire une variante qui sera récursive terminale (\"tail recursive\") :", "def nb_occurrences(x, liste: ListeChainee, count=0) -> int:\n if liste is None or liste.hd is None:\n return count\n else:\n count += 1 if x == liste.hd else 0\n if liste.tl is None:\n return count\n else:\n return nb_occurrences(x, liste.tl, count=count)", "Exercice 7 : pairs\nC'est un filtrage par le prédicat x % 2 == 0.\nAutant écrire la fonction de filtrage générique :", "def filtrer(liste: ListeChainee, predicate) -> ListeChainee:\n if liste is None or liste.hd is None: # liste de taille 0\n return ListeChainee(hd=None, tl=None)\n elif liste.tl is None: # liste de taille 1\n if predicate(liste.hd): # on renvoie [hd]\n return ListeChainee(hd=liste.hd, tl=None)\n else: # on renvoie []\n return ListeChainee(hd=None, tl=None)\n else: # liste de taille >= 2\n if predicate(liste.hd):\n return insert(liste.hd, filtrer(liste.tl, predicate))\n else:\n return filtrer(liste.tl, predicate)", "Et donc c'est rapide :", "def pairs(liste: ListeChainee) -> ListeChainee:\n def predicate(x):\n return (x % 2) == 0\n # aussi : predicate = lambda x: (x % 2) == 0\n return filtrer(liste, predicate)\n\ndef impairs(liste: ListeChainee) -> ListeChainee:\n def predicate(x):\n return (x % 2) == 1\n return filtrer(liste, predicate)\n\nprint(pairs(vide)) # []\nprint(pairs(l_1)) # []\nprint(pairs(l_12)) # [2]\nprint(pairs(l_123)) # [2]\nprint(pairs(insert(4, insert(6, insert(8, l_123))))) # [4, 6, 8, 2]\nprint(pairs(insert(5, insert(6, insert(8, l_123))))) # [6, 8, 2]\n\nprint(impairs(vide)) # []\nprint(impairs(l_1)) # [1]\nprint(impairs(l_12)) # [1]\nprint(impairs(l_123)) # [3, 1]\nprint(impairs(insert(4, insert(6, insert(8, l_123))))) # 
[3, 1]\nprint(impairs(insert(5, insert(6, insert(8, l_123))))) # [5, 3, 1]", "Exercice 8 : range\nCe sera de complexité temporelle linéaire :", "def myrange(n: int) -> ListeChainee:\n if n <= 0:\n return ListeChainee(hd=None, tl=None)\n elif n == 1:\n return ListeChainee(hd=1, tl=None)\n # return insert(1, vide)\n else:\n return ListeChainee(hd=n, tl=myrange(n-1))\n\nprint(myrange(1)) # [1]\nprint(myrange(2)) # [1, 2]\nprint(myrange(3)) # [1, 2, 3]\nprint(myrange(4)) # [1, 2, 3, 4]", "Si on veut les avoir dans l'ordre croissant, il faudrait utiliser miroir qui est quadratique.\nAutant écrire directement une fonction intervale(a, b) qui renvoie la liste simplement chaînée contenant a :: (a+1) :: ... :: b :", "def intervale(a: int, b: Optional[int]=None) -> ListeChainee:\n if b is None:\n a, b = 1, a\n n = b - a\n if n < 0: # [a..b] = []\n return ListeChainee(hd=None, tl=None)\n elif n == 0: # [a..b] = [a]\n return ListeChainee(hd=a, tl=None)\n else: # [a..b] = a :: [a+1..b]\n return ListeChainee(hd=a, tl=intervale(a+1, b))\n\nprint(intervale(10)) # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]\nprint(intervale(1, 4)) # [1, 2, 3, 4]\nprint(intervale(13, 13)) # [13]\nprint(intervale(13, 10)) # []", "Une autre approche est d'écrire la fonction mymap et de dire que\npython\nintervale_bis(a, b) = miroir(mymap(lambda x: x + (a - 1), myrange(b - a + 1)))", "from typing import Callable\n\ndef mymap(fonction: Callable, liste: ListeChainee) -> ListeChainee:\n if liste is None or liste.hd is None: # liste de taille 0\n return ListeChainee(hd=None, tl=None)\n elif liste.tl is None: # liste de taille 1\n return ListeChainee(hd=fonction(liste.hd), tl=None)\n else: # liste de taille >= 2\n return ListeChainee(hd=fonction(liste.hd), tl=mymap(fonction, liste.tl))\n\nprint(myrange(10))\nprint(mymap(lambda x: x, myrange(10)))\n\n\ndef intervale_bis(a: int, b: int) -> ListeChainee:\n return miroir(mymap(lambda x: x + (a - 1), myrange(b - a + 1)))\n\nprint(intervale_bis(1, 10)) # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]\nprint(intervale_bis(1, 4)) # [1, 2, 3, 4]\nprint(intervale_bis(13, 13)) # [13]\nprint(intervale_bis(13, 10)) # []", "Exercice 9 : premiers\nPlusieurs possibilités. 
Un filtre d'Erathosthène marche bien, ou une filtrage.\nJe ne vais pas utiliser de tableaux donc on est un peu réduit d'utiliser une filtrage.\nOn a besoin des fonctions suivantes :\n\ncalculer la racine entière de $n$, très facile par une boucle,\ncalculer les nombres impairs entre 5 et $\\lfloor \\sqrt{n} \\rfloor$,\nfiltrer cette liste de nombres impairs pour garder ceux qui divisent $n$,\net dire que $n$ est premier s'il a un diviseur non trivial.", "def racine(n: int) -> int:\n i = 1\n for i in range(n + 1):\n if i*i > n:\n return i - 1\n return i\n\nprint(racine(1)) # 1\nprint(racine(5)) # 2\nprint(racine(102)) # 10\nprint(racine(120031)) # 346\n\ndef intervale2(a: int, b: Optional[int]=None, pas: int=1) -> ListeChainee:\n if b is None:\n a, b = 1, a\n n = b - a\n if n < 0: # [a..b::p] = []\n return ListeChainee(hd=None, tl=None)\n elif n == 0: # [a..b::p] = [a]\n return ListeChainee(hd=a, tl=None)\n else: # [a..b::p] = a :: [a+p..b::p]\n return ListeChainee(hd=a, tl=intervale2(a + pas, b=b, pas=pas))\n\nprint(intervale2(1, 10, 2)) # [1, 3, 5, 7, 9]\nprint(intervale2(1, 4, 2)) # [1, 3]\nprint(intervale2(13, 13, 2)) # [13]\nprint(intervale2(13, 10, 2)) # []", "Une version purement fonctionnelle est moins facile qu'une version impérative avec une référence booléenne.", "def estDivisible(n: int, k: int) -> bool:\n return (n % k) == 0\n\nestDivisible(10, 2)\nestDivisible(10, 3)\nestDivisible(10, 4)\nestDivisible(10, 5)", "On est prêt à écrire estPremier :", "def estPremier(n : int) -> bool:\n return taille(filtrer(intervale2(2, racine(n), 1), lambda k: estDivisible(n, k))) == 0", "En effet il suffit de construire d'abord la liste des entiers impairs de 2 à $\\lfloor \\sqrt{n} \\rfloor$, de les filtrer par ceux qui divisent $n$, et de vérifier si on a aucun diviseur (taille(..) 
== 0) auquel cas $n$ est premier, ou si $n$ a au moins un diviseur auquel cas $n$ n'est pas premier.", "for n in range(2, 20):\n print(\"Petits diviseurs de\", n, \" -> \", filtrer(intervale2(2, racine(n), 1), lambda k: estDivisible(n, k)))", "On voit dans l'exemple ci dessus les nombres premiers comme ceux n'ayant aucun diviseurs, et les nombres non premiers comme ceux ayant au moins un diviseur.", "def premiers(n : int) -> ListeChainee:\n return filtrer(intervale2(2, n, 1), estPremier)\n\npremiers(10) # [2, 3, 5, 7]\n\npremiers(100) # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97]", "Quelques tris par comparaison\nOn fera les tris en ordre croissant.", "test = [3, 1, 8, 4, 5, 6, 1, 2]", "Exercice 10 : Tri insertion", "from typing import TypeVar, List\n_a = TypeVar('alpha')\n\ndef insere(x : _a, liste : List[_a]) -> List[_a]:\n if len(liste) == 0:\n return [x]\n else:\n t, q = liste[0], liste[1:]\n if x <= t:\n return [x] + liste\n else:\n return [t] + insere(x, q)\n\ndef tri_insertion(liste : List[_a]) -> List[_a]:\n if len(liste) == 0:\n return []\n else:\n t, q = liste[0], liste[1:]\n return insere(t, tri_insertion(q))\n\ntri_insertion(test)", "Complexité en temps $\\mathcal{O}(n^2)$.\nExercice 11 : Tri insertion générique", "from typing import TypeVar, List, Callable\n_a = TypeVar('alpha')\n\ndef insere2(ordre : Callable[[_a, _a], bool], x : _a, liste : List[_a]) -> List[_a]:\n if len(liste) == 0:\n return [x]\n else:\n t, q = liste[0], liste[1:]\n if ordre(x, t):\n return [x] + liste\n else:\n return [t] + insere2(ordre, x, q)\n\ndef tri_insertion2(ordre : Callable[[_a, _a], bool], liste : List[_a]) -> List[_a]:\n if len(liste) == 0:\n return []\n else:\n t, q = liste[0], liste[1:]\n return insere2(ordre, t, tri_insertion2(ordre, q))\n\nordre_croissant = lambda x, y: x <= y\n\ntri_insertion2(ordre_croissant, test)\n\nordre_decroissant = lambda x, y: x >= y\n\ntri_insertion2(ordre_decroissant, test)", "Exercice 12 : Tri selection", "from typing import TypeVar, List, Tuple\n_a = TypeVar('alpha')\n\ndef selectionne_min(liste : List[_a]) -> Tuple[_a, List[_a]]:\n if len(liste) == 0:\n raise ValueError(\"Selectionne_min sur liste vide\")\n else:\n def cherche_min(mini : _a, autres : List[_a], reste : List[_a]) -> Tuple[_a, List[_a]]:\n if len(reste) == 0:\n return (mini, autres)\n else:\n t, q = reste[0], reste[1:]\n if t < mini:\n return cherche_min(t, [mini] + autres, q)\n else:\n return cherche_min(mini, [t] + autres, q)\n t, q = liste[0], liste[1:]\n return cherche_min(t, [], q)\n\ntest\nselectionne_min(test)", "(On voit que la liste autre a été inversée)", "def tri_selection(liste : List[_a]) -> List[_a]:\n if len(liste) == 0:\n return []\n else:\n mini, autres = selectionne_min(liste)\n return [mini] + tri_selection(autres)\n\ntri_selection(test)", "Complexité en temps : $\\mathcal{O}(n^2)$.\nExercices 13, 14, 15 : Tri fusion", "from typing import TypeVar, List, Tuple\n_a = TypeVar('alpha')\n\ndef separe(liste : List[_a]) -> Tuple[List[_a], List[_a]]:\n if len(liste) == 0:\n return ([], [])\n elif len(liste) == 1:\n return ([liste[0]], [])\n else:\n x, y, q = liste[0], liste[1], liste[2:]\n a, b = separe(q)\n return ([x] + a, [y] + b)\n\ntest\nsepare(test)\n\ndef fusion(liste1 : List[_a], liste2 : List[_a]) -> List[_a]:\n if (len(liste1), len(liste2)) == (0, 0):\n return []\n elif len(liste1) == 0:\n return liste2\n elif len(liste2) == 0:\n return liste1\n else: # les deux sont non vides\n x, a = liste1[0], liste1[1:]\n y, b = 
liste2[0], liste2[1:]\n if x <= y:\n return [x] + fusion(a, [y] + b)\n else:\n return [y] + fusion([x] + a, b)\n\n \nfusion([1, 3, 7], [2, 3, 8])\n\ndef tri_fusion(liste : List[_a]) -> List[_a]:\n if len(liste) <= 1:\n return liste\n else:\n a, b = separe(liste)\n return fusion(tri_fusion(a), tri_fusion(b))\n\ntri_fusion(test)", "Complexité en temps $\\mathcal{O}(n \\log n)$.\nComparaisons", "%timeit tri_insertion(test)\n%timeit tri_selection(test)\n%timeit tri_fusion(test)\n\nfrom sys import setrecursionlimit\nsetrecursionlimit(100000)\n# nécessaire pour tester les différentes fonctions récursives sur de grosses listes\n\nimport random\n\ndef test_random(n : int) -> List[int]:\n return [random.randint(-1000, 1000) for _ in range(n)]\n\nfor n in [10, 100, 1000]:\n print(\"\\nFor n =\", n)\n for tri in [tri_insertion, tri_selection, tri_fusion]:\n print(\" and tri = {}\".format(tri.__name__))\n %timeit tri(test_random(n))", "C'est assez pour vérifier que le tri fusion est bien plus efficace que les autres.\nOn voit aussi que les tris par insertion et sélection sont pire que linéaires,\nMais que le tri par fusion est presque linéaire (pour $n$ petits, $n \\log n$ est presque linéaire).\n\n\nListes : l'ordre supérieur\nJe ne corrige pas les questions qui étaient traitées dans le TP1.\nExercice 16 : applique", "from typing import TypeVar, List, Callable\n_a, _b = TypeVar('_a'), TypeVar('_b')\n\ndef applique(f : Callable[[_a], _b], liste : List[_a]) -> List[_b]:\n # Triche :\n return list(map(f, liste))\n # 1ère approche :\n return [f(x) for x in liste]\n # 2ème approche :\n fliste = []\n for x in liste:\n fliste.append(f(x))\n return fliste\n # 3ème approche\n n = len(liste)\n if n == 0: return []\n fliste = [liste[0] for _ in range(n)]\n for i in range(n):\n fliste[i] = f(liste[i])\n return fliste", "Exercice 17", "def premiers_carres_parfaits(n : int) -> List[int]:\n return applique(lambda x : x * x, list(range(1, n + 1)))\n\npremiers_carres_parfaits(12)", "Exercice 18 : itere", "from typing import TypeVar, List, Callable\n_a = TypeVar('_a')\n\ndef itere(f : Callable[[_a], None], liste : List[_a]) -> None:\n for x in liste:\n f(x)", "Exercice 19", "print_int = lambda i: print(\"{}\".format(i))\n\ndef affiche_liste_entiers(liste : List[int]) -> None:\n print(\"Debut\")\n itere(print_int, liste)\n print(\"Fin\")\n\naffiche_liste_entiers([1, 2, 4, 5, 12011993])", "Exercice 20 : qqsoit et ilexiste", "from typing import TypeVar, List, Callable\n_a = TypeVar('_a')\n\n# Comme all(map(f, liste))\ndef qqsoit(f : Callable[[_a], bool], liste : List[_a]) -> bool:\n for x in liste:\n if not f(x): return False # arret preliminaire\n return True\n\n# Comme any(map(f, liste))\ndef ilexiste(f : Callable[[_a], bool], liste : List[_a]) -> bool:\n for x in liste:\n if f(x): return True # arret preliminaire\n return False\n\nqqsoit(lambda x: (x % 2) == 0, [1, 2, 3, 4, 5])\nilexiste(lambda x: (x % 2) == 0, [1, 2, 3, 4, 5])\n\n%timeit qqsoit(lambda x: (x % 2) == 0, [1, 2, 3, 4, 5])\n%timeit all(map(lambda x: (x % 2) == 0, [1, 2, 3, 4, 5]))\n\n%timeit ilexiste(lambda x: (x % 2) == 0, [1, 2, 3, 4, 5])\n%timeit any(map(lambda x: (x % 2) == 0, [1, 2, 3, 4, 5]))", "Exercice 21 : appartient version 2", "def appartient_curry(x : _a) -> Callable[[List[_a]], bool]:\n return lambda liste: ilexiste(lambda y: x == y, liste)\n\ndef appartient(x : _a, liste : List[_a]) -> bool:\n return ilexiste(lambda y: x == y, liste)\n\ndef toutes_egales(x : _a, liste : List[_a]) -> bool:\n return qqsoit(lambda y: x == y, 
liste)\n\nappartient_curry(1)([1, 2, 3])\n\nappartient(1, [1, 2, 3])\nappartient(5, [1, 2, 3])\n\ntoutes_egales(1, [1, 2, 3])\ntoutes_egales(5, [1, 2, 3])", "Est-ce que notre implémentation peut être plus rapide que le test x in liste ?\nNon, mais elle est aussi rapide. C'est déjà pas mal !", "%timeit appartient(random.randint(-10, 10), [random.randint(-1000, 1000) for _ in range(1000)])\n%timeit random.randint(-10, 10) in [random.randint(-1000, 1000) for _ in range(1000)]", "Exercice 22 : filtre", "from typing import TypeVar, List, Callable\n_a = TypeVar('_a')\n\n# Comme list(filter(f, liste))\ndef filtre(f : Callable[[_a], bool], liste : List[_a]) -> List[_a]:\n # return [x for x in liste if f(x)]\n liste2 = []\n for x in liste:\n if f(x):\n liste2.append(x)\n return liste2\n\nfiltre(lambda x: (x % 2) == 0, [1, 2, 3, 4, 5])\nfiltre(lambda x: (x % 2) != 0, [1, 2, 3, 4, 5])", "Exercice 23\nJe vous laisse trouver pour premiers.", "pairs = lambda liste: filtre(lambda x: (x % 2) == 0, liste)\nimpairs = lambda liste: filtre(lambda x: (x % 2) != 0, liste)\n\npairs(list(range(10)))\nimpairs(list(range(10)))", "Exercice 24 : reduit", "from typing import TypeVar, List, Callable\n_a = TypeVar('_a')\n\n# Comme list(filter(f, liste))\ndef reduit_rec(f : Callable[[_a, _b], _a], acc : _a, liste : List[_b]) -> _a:\n if len(liste) == 0:\n return acc\n else:\n h, q = liste[0], liste[1:]\n return reduit(f, f(acc, h), q)\n\n# Version non récursive, bien plus efficace\ndef reduit(f : Callable[[_a, _b], _a], acc : _a, liste : List[_b]) -> _a:\n acc_value = acc\n for x in liste:\n acc_value = f(acc_value, x)\n return acc_value", "Très pratique pour calculer des sommes, notamment.\nExercice 25 : somme, produit", "from operator import add\nsomme_rec = lambda liste: reduit_rec(add, 0, liste)\nsomme = lambda liste: reduit(add, 0, liste)\n\nsomme_rec(list(range(10)))\nsomme(list(range(10)))\nsum(list(range(10)))\n\n%timeit somme_rec(list(range(10)))\n%timeit somme(list(range(10)))\n%timeit sum(list(range(10)))", "Pour de petites listes, la version récursive est aussi efficace que la version impérative. Chouette !", "%timeit somme_rec(list(range(1000)))\n%timeit somme(list(range(1000)))\n%timeit sum(list(range(1000)))\n\nfrom operator import mul\nproduit = lambda liste: reduit(mul, 1, liste)\n\nproduit(list(range(1, 6))) # 5! = 120", "Bonus :", "def factorielle(n : int) -> int:\n return produit(range(1, n + 1))\n\nfor n in range(1, 15):\n print(\"{:>7}! 
= {:>13}\".format(n, factorielle(n)))", "Exercice 26 : miroir version 2", "miroir = lambda liste: reduit(lambda l, x : [x] + l, [], liste)\n\nmiroir([2, 3, 5, 7, 11])", "Attention en Python, les listes ne sont PAS simplement chainées, donc lambda l, x : [x] + l est en temps linéaire en $|l| = n$, pas en $\\mathcal{O}(1)$ comme en Caml/OCaml pour fun l x -&gt; x :: l.\n\nArbres\n/!\\ Les deux dernières parties sont bien plus difficiles en Python qu'en Caml.\nExercice 27", "from typing import Dict, Optional, Tuple\n\n# Impossible de définir un type récursivement, pas comme en Caml\narbre_bin = Dict[str, Optional[Tuple[Dict, Dict]]]\n\nfrom pprint import pprint\n\narbre_test = {'Noeud': (\n {'Noeud': (\n {'Noeud': (\n {'Feuille': None},\n {'Feuille': None}\n )},\n {'Feuille': None}\n )},\n {'Feuille': None}\n )}\n\npprint(arbre_test)", "Avec une syntaxe améliorée, on se rapproche de très près de la syntaxe de Caml/OCaml :", "Feuille = {'Feuille': None}\nNoeud = lambda x, y : {'Noeud': (x, y)}\n\narbre_test = Noeud(Noeud(Noeud(Feuille, Feuille), Feuille), Feuille)\n\npprint(arbre_test)", "Exercice 28\nCompte le nombre de feuilles et de sommets.", "def taille(a : arbre_bin) -> int:\n # Pattern matching ~= if, elif,.. sur les clés de la profondeur 1\n # du dictionnaire (une seule clé)\n if 'Feuille' in a:\n return 1\n elif 'Noeud' in a:\n x, y = a['Noeud']\n return 1 + taille(x) + taille(y)\n\ntaille(arbre_test) # 7", "Exercice 29", "def hauteur(a : arbre_bin) -> int:\n if 'Feuille' in a:\n return 0\n elif 'Noeud' in a:\n x, y = a['Noeud']\n return 1 + max(hauteur(x), hauteur(y))\n\nhauteur(arbre_test) # 3", "Exercice 30\nBonus. (Écrivez une fonction testant si un arbre étiqueté par des entiers est tournoi.)\n\nParcours d'arbres binaires\nAprès quelques exercices manipulant cette structure de dictionnaire, écrire la suite n'est pas trop difficile.\nExercice 31", "from typing import TypeVar, Union, List\n\nF = TypeVar('F')\nN = TypeVar('N')\n\nelement_parcours = Union[F, N]\nparcours = List[element_parcours]", "Exercice 32 : Parcours naifs (complexité quadratique)", "def parcours_prefixe(a : arbre_bin) -> parcours:\n if 'Feuille' in a:\n return [F]\n elif 'Noeud' in a:\n g, d = a['Noeud']\n return [N] + parcours_prefixe(g) + parcours_prefixe(d)\n\nparcours_prefixe(arbre_test)\n\ndef parcours_postfixe(a : arbre_bin) -> parcours:\n if 'Feuille' in a:\n return [F]\n elif 'Noeud' in a:\n g, d = a['Noeud']\n return parcours_postfixe(g) + parcours_postfixe(d) + [N]\n\nparcours_postfixe(arbre_test)\n\ndef parcours_infixe(a : arbre_bin) -> parcours:\n if 'Feuille' in a:\n return [F]\n elif 'Noeud' in a:\n g, d = a['Noeud']\n return parcours_infixe(g) + [N] + parcours_infixe(d)\n\nparcours_infixe(arbre_test)", "Pourquoi ont-ils une complexité quadratique ? 
La concaténation (@ en OCaml, + en Python) ne se fait pas en temps constant mais linéaire dans la taille de la plus longue liste.\nExercice 33 : Parcours linéaires\nOn ajoute une fonction auxiliaire et un argument vus qui est une liste qui stocke les élements observés dans l'ordre du parcours", "def parcours_prefixe2(a : arbre_bin) -> parcours:\n def parcours(vus, b):\n if 'Feuille' in b:\n vus.insert(0, F)\n return vus\n elif 'Noeud' in b:\n vus.insert(0, N)\n g, d = b['Noeud']\n return parcours(parcours(vus, g), d)\n p = parcours([], a)\n return p[::-1]\n\nparcours_prefixe2(arbre_test)\n\ndef parcours_postfixe2(a : arbre_bin) -> parcours:\n def parcours(vus, b):\n if 'Feuille' in b:\n vus.insert(0, F)\n return vus\n elif 'Noeud' in b:\n g, d = b['Noeud']\n p = parcours(parcours(vus, g), d)\n p.insert(0, N)\n return p\n p = parcours([], a)\n return p[::-1]\n\nparcours_postfixe2(arbre_test)\n\ndef parcours_infixe2(a : arbre_bin) -> parcours:\n def parcours(vus, b):\n if 'Feuille' in b:\n vus.insert(0, F)\n return vus\n elif 'Noeud' in b:\n g, d = b['Noeud']\n p = parcours(vus, g)\n p.insert(0, N)\n return parcours(p, d)\n p = parcours([], a)\n return p[::-1]\n\nparcours_infixe2(arbre_test)", "Exercice 34 : parcours en largeur et en profondeur\nPour utiliser une file de priorité (priority queue), on utilise le module collections.deque.", "from collections import deque\n\ndef parcours_largeur(a : arbre_bin) -> parcours:\n file = deque()\n # fonction avec effet de bord sur la file\n def vasy() -> parcours:\n if len(file) == 0:\n return []\n else:\n b = file.pop()\n if 'Feuille' in b:\n # return [F] + vasy()\n v = vasy()\n v.insert(0, F)\n return v\n elif 'Noeud' in b:\n g, d = b['Noeud']\n file.insert(0, g)\n file.insert(0, d)\n # return [N] + vasy()\n v = vasy()\n v.insert(0, N)\n return v\n file.insert(0, a)\n return vasy()\n\nparcours_largeur(arbre_test)", "En remplaçant la file par une pile (une simple list), on obtient le parcours en profondeur, avec la même complexité.", "def parcours_profondeur(a : arbre_bin) -> parcours:\n pile = []\n # fonction avec effet de bord sur la file\n def vasy() -> parcours:\n if len(pile) == 0:\n return []\n else:\n b = pile.pop()\n if 'Feuille' in b:\n # return [F] + vasy()\n v = vasy()\n v.append(F)\n return v\n elif 'Noeud' in b:\n g, d = b['Noeud']\n pile.append(g)\n pile.append(d)\n # return [N] + vasy()\n v = vasy()\n v.insert(0, N)\n return v\n pile.append(a)\n return vasy()\n\nparcours_profondeur(arbre_test)", "Exercice 35 et fin\nReconstruction depuis le parcours prefixe", "test_prefixe = parcours_prefixe2(arbre_test)\ntest_prefixe", "L'idée de cette solution est la suivante :\nj'aimerais une fonction récursive qui fasse le travail;\nle problème c'est que si on prend un parcours prefixe, soit il commence\npar F et l'arbre doit être une feuille; soit il est de la forme N::q\noù q n'est plus un parcours prefixe mais la concaténation de DEUX parcours\nprefixe, on ne peut donc plus appeler la fonction sur q.\nOn va donc écrire une fonction qui prend une liste qui contient plusieurs\nparcours concaténé et qui renvoie l'arbre correspondant au premier parcours\net ce qui n'a pas été utilisé :", "from typing import Tuple\n\ndef reconstruit_prefixe(par : parcours) -> arbre_bin:\n def reconstruit(p : parcours) -> Tuple[arbre_bin, parcours]:\n if len(p) == 0:\n raise ValueError(\"parcours invalide pour reconstruit_prefixe\")\n elif p[0] == F:\n return (Feuille, p[1:])\n elif p[0] == N:\n g, q = reconstruit(p[1:])\n d, r = reconstruit(q)\n return (Noeud(g, d), 
r)\n # call it\n a, p = reconstruit(par)\n if len(p) == 0:\n return a\n else:\n raise ValueError(\"parcours invalide pour reconstruit_prefixe\")\n\nreconstruit_prefixe([F])\n\nreconstruit_prefixe(test_prefixe)", "Et cet exemple va échouer :", "reconstruit_prefixe([N, F, F] + test_prefixe) # échoue", "Reconstruction depuis le parcours en largeur\nCe n'est pas évident quand on ne connait pas. L'idée est de se servir d'une file\npour stocker les arbres qu'on reconstruit peu à peu depuis les feuilles. La file\npermet de récupérer les bons sous-arbres quand on rencontre un noeud", "largeur_test = parcours_largeur(arbre_test)\nlargeur_test\n\nfrom collections import deque\n\ndef reconstruit_largeur(par : parcours) -> arbre_bin:\n file = deque()\n # Fonction avec effets de bord\n def lire_element(e : element_parcours) -> None:\n if e == F:\n file.append(Feuille)\n elif e == N:\n d = file.popleft()\n g = file.popleft() # attention à l'ordre !\n file.append(Noeud(g, d))\n # Applique cette fonction à chaque élement du parcours\n for e in reversed(par):\n lire_element(e)\n if len(file) == 1:\n return file.popleft()\n else:\n raise ValueError(\"parcours invalide pour reconstruit_largeur\")\n\nlargeur_test\nreconstruit_largeur(largeur_test)\narbre_test", "Le même algorithme (enfin presque, modulo interversion de g et d)\navec une pile donne une autre version de la reconstruction du parcours prefixe.", "from collections import deque\n\ndef reconstruit_prefixe2(par : parcours) -> arbre_bin:\n pile = deque()\n # Fonction avec effets de bord\n def lire_element(e : element_parcours) -> None:\n if e == F:\n pile.append(Feuille)\n elif e == N:\n g = pile.pop()\n d = pile.pop() # attention à l'ordre !\n pile.append(Noeud(g, d))\n # Applique cette fonction à chaque élement du parcours\n for e in reversed(par):\n lire_element(e)\n if len(pile) == 1:\n return pile.pop()\n else:\n raise ValueError(\"parcours invalide pour reconstruit_prefixe2\")\n\nprefixe_test = parcours_prefixe2(arbre_test)\nprefixe_test\n\nreconstruit_prefixe2(prefixe_test)\narbre_test", "Conclusion\nFin. À la séance prochaine." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Intel-Corporation/tensorflow
tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb
apache-2.0
[ "Copyright 2019 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Text classification with TensorFlow Lite Model Maker\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/lite/tutorials/model_maker_text_classification\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nThe TensorFlow Lite Model Maker library simplifies the process of adapting and converting a TensorFlow model to particular input data when deploying this model for on-device ML applications.\nThis notebook shows an end-to-end example that utilizes the Model Maker library to illustrate the adaptation and conversion of a commonly-used text classification model to classify movie reviews on a mobile device. The text classification model classifies text into predefined categories. The inputs should be preprocessed text and the outputs are the probabilities of the categories. The dataset used in this tutorial are positive and negative movie reviews.\nPrerequisites\nInstall the required packages\nTo run this example, install the required packages, including the Model Maker package from the GitHub repo.", "!pip install -q tflite-model-maker", "Import the required packages.", "import numpy as np\nimport os\n\nfrom tflite_model_maker import model_spec\nfrom tflite_model_maker import text_classifier\nfrom tflite_model_maker.config import ExportFormat\nfrom tflite_model_maker.text_classifier import AverageWordVecSpec\nfrom tflite_model_maker.text_classifier import DataLoader\n\nimport tensorflow as tf\nassert tf.__version__.startswith('2')\ntf.get_logger().setLevel('ERROR')", "Download the sample training data.\nIn this tutorial, we will use the SST-2 (Stanford Sentiment Treebank) which is one of the tasks in the GLUE benchmark. It contains 67,349 movie reviews for training and 872 movie reviews for testing. 
The dataset has two classes: positive and negative movie reviews.", "data_dir = tf.keras.utils.get_file(\n fname='SST-2.zip',\n origin='https://dl.fbaipublicfiles.com/glue/data/SST-2.zip',\n extract=True)\ndata_dir = os.path.join(os.path.dirname(data_dir), 'SST-2')", "The SST-2 dataset is stored in TSV format. The only difference between TSV and CSV is that TSV uses a tab \\t character as its delimiter instead of a comma , in the CSV format.\nHere are the first 5 lines of the training dataset. label=0 means negative, label=1 means positive.\n| sentence | label | | | |\n|-------------------------------------------------------------------------------------------|-------|---|---|---|\n| hide new secretions from the parental units | 0 | | | |\n| contains no wit , only labored gags | 0 | | | |\n| that loves its characters and communicates something rather beautiful about human nature | 1 | | | |\n| remains utterly satisfied to remain the same throughout | 0 | | | |\n| on the worst revenge-of-the-nerds clichés the filmmakers could dredge up | 0 | | | |\nNext, we will load the dataset into a Pandas dataframe and change the current label names (0 and 1) to a more human-readable ones (negative and positive) and use them for model training.", "import pandas as pd\n\ndef replace_label(original_file, new_file):\n # Load the original file to pandas. We need to specify the separator as\n # '\\t' as the training data is stored in TSV format\n df = pd.read_csv(original_file, sep='\\t')\n\n # Define how we want to change the label name\n label_map = {0: 'negative', 1: 'positive'}\n\n # Excute the label change\n df.replace({'label': label_map}, inplace=True)\n\n # Write the updated dataset to a new file\n df.to_csv(new_file)\n\n# Replace the label name for both the training and test dataset. Then write the\n# updated CSV dataset to the current folder.\nreplace_label(os.path.join(os.path.join(data_dir, 'train.tsv')), 'train.csv')\nreplace_label(os.path.join(os.path.join(data_dir, 'dev.tsv')), 'dev.csv')", "Quickstart\nThere are five steps to train a text classification model:\nStep 1. Choose a text classification model architecture.\nHere we use the average word embedding model architecture, which will produce a small and fast model with decent accuracy.", "spec = model_spec.get('average_word_vec')", "Model Maker also supports other model architectures such as BERT. If you are interested to learn about other architecture, see the Choose a model architecture for Text Classifier section below.\nStep 2. Load the training and test data, then preprocess them according to a specific model_spec.\nModel Maker can take input data in the CSV format. We will load the training and test dataset with the human-readable label name that were created earlier.\nEach model architecture requires input data to be processed in a particular way. DataLoader reads the requirement from model_spec and automatically executes the necessary preprocessing.", "train_data = DataLoader.from_csv(\n filename='train.csv',\n text_column='sentence',\n label_column='label',\n model_spec=spec,\n is_training=True)\ntest_data = DataLoader.from_csv(\n filename='dev.csv',\n text_column='sentence',\n label_column='label',\n model_spec=spec,\n is_training=False)", "Step 3. Train the TensorFlow model with the training data.\nThe average word embedding model use batch_size = 32 by default. Therefore you will see that it takes 2104 steps to go through the 67,349 sentences in the training dataset. 
We will train the model for 10 epochs, which means going through the training dataset 10 times.", "model = text_classifier.create(train_data, model_spec=spec, epochs=10)", "Step 4. Evaluate the model with the test data.\nAfter training the text classification model using the sentences in the training dataset, we will use the remaining 872 sentences in the test dataset to evaluate how the model performs against new data it has never seen before.\nAs the default batch size is 32, it will take 28 steps to go through the 872 sentences in the test dataset.", "loss, acc = model.evaluate(test_data)", "Step 5. Export as a TensorFlow Lite model.\nLet's export the text classification that we have trained in the TensorFlow Lite format. We will specify which folder to export the model.\nBy default, the float TFLite model is exported for the average word embedding model architecture.", "model.export(export_dir='average_word_vec')", "You can download the TensorFlow Lite model file using the left sidebar of Colab. Go into the average_word_vec folder as we specified in export_dir parameter above, right-click on the model.tflite file and choose Download to download it to your local computer.\nThis model can be integrated into an Android or an iOS app using the NLClassifier API of the TensorFlow Lite Task Library.\nSee the TFLite Text Classification sample app for more details on how the model is used in a working app.\nNote 1: Android Studio Model Binding does not support text classification yet so please use the TensorFlow Lite Task Library.\nNote 2: There is a model.json file in the same folder with the TFLite model. It contains the JSON representation of the metadata bundled inside the TensorFlow Lite model. Model metadata helps the TFLite Task Library know what the model does and how to pre-process/post-process data for the model. You don't need to download the model.json file as it is only for informational purpose and its content is already inside the TFLite file.\nNote 3: If you train a text classification model using MobileBERT or BERT-Base architecture, you will need to use BertNLClassifier API instead to integrate the trained model into a mobile app.\nThe following sections walk through the example step by step to show more details.\nChoose a model architecture for Text Classifier\nEach model_spec object represents a specific model for the text classifier. TensorFlow Lite Model Maker currently supports MobileBERT, averaging word embeddings and BERT-Base models.\n| Supported Model | Name of model_spec | Model Description | Model size |\n|--------------------------|-------------------------|-----------------------------------------------------------------------------------------------------------------------|---------------------------------------------|\n| Averaging Word Embedding | 'average_word_vec' | Averaging text word embeddings with RELU activation. | <1MB |\n| MobileBERT | 'mobilebert_classifier' | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results, suitable for on-device applications. | 25MB w/ quantization <br/> 100MB w/o quantization |\n| BERT-Base | 'bert_classifier' | Standard BERT model that is widely used in NLP tasks. | 300MB |\nIn the quick start, we have used the average word embedding model. Let's switch to MobileBERT to train a model with higher accuracy.", "mb_spec = model_spec.get('mobilebert_classifier')", "Load training data\nYou can upload your own dataset to work through this tutorial. 
Upload your dataset by using the left sidebar in Colab.\n<img src=\"https://storage.googleapis.com/download.tensorflow.org/models/tflite/screenshots/model_maker_text_classification.png\" alt=\"Upload File\" width=\"800\" hspace=\"100\">\nIf you prefer not to upload your dataset to the cloud, you can also locally run the library by following the guide.\nTo keep it simple, we will reuse the SST-2 dataset downloaded earlier. Let's use the DataLoader.from_csv method to load the data.\nPlease be noted that as we have changed the model architecture, we will need to reload the training and test dataset to apply the new preprocessing logic.", "train_data = DataLoader.from_csv(\n filename='train.csv',\n text_column='sentence',\n label_column='label',\n model_spec=mb_spec,\n is_training=True)\ntest_data = DataLoader.from_csv(\n filename='dev.csv',\n text_column='sentence',\n label_column='label',\n model_spec=mb_spec,\n is_training=False)", "The Model Maker library also supports the from_folder() method to load data. It assumes that the text data of the same class are in the same subdirectory and that the subfolder name is the class name. Each text file contains one movie review sample. The class_labels parameter is used to specify which the subfolders.\nTrain a TensorFlow Model\nTrain a text classification model using the training data.\nNote: As MobileBERT is a complex model, each training epoch will takes about 10 minutes on a Colab GPU. Please make sure that you are using a GPU runtime.", "model = text_classifier.create(train_data, model_spec=mb_spec, epochs=3)", "Examine the detailed model structure.", "model.summary()", "Evaluate the model\nEvaluate the model that we have just trained using the test data and measure the loss and accuracy value.", "loss, acc = model.evaluate(test_data)", "Export as a TensorFlow Lite model\nConvert the trained model to TensorFlow Lite model format with metadata so that you can later use in an on-device ML application. The label file and the vocab file are embedded in metadata. The default TFLite filename is model.tflite.\nIn many on-device ML application, the model size is an important factor. Therefore, it is recommended that you apply quantize the model to make it smaller and potentially run faster.\nThe default post-training quantization technique is dynamic range quantization for the BERT and MobileBERT models.", "model.export(export_dir='mobilebert/')", "The TensorFlow Lite model file can be integrated in a mobile app using the BertNLClassifier API in TensorFlow Lite Task Library. Please note that this is different from the NLClassifier API used to integrate the text classification trained with the average word vector model architecture.\nThe export formats can be one or a list of the following:\n\nExportFormat.TFLITE\nExportFormat.LABEL\nExportFormat.VOCAB\nExportFormat.SAVED_MODEL\n\nBy default, it exports only the TensorFlow Lite model file containing the model metadata. You can also choose to export other files related to the model for better examination. For instance, exporting only the label file and vocab file as follows:", "model.export(export_dir='mobilebert/', export_format=[ExportFormat.LABEL, ExportFormat.VOCAB])", "You can evaluate the TFLite model with evaluate_tflite method to measure its accuracy. 
Converting the trained TensorFlow model to TFLite format and apply quantization can affect its accuracy so it is recommended to evaluate the TFLite model accuracy before deployment.", "accuracy = model.evaluate_tflite('mobilebert/model.tflite', test_data)\nprint('TFLite model accuracy: ', accuracy)", "Advanced Usage\nThe create function is the driver function that the Model Maker library uses to create models. The model_spec parameter defines the model specification. The AverageWordVecSpec and BertClassifierSpec classes are currently supported. The create function comprises of the following steps:\n\nCreates the model for the text classifier according to model_spec.\nTrains the classifier model. The default epochs and the default batch size are set by the default_training_epochs and default_batch_size variables in the model_spec object.\n\nThis section covers advanced usage topics like adjusting the model and the training hyperparameters.\nCustomize the MobileBERT model hyperparameters\nThe model parameters you can adjust are:\n\nseq_len: Length of the sequence to feed into the model.\ninitializer_range: The standard deviation of the truncated_normal_initializer for initializing all weight matrices.\ntrainable: Boolean that specifies whether the pre-trained layer is trainable.\n\nThe training pipeline parameters you can adjust are:\n\nmodel_dir: The location of the model checkpoint files. If not set, a temporary directory will be used.\ndropout_rate: The dropout rate.\nlearning_rate: The initial learning rate for the Adam optimizer.\ntpu: TPU address to connect to.\n\nFor instance, you can set the seq_len=256 (default is 128). This allows the model to classify longer text.", "new_model_spec = model_spec.get('mobilebert_classifier')\nnew_model_spec.seq_len = 256", "Customize the average word embedding model hyperparameters\nYou can adjust the model infrastructure like the wordvec_dim and the seq_len variables in the AverageWordVecSpec class.\nFor example, you can train the model with a larger value of wordvec_dim. Note that you must construct a new model_spec if you modify the model.", "new_model_spec = AverageWordVecSpec(wordvec_dim=32)", "Get the preprocessed data.", "new_train_data = DataLoader.from_csv(\n filename='train.csv',\n text_column='sentence',\n label_column='label',\n model_spec=new_model_spec,\n is_training=True)", "Train the new model.", "model = text_classifier.create(new_train_data, model_spec=new_model_spec)", "Tune the training hyperparameters\nYou can also tune the training hyperparameters like epochs and batch_size that affect the model accuracy. For instance,\n\nepochs: more epochs could achieve better accuracy, but may lead to overfitting.\nbatch_size: the number of samples to use in one training step.\n\nFor example, you can train with more epochs.", "model = text_classifier.create(new_train_data, model_spec=new_model_spec, epochs=20)", "Evaluate the newly retrained model with 20 training epochs.", "new_test_data = DataLoader.from_csv(\n filename='dev.csv',\n text_column='sentence',\n label_column='label',\n model_spec=new_model_spec,\n is_training=False)\n\nloss, accuracy = model.evaluate(new_test_data)", "Change the Model Architecture\nYou can change the model by changing the model_spec. 
The following shows how to change to the BERT-Base model.\nChange the model_spec to the BERT-Base model for the text classifier.", "spec = model_spec.get('bert_classifier')", "The remaining steps are the same.\nCustomize Post-training quantization on the TensorFlow Lite model\nPost-training quantization is a conversion technique that can reduce model size and inference latency, while also improving CPU and hardware accelerator inference speed, with a little degradation in model accuracy. Thus, it's widely used to optimize the model.\nThe Model Maker library applies a default post-training quantization technique when exporting the model. If you want to customize post-training quantization, Model Maker supports multiple post-training quantization options using QuantizationConfig as well. Let's take float16 quantization as an example. First, define the quantization config.\npython\nconfig = QuantizationConfig.for_float16()\nThen we export the TensorFlow Lite model with this configuration.\npython\nmodel.export(export_dir='.', tflite_filename='model_fp16.tflite', quantization_config=config)\nRead more\nYou can read our text classification example to learn technical details. For more information, please refer to:\n\nTensorFlow Lite Model Maker guide and API reference.\nTask Library: NLClassifier and BertNLClassifier for deployment.\nThe end-to-end reference apps: Android and iOS." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
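Supplementary sketch for the Model Maker text-classification notebook above: its five steps (pick a spec, load the CSVs, train, evaluate, export) are spread over many cells, so the block below condenses them into one runnable outline and adds the optional float16 post-training quantization mentioned at the end. Treat it as a sketch under assumptions: the import path for QuantizationConfig and the reuse of the train.csv/dev.csv files produced earlier are taken from context, not verified here.

```python
# Hedged, condensed sketch of the pipeline described in the notebook above.
# Assumption: QuantizationConfig is importable from tflite_model_maker.config.
from tflite_model_maker import model_spec, text_classifier
from tflite_model_maker.config import QuantizationConfig
from tflite_model_maker.text_classifier import DataLoader

spec = model_spec.get('average_word_vec')  # or 'mobilebert_classifier' / 'bert_classifier'

train_data = DataLoader.from_csv(filename='train.csv', text_column='sentence',
                                 label_column='label', model_spec=spec, is_training=True)
test_data = DataLoader.from_csv(filename='dev.csv', text_column='sentence',
                                label_column='label', model_spec=spec, is_training=False)

model = text_classifier.create(train_data, model_spec=spec, epochs=10)
loss, acc = model.evaluate(test_data)

# Optional: export with float16 post-training quantization instead of the default.
config = QuantizationConfig.for_float16()
model.export(export_dir='average_word_vec/', tflite_filename='model_fp16.tflite',
             quantization_config=config)
```

Swapping spec for 'mobilebert_classifier' or 'bert_classifier' only changes the first line, but the DataLoader calls must then be re-run so the preprocessing matches the new spec, as the notebook points out.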
sxjscience/mxnet
example/recommenders/demo2-dssm.ipynb
apache-2.0
[ "Content-based recommender using Deep Structured Semantic Model\nAn example of how to build a Deep Structured Semantic Model (DSSM) for incorporating complex content-based features into a recommender system. See Learning Deep Structured Semantic Models for Web Search using Clickthrough Data. This example does not attempt to provide a datasource or train a model, but merely show how to structure a complex DSSM network.", "import warnings\n\nimport mxnet as mx\nfrom mxnet import gluon, nd, autograd, sym\nimport numpy as np\nfrom sklearn.random_projection import johnson_lindenstrauss_min_dim\n\n\n# Define some constants\nmax_user = int(1e5)\ntitle_vocab_size = int(3e4)\nquery_vocab_size = int(3e4)\nnum_samples = int(1e4)\nhidden_units = 128\nepsilon_proj = 0.25\n\nctx = mx.gpu() if mx.context.num_gpus() > 0 else mx.cpu()", "Bag of words random projection\nA previous version of this example contained a bag of word random projection example, it is kept here for reference but not used in the next example.\nRandom Projection is a dimension reduction technique that guarantees the disruption of the pair-wise distance between your original data point within a certain bound.\nWhat is even more interesting is that the dimension to project onto to guarantee that bound does not depend on the original number of dimension but solely on the total number of datapoints.\nYou can see more explanation in this blog post", "proj_dim = johnson_lindenstrauss_min_dim(num_samples, epsilon_proj)\nprint(\"To keep a distance disruption ~< {}% of our {} samples we need to randomly project to at least {} dimensions\".format(epsilon_proj*100, num_samples, proj_dim))\n\nclass BagOfWordsRandomProjection(gluon.HybridBlock):\n def __init__(self, vocab_size, output_dim, random_seed=54321, pad_index=0):\n \"\"\"\n :param int vocab_size: number of element in the vocabulary\n :param int output_dim: projection dimension\n :param int ramdon_seed: seed to use to guarantee same projection\n :param int pad_index: index of the vocabulary used for padding sentences\n \"\"\"\n super(BagOfWordsRandomProjection, self).__init__()\n self._vocab_size = vocab_size\n self._output_dim = output_dim\n proj = self._random_unit_vecs(vocab_size=vocab_size, output_dim=output_dim, random_seed=random_seed)\n # we set the projection of the padding word to 0\n proj[pad_index, :] = 0\n self.proj = self.params.get_constant('proj', value=proj)\n\n def _random_unit_vecs(self, vocab_size, output_dim, random_seed):\n rs = np.random.RandomState(seed=random_seed)\n W = rs.normal(size=(vocab_size, output_dim))\n Wlen = np.linalg.norm(W, axis=1)\n W_unit = W / Wlen[:,None]\n return W_unit\n\n def hybrid_forward(self, F, x, proj):\n \"\"\"\n :param nd or sym F:\n :param nd.NDArray x: index of tokens\n returns the sum of the projected embeddings of each token\n \"\"\"\n embedded = F.Embedding(x, proj, input_dim=self._vocab_size, output_dim=self._output_dim)\n return embedded.sum(axis=1)\n\nbowrp = BagOfWordsRandomProjection(1000, 20)\nbowrp.initialize()\n\nbowrp(mx.nd.array([[10, 50, 100], [5, 10, 0]]))", "With padding:", "bowrp(mx.nd.array([[10, 50, 100, 0], [5, 10, 0, 0]]))", "Content-based recommender / ranking system using DSSM\nFor example in the search result ranking problem:\nYou have users, that have performed text-based searches. 
They were presented with results, and selected one of them.\nResults are composed of a title and an image.\nYour positive examples will be the clicked items in the search results, and the negative examples are sampled from the non-clicked examples.\nThe network will jointly learn embeddings for users and query text making up the \"Query\", title and image making the \"Item\" and learn how similar they are.\nAfter training, you can index the embeddings for your items and do a knn search with your query embeddings using the cosine similarity to return ranked items", "proj_dim = 128\n\nclass DSSMRecommenderNetwork(gluon.HybridBlock):\n def __init__(self, query_vocab_size, proj_dim, max_user, title_vocab_size, hidden_units, random_seed=54321, p=0.5):\n super(DSSMRecommenderNetwork, self).__init__()\n with self.name_scope():\n \n # User/Query pipeline\n self.user_embedding = gluon.nn.Embedding(max_user, proj_dim)\n self.user_mlp = gluon.nn.Dense(hidden_units, activation=\"relu\")\n \n # Instead of bag of words, we use learned embeddings + stacked biLSTM average\n self.query_text_embedding = gluon.nn.Embedding(query_vocab_size, proj_dim)\n self.query_lstm = gluon.rnn.LSTM(hidden_units, 2, bidirectional=True)\n self.query_text_mlp = gluon.nn.Dense(hidden_units, activation=\"relu\") \n \n self.query_dropout = gluon.nn.Dropout(p)\n self.query_mlp = gluon.nn.Dense(hidden_units, activation=\"relu\")\n\n # Item pipeline\n # Instead of bag of words, we use learned embeddings + stacked biLSTM average\n self.title_embedding = gluon.nn.Embedding(title_vocab_size, proj_dim)\n self.title_lstm = gluon.rnn.LSTM(hidden_units, 2, bidirectional=True)\n self.title_mlp = gluon.nn.Dense(hidden_units, activation=\"relu\")\n \n # You could use vgg here for example\n self.image_embedding = gluon.model_zoo.vision.resnet18_v2(pretrained=False).features \n self.image_mlp = gluon.nn.Dense(hidden_units, activation=\"relu\")\n \n self.item_dropout = gluon.nn.Dropout(p)\n self.item_mlp = gluon.nn.Dense(hidden_units, activation=\"relu\")\n \n def hybrid_forward(self, F, user, query_text, title, image):\n # Query\n user = self.user_embedding(user)\n user = self.user_mlp(user)\n\n query_text = self.query_text_embedding(query_text)\n query_text = self.query_lstm(query_text.transpose((1,0,2)))\n # average the states\n query_text = query_text.mean(axis=0)\n query_text = self.query_text_mlp(query_text)\n \n query = F.concat(user, query_text)\n query = self.query_dropout(query)\n query = self.query_mlp(query)\n \n # Item\n title_text = self.title_embedding(title)\n title_text = self.title_lstm(title_text.transpose((1,0,2)))\n # average the states\n title_text = title_text.mean(axis=0)\n title_text = self.title_mlp(title_text)\n \n image = self.image_embedding(image)\n image = self.image_mlp(image)\n \n item = F.concat(title_text, image)\n item = self.item_dropout(item)\n item = self.item_mlp(item)\n \n # Cosine Similarity\n query = query.expand_dims(axis=2)\n item = item.expand_dims(axis=2)\n sim = F.batch_dot(query, item, transpose_a=True) / (query.norm(axis=1) * item.norm(axis=1) + 1e-9).expand_dims(axis=2)\n \n return sim.squeeze(axis=2)\n\nnetwork = DSSMRecommenderNetwork(\n query_vocab_size,\n proj_dim,\n max_user,\n title_vocab_size,\n hidden_units\n)\n\n\nnetwork.initialize(mx.init.Xavier(), ctx)\n\n# Load pre-trained vgg16 weights\nwith network.name_scope():\n network.image_embedding = gluon.model_zoo.vision.resnet18_v2(pretrained=True, ctx=ctx).features", "It is quite hard to visualize the network since it is relatively 
complex but you can see the two-pronged structure, and the resnet18 branch", "mx.viz.plot_network(network(\n mx.sym.var('user'), mx.sym.var('query_text'), mx.sym.var('title'), mx.sym.var('image')),\n shape={'user': (1,1), 'query_text': (1,30), 'title': (1,30), 'image': (1,3,224,224)},\n node_attrs={\"fixedsize\":\"False\"})", "We can print the summary of the network using dummy data. We can see it is already training on 32M parameters!", "user = mx.nd.array([[200], [100]], ctx)\nquery = mx.nd.array([[10, 20, 0, 0, 0], [40, 50, 0, 0, 0]], ctx) # Example of an encoded text\ntitle = mx.nd.array([[10, 20, 0, 0, 0], [40, 50, 0, 0, 0]], ctx) # Example of an encoded text\nimage = mx.nd.random.uniform(shape=(2,3, 224,224), ctx=ctx) # Example of an encoded image\n\n\nnetwork.summary(user, query, title, image)\n\nnetwork(user, query, title, image)", "The output is the similarity, if we wanted to train it on real data, we would need to minimize the Cosine loss, 1 - cosine_similarity." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
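The DSSM notebook above stops once the network outputs a cosine similarity and only notes that training would minimize 1 - cosine_similarity. The sketch below shows one plausible way to wire that into a Gluon training step, assuming binary click labels (1.0 for the clicked result, 0.0 for a sampled negative). The loss formulation and the margin value are illustrative assumptions, not part of the original example; only the Trainer/autograd idioms are standard Gluon.

```python
# Hypothetical training step for the DSSM network defined above (illustrative only).
# Assumes `label` is an NDArray of shape (batch_size, 1) with 1.0 for clicked pairs, 0.0 for negatives.
from mxnet import autograd, gluon, nd

trainer = gluon.Trainer(network.collect_params(), 'adam', {'learning_rate': 1e-3})
margin = 0.3  # assumed hinge margin for non-clicked pairs

def train_step(user, query_text, title, image, label, batch_size):
    with autograd.record():
        sim = network(user, query_text, title, image)  # cosine similarity in [-1, 1]
        # Pull positives toward sim = 1, push negatives below the margin.
        loss = label * (1 - sim) + (1 - label) * nd.maximum(sim - margin, nd.zeros_like(sim))
        loss = loss.mean()
    loss.backward()
    trainer.step(batch_size)
    return loss.asscalar()
```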
intel-analytics/BigDL
apps/variational-autoencoder/using_variational_autoencoder_to_generate_digital_numbers.ipynb
apache-2.0
[ "Using Variational Autoencoder to Generate Digital Numbers\nVariational Autoencoders (VAEs) are very popular approaches to unsupervised learning of complicated distributions. In this example, we are going to use VAE to generate digital numbers.\n\nIn standard Autoencoder, we have an encoder network that takes in the original image and encode it into a vector of latent variables and a decoder network that takes in the latent vector and output an generated image that we hope to look similar to the original image.\n\nIn VAE, we constrain the latent variable to be unit gaussian, so that we can sample latent variables from a unit gaussian distribution, then use the decoder network to generate images.\nSo, we get the architecture above. Instead of generate the latent varibles directly, the encoder network output a mean vector and a variance (or log-variance) vector, and the decoder takes the sampled latent vector to generate the output image. And we add penalty on the latent distribution's KL Divergence to a unit gaussian distribution.\nDefine the Model", "# a bit of setup\nimport numpy as np\nfrom bigdl.dllib.nn.criterion import *\nfrom bigdl.dllib.feature.dataset import mnist\nfrom bigdl.dllib.keras.layers import *\nfrom bigdl.dllib.keras.models import Model\nfrom bigdl.dllib.keras.utils import *\nimport datetime as dt\n\nIMAGE_SIZE = 784\nIMAGE_ROWS = 28\nIMAGE_COLS = 28\nIMAGE_CHANNELS = 1\nlatent_size = 2\n\nfrom bigdl.dllib.nncontext import *\nsc = init_nncontext(\"Variational Autoencoder Example\")", "We are going to use a simple cnn network as our encoder and decoder. In decoder, we use SpatialFullConvolution (aka deconvolution or convolution transpose) layer to upsample the image to the original resolution.", "def get_encoder(latent_size):\n input0 = Input(shape=(IMAGE_CHANNELS, IMAGE_COLS, IMAGE_ROWS))\n \n #CONV\n conv1 = Convolution2D(16, 5, 5, input_shape=(IMAGE_CHANNELS, IMAGE_ROWS, IMAGE_COLS), border_mode='same',\n subsample=(2, 2))(input0)\n relu1 = LeakyReLU()(conv1)\n conv2 = Convolution2D(32, 5, 5, input_shape=(16, 14, 14), border_mode='same', subsample=(2, 2))(relu1)\n relu2 = LeakyReLU()(conv2) # 32,7,7\n reshape = Flatten()(relu2)\n \n #fully connected to output mean vector and log-variance vector\n reshape = Reshape([7*7*32])(relu2)\n z_mean = Dense(latent_size)(reshape)\n z_log_var = Dense(latent_size)(reshape)\n model = Model([input0],[z_mean,z_log_var])\n return model\n\ndef get_decoder(latent_size):\n input0 = Input(shape=(latent_size,))\n reshape0 = Dense(1568)(input0)\n reshape1 = Reshape((32, 7, 7))(reshape0)\n relu0 = Activation('relu')(reshape1)\n \n # use resize and conv layer instead of deconv layer\n resize1 = ResizeBilinear(14,14)(relu0)\n deconv1 = Convolution2D(16, 5, 5, subsample=(1, 1), activation='relu', border_mode = 'same', input_shape=(32, 14, 14))(resize1)\n resize2 = ResizeBilinear(28,28)(deconv1)\n deconv2 = Convolution2D(1, 5, 5, subsample=(1, 1), input_shape=(16, 28, 28), border_mode = 'same')(resize2)\n outputs = Activation('sigmoid')(deconv2)\n \n model = Model([input0],[outputs])\n return model\n\ndef get_autoencoder(latent_size):\n input0 = Input(shape=(IMAGE_CHANNELS, IMAGE_COLS, IMAGE_ROWS))\n encoder = get_encoder(latent_size)(input0)\n sample = GaussianSampler()(encoder)\n decoder_model = get_decoder(latent_size)\n decoder = decoder_model(sample)\n model = Model([input0],[encoder,decoder])\n return model,decoder_model\n\nautoencoder,decoder_model = get_autoencoder(2)", "Get the MNIST Dataset", "def get_mnist(sc, mnist_path):\n 
(train_images, train_labels) = mnist.read_data_sets(mnist_path = \"/tmp/mnist\", \"train\")\n train_images = np.reshape(train_images, (60000, 1, 28, 28))\n rdd_train_images = sc.parallelize(train_images)\n\n rdd_train_sample = rdd_train_images.map(lambda img:\n Sample.from_ndarray(\n (img > 128) * 1.0,\n [(img > 128) * 1.0, (img > 128) * 1.0]))\n return rdd_train_sample\n\nmnist_path = \"/tmp/mnist\" # please replace this\n\ntrain_data = get_mnist(sc, mnist_path)\n# (train_images, train_labels) = mnist.read_data_sets(mnist_path, \"train\")", "Define our Training Objective\nThe size_average parameter in BCECriterion should be False, because when size_average is True, the negative_log_likelyhood computed in BCECriterion is average over each observations as well as dimensions, while in the KLDCriterion the KL-Divergence is sumed over each observations, the loss woule be wrong.", "batch_size = 100\ncriterion = ParallelCriterion()\ncriterion.add(KLDCriterion(), 1.0)\ncriterion.add(BCECriterion(size_average=False), 1.0/batch_size)", "Compile the Model", "autoencoder.compile(optimizer=Adam(0.001), loss=criterion)\n\nimport os\nif not os.path.exists(\"./log\"):\n os.makedirs(\"./log\")\n \napp_name='vae-digits-'+dt.datetime.now().strftime(\"%Y%m%d-%H%M%S\")\nautoencoder.set_tensorboard(log_dir='./log/',app_name=app_name)\n\nprint(\"Saving logs to \", app_name)", "Start Training\nThis step may take a while depending on your system.", "autoencoder.fit(x=train_data,\n batch_size=batch_size,\n nb_epoch = 6)", "Let's show the learning curve.", "import matplotlib\nmatplotlib.use('Agg')\n%pylab inline\n\nimport matplotlib.pyplot as plt\nfrom matplotlib.pyplot import imshow\nimport numpy as np\nimport datetime as dt\n\n\n\ntrain_summary = TrainSummary('./log/', app_name)\nloss = np.array(train_summary.read_scalar(\"Loss\"))\nplt.figure(figsize = (12,12))\nplt.plot(loss[:,0],loss[:,1],label='loss')\nplt.xlim(0,loss.shape[0]+10)\nplt.grid(True)\nplt.title(\"loss\")", "You can also open tensorboard to see this curve.\nSample Some Images from the Decoder", "from matplotlib.pyplot import imshow\n\nimg = np.column_stack([decoder_model.forward(np.random.randn(1,2)).reshape(28,28) for s in range(8)])\nimshow(img, cmap='gray')", "Explore the Latent Space", "# This code snippet references this keras example (https://github.com/keras-team/keras/blob/master/examples/variational_autoencoder.py)\nfrom scipy.stats import norm\n# display a 2D manifold of the digits\nn = 15 # figure with 15x15 digits\ndigit_size = 28\nfigure = np.zeros((digit_size * n, digit_size * n))\n# linearly spaced coordinates on the unit square were transformed through the inverse CDF (ppf) of the Gaussian\n# to produce values of the latent variables z, since the prior of the latent space is Gaussian\ngrid_x = norm.ppf(np.linspace(0.05, 0.95, n))\ngrid_y = norm.ppf(np.linspace(0.05, 0.95, n))\n\nfor i, yi in enumerate(grid_x):\n for j, xi in enumerate(grid_y):\n z_sample = np.array([[xi, yi]])\n x_decoded = decoder_model.forward(z_sample)\n digit = x_decoded.reshape(digit_size, digit_size)\n figure[i * digit_size: (i + 1) * digit_size,\n j * digit_size: (j + 1) * digit_size] = digit\n\nplt.figure(figsize=(10, 10))\nplt.imshow(figure, cmap='Greys_r')\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
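The VAE notebook above uses BigDL's GaussianSampler and KLDCriterion as black boxes. For readers who want the math spelled out, the NumPy sketch below shows the standard formulas those pieces are assumed to correspond to: the reparameterization trick z = mu + sigma * eps and the closed-form KL divergence between N(mu, sigma^2) and a unit Gaussian. This is generic VAE math rather than BigDL's actual implementation, and the per-sample summation mirrors the note above that the KL term is summed over observations.

```python
import numpy as np

def reparameterize(z_mean, z_log_var, rng=np.random):
    """Sample z = mu + sigma * eps with eps ~ N(0, I) (the reparameterization trick)."""
    eps = rng.standard_normal(z_mean.shape)
    return z_mean + np.exp(0.5 * z_log_var) * eps

def kl_to_unit_gaussian(z_mean, z_log_var):
    """Closed-form KL( N(mu, sigma^2) || N(0, I) ), summed over latent dims per sample."""
    return -0.5 * np.sum(1 + z_log_var - z_mean ** 2 - np.exp(z_log_var), axis=-1)

# Example with the 2-dimensional latent space used above:
mu = np.array([[0.1, -0.4]])
log_var = np.array([[-1.0, 0.2]])
z = reparameterize(mu, log_var)
kl = kl_to_unit_gaussian(mu, log_var)
```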
w4zir/ml17s
lectures/lec04-linear-regression-example.ipynb
mit
[ "CSAL4243: Introduction to Machine Learning\nMuhammad Mudassir Khan ([email protected])\nLecture 4: Linear Regression and Gradient Descent Example\nOverview\n\nMachine Learning pipeline\nLinear Regression with one variable\nModel Representation\nCost Function\n\n\nGradient Descent\nLinear Regression Example\nRead data\nPlot data\nFind a line that best fit the data\nLets assume $\\theta_0 = 0$ and $\\theta_1=0$\nPlot it\n$\\theta_1$ vs Cost\nLets do it with Gradient Descent now\nPlot Convergence\nPredict output using trained model\nPlot Results\n\n\nResources\nCredits\n\n<br>\n<br>\nMachine Learning pipeline\n<img style=\"float: left;\" src=\"images/model.png\">\n\n\nx is called input variables or input features.\n\n\ny is called output or target variable. Also sometimes known as label.\n\n\nh is called hypothesis or model. \n\n\npair (x<sup>(i)</sup>,y<sup>(i)</sup>) is called a sample or training example\n\n\ndataset of all training examples is called training set.\n\n\nm is the number of samples in a dataset.\n\n\nn is the number of features in a dataset excluding label.\n\n\n<img style=\"float: left;\" src=\"images/02_02.png\", width=400> \n<br>\n<br>\nLinear Regression with one variable\nModel Representation\n\n\nModel is represented by h<sub>$\\theta$</sub>(x) or simply h(x)\n\n\nFor Linear regression with one input variable h(x) = $\\theta$<sub>0</sub> + $\\theta$<sub>1</sub>x\n\n\n<img style=\"float: left;\" src=\"images/02_01.png\">\n\n$\\theta$<sub>0</sub> and $\\theta$<sub>1</sub> are called weights or parameters.\nNeed to find $\\theta$<sub>0</sub> and $\\theta$<sub>1</sub> that maximizes the performance of model.\n\n<br>\n<br>\n<br>\nCost Function\nLet $\\hat{y}$ = h(x) = $\\theta$<sub>0</sub> + $\\theta$<sub>1</sub>x\nError in single sample (x,y) = $\\hat{y}$ - y = h(x) - y \nCummulative error of all m samples = $\\sum_{i=1}^{m} (h(x^i) - y^i)^2$\nFinally mean squared error or cost function = J($\\theta$) = $\\frac{1}{2m}\\sum_{i=1}^{m} (h(x^i) - y^i)^2$\n<img style=\"float: left;\" src=\"images/03_01.png\", width=300> <img style=\"float: right;\" src=\"images/03_02.png\", width=300>\n<br>\n<br>\nGradient Descent\nGradient descent equation:\n$\\theta_j := \\theta_j - \\alpha \\frac{\\partial}{\\partial \\theta_j} J(\\theta_0, \\theta_1)$\nLinear regression Cost function:\nJ($\\theta$) = $\\frac{1}{2m}\\sum_{i=1}^{m} (h(x^i) - y^i)^2$\n<br>\nReplacing J($\\theta$) in gradient descent equation:\n\\begin{align} \\text{repeat until convergence: } \\lbrace & \\newline \\theta_0 := & \\theta_0 - \\alpha \\frac{1}{m} \\sum\\limits_{i=1}^{m}(h_\\theta(x_{i}) - y_{i}) \\newline \\theta_1 := & \\theta_1 - \\alpha \\frac{1}{m} \\sum\\limits_{i=1}^{m}\\left((h_\\theta(x_{i}) - y_{i}) x_{i}\\right) \\newline \\rbrace& \\end{align}\n\n<br>\n<img style=\"float: left;\" src=\"images/03_04.gif\">\n<br>\n<br>\nLinear Regression Example\n| x | y | \n| ------------- |:-------------:| \n| 1 | 0.8 | \n| 2 | 1.6 | \n| 3 | 2.4 | \n| 4 | 3.2 | \nRead data", "%matplotlib inline\nimport pandas as pd\nimport numpy as np\nfrom sklearn import linear_model\nimport matplotlib.pyplot as plt\n\n# read data in pandas frame\ndataframe = pd.read_csv('datasets/example1.csv', encoding='utf-8')\n\n# assign x and y\nX = np.array(dataframe[['x']])\ny = np.array(dataframe[['y']])\n\nm = y.size # number of training examples\n\n# check data by printing first few rows\ndataframe.head()", "Plot data", "#visualize results\nplt.scatter(X, y)\nplt.title(\"Dataset\")\nplt.xlabel(\"x\")\nplt.ylabel(\"y\")\nplt.show()", "Find 
a line that best fit the data", "#best fit line\n\ntmpx = np.array([0, 1, 2, 3, 4])\ny1 = 0.2*tmpx\ny2 = 0.7*tmpx\ny3 = 1.5*tmpx\n\n\nplt.scatter(X, y)\nplt.plot(tmpx,y1)\nplt.plot(tmpx,y2)\nplt.plot(tmpx,y3)\nplt.title(\"Best fit line\")\nplt.xlabel(\"x\")\nplt.ylabel(\"y\")\nplt.show()", "Lets assume $\\theta_0 = 0$ and $\\theta_1=0$\nModel h(x) = $\\theta_0$ + $\\theta_1$x = 0\nCost function J($\\theta$) = $\\frac{1}{2m}\\sum_{i=1}^{m} (h(x^i) - y^i)^2$ = $\\frac{1}{2m}\\sum_{i=1}^{m} (0 - y^i)^2$", "theta0 = 0\ntheta1 = 0\n\ncost = 0\nfor i in range(m):\n hx = theta1*X[i,0] + theta0\n cost += pow((hx - y[i,0]),2)\n\ncost = cost/(2*m) \nprint (cost)", "plot it", "# predict using model\ny_pred = theta1*X + theta0\n\n# plot\nplt.scatter(X, y)\nplt.plot(X, y_pred)\nplt.title(\"Line for theta1 = 0\")\nplt.xlabel(\"x\")\nplt.ylabel(\"y\")\nplt.show()", "Plot $\\theta1$ vs Cost", "# save theta1 and cost in a vector\ncost_log = []\ntheta1_log = []\n\ncost_log.append(cost)\ntheta1_log.append(theta1)\n\n# plot\nplt.scatter(theta1_log, cost_log)\nplt.title(\"Theta1 vs Cost\")\nplt.xlabel(\"Theta1\")\nplt.ylabel(\"Cost\")\nplt.show()", "Lets assume $\\theta_0 = 0$ and $\\theta_1=1$\nModel h(x) = $\\theta_0$ + $\\theta_1$x = x\nCost function J($\\theta$) = $\\frac{1}{2m}\\sum_{i=1}^{m} (h(x^i) - y^i)^2$ = $\\frac{1}{2m}\\sum_{i=1}^{m} (x^i - y^i)^2$", "theta0 = 0\ntheta1 = 1\n\ncost = 0\nfor i in range(m):\n hx = theta1*X[i,0] + theta0\n cost += pow((hx - y[i,0]),2)\n\ncost = cost/(2*m) \nprint (cost)", "plot it", "# predict using model\ny_pred = theta1*X + theta0\n\n# plot\nplt.scatter(X, y)\nplt.plot(X, y_pred)\nplt.title(\"Line for theta1 = 1\")\nplt.xlabel(\"x\")\nplt.ylabel(\"y\")\nplt.show()", "Plot $\\theta1$ vs Cost again", "# save theta1 and cost in a vector\ncost_log.append(cost)\ntheta1_log.append(theta1)\n\n# plot\nplt.scatter(theta1_log, cost_log)\nplt.title(\"Theta1 vs Cost\")\nplt.xlabel(\"Theta1\")\nplt.ylabel(\"Cost\")\nplt.show()", "Lets assume $\\theta_0 = 0$ and $\\theta_1=2$\nModel h(x) = $\\theta_0$ + $\\theta_1$x = 2x\nCost function J($\\theta$) = $\\frac{1}{2m}\\sum_{i=1}^{m} (h(x^i) - y^i)^2$ = $\\frac{1}{2m}\\sum_{i=1}^{m} (2x^i - y^i)^2$", "theta0 = 0\ntheta1 = 2\n\ncost = 0\nfor i in range(m):\n hx = theta1*X[i,0] + theta0\n cost += pow((hx - y[i,0]),2)\n\ncost = cost/(2*m) \nprint (cost)\n\n\n# predict using model\ny_pred = theta1*X + theta0\n\n# plot\nplt.scatter(X, y)\nplt.plot(X, y_pred)\nplt.title(\"Line for theta1 = 2\")\nplt.xlabel(\"x\")\nplt.ylabel(\"y\")\nplt.show()\n\n\n\n# save theta1 and cost in a vector\ncost_log.append(cost)\ntheta1_log.append(theta1)\n\n# plot\nplt.scatter(theta1_log, cost_log)\nplt.title(\"theta1 vs Cost\")\nplt.xlabel(\"Theta1\")\nplt.ylabel(\"Cost\")\nplt.show()", "Run it for a while", "theta0 = 0\ntheta1 = -3.1\n\ncost_log = []\ntheta1_log = [];\n\ninc = 0.1\nfor j in range(61):\n theta1 = theta1 + inc;\n \n cost = 0\n for i in range(m):\n hx = theta1*X[i,0] + theta0\n cost += pow((hx - y[i,0]),2)\n\n cost = cost/(2*m) \n\n cost_log.append(cost)\n theta1_log.append(theta1)\n", "plot $\\theta_1$ vs Cost", "plt.scatter(theta1_log, cost_log)\nplt.title(\"theta1 vs Cost\")\nplt.xlabel(\"Theta1\")\nplt.ylabel(\"Cost\")\nplt.show()", "<br>\n<br>\nLets do it with Gradient Descent now", "theta0 = 0\ntheta1 = 2\n\nalpha = 0.1\ninterations = 100\n\ncost_log = []\ntheta_log = [];\n\ninc = 0.1\nfor j in range(interations):\n \n cost = 0\n grad = 0\n for i in range(m):\n hx = theta1*X[i,0] + theta0 \n cost += pow((hx - y[i,0]),2)\n grad += ((hx - 
y[i,0]))*X[i,0]\n\n cost = cost/(2*m)\n grad = grad/(2*m) \n theta1 = theta1 - alpha*grad\n \n \n cost_log.append(cost)\n theta_log.append(theta1)\n\ntheta_log", "Plot Convergence", "plt.plot(cost_log)\nplt.title(\"Convergence of Cost Function\")\nplt.xlabel(\"Iteration number\")\nplt.ylabel(\"Cost function\")\nplt.show()", "Predict output using trained model", "# predict using model\ny_pred = theta1*X + theta0\n\n# plot\nplt.scatter(X, y)\nplt.plot(X, y_pred)\nplt.title(\"Line for Theta1 from Gradient Descent\")\nplt.xlabel(\"x\")\nplt.ylabel(\"y\")\nplt.show()", "Resources\nCourse website: https://w4zir.github.io/ml17s/\nCourse resources\nCredits\nRaschka, Sebastian. Python machine learning. Birmingham, UK: Packt Publishing, 2015. Print.\nAndrew Ng, Machine Learning, Coursera\nLucas Shen\nDavid Kaleko" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
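A small companion to the gradient-descent walkthrough above: the lecture code keeps theta0 fixed at zero and updates theta1 with explicit Python loops. The sketch below is a vectorized NumPy restatement that updates both parameters, following the update equations quoted in the lecture (gradients averaged over the m samples). It reuses the X and y arrays loaded from example1.csv and is meant as an equivalent reformulation, not a replacement for the notebook code.

```python
# Vectorized batch gradient descent for h(x) = theta0 + theta1 * x,
# following the update equations shown earlier (1/m averaging of the gradients).
import numpy as np

def gradient_descent(X, y, alpha=0.1, iterations=100):
    x = X.ravel()
    t = y.ravel()
    m = t.size
    theta0, theta1 = 0.0, 0.0
    cost_log = []
    for _ in range(iterations):
        h = theta0 + theta1 * x                      # predictions for all samples at once
        err = h - t
        cost_log.append((err ** 2).sum() / (2 * m))  # cost J(theta)
        theta0 -= alpha * err.sum() / m              # dJ / d theta0
        theta1 -= alpha * (err * x).sum() / m        # dJ / d theta1
    return theta0, theta1, cost_log

theta0, theta1, cost_log = gradient_descent(X, y)
```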
pxcandeias/py-notebooks
Rayleigh_damping.ipynb
mit
[ "<a id='top'></a>\nRayleigh damping in multi-degree-of-freedom (MDOF) systems\nIntroduction\nDynamic equilibrium equation\nComputational lab\nMass proportional damping\nStiffness proportional damping\nRayleigh damping\nExample\nReferences\nOdds and ends\nIntroduction\nIn structural dynamics, mass and stiffness can be computed from the geometric characteristics and material properties of a structure but damping can only be estimated based on the fact that structural dynamic responses are, well, damped. It is usually assumed that such damping is viscous, in the absence of more accurate information, which fits nicely in the solution of the dynamic equilibrium equation.\nThis python notebook will explore the influence of the following special cases in the computation of the dynamic response of MDOF systems:\n\nMass proportional damping\nStiffness proportional damping\nRayleigh damping\n\nWhat makes these cases special is the fact that the damping matrix is orthogonal to the modal matrix and, therefore, it is diagonalizable. This constitutes a clear advantage when it comes to solving the dynamic equilibrium equation system for MDOF systems.\nBack to top\nDynamic equilibrium equation\nIn structural dynamics the second order differential dynamic equilibrium equation can be written in terms of generalized coordinates (d[isplacement]) and their first (v[elocity]) and second (a[cceleration]) time derivatives:\n\\begin{equation}\n\\mathbf{M} \\times \\mathbf{a(t)} + \\mathbf{C} \\times \\mathbf{v(t)} + \\mathbf{K} \\times \\mathbf{d(t)} = \\mathbf{F(t)}\n\\end{equation}\nwhere:\n\n$\\mathbf{M}$ is the mass matrix\n$\\mathbf{C}$ is the damping matrix\n$\\mathbf{K}$ is the stiffness matrix\n$\\mathbf{a(t)}$ is the acceleration vector\n$\\mathbf{v(t)}$ is the velocity vector\n$\\mathbf{d(t)}$ is the displacement vector\n$\\mathbf{F(t)}$ is the force input vector \n\nIn a MDOF systems all these matrices are of size $NDOF \\times NDOF$, where $NDOF$ is the number of generalized degrees of freedom. Carrying out the usual coordinate transformation from generalized coordinates to modal coordinates, $\\mathbf{d(t)} = \\mathbf{\\Phi} \\times \\mathbf{q(t)}$, and pre-multiplying by the transpose of the modal matrix, $\\mathbf{\\Phi}^T$, one obtains the following:\n\\begin{equation}\n\\mathbf{\\Phi}^T \\times \\mathbf{M} \\times \\mathbf{\\Phi} \\times \\mathbf{\\ddot q(t)} + \\mathbf{\\Phi}^T \\times \\mathbf{C} \\cdot \\mathbf{\\Phi} \\times \\mathbf{\\dot q(t)} + \\mathbf{\\Phi}^T \\times \\mathbf{K} \\times \\mathbf{\\Phi} \\times \\mathbf{q(t)} = \\mathbf{\\Phi}^T \\times \\mathbf{F(t)}\n\\end{equation}\nwhere:\n\n$\\mathbf{\\Phi}^T \\times \\mathbf{M} \\times \\mathbf{\\Phi} = \\mathbf{M_n}$ is a diagonal matrix (with modal mass in the main diagonal)\n$\\mathbf{\\Phi}^T \\times \\mathbf{K} \\times \\mathbf{\\Phi} = \\mathbf{K_n}$ is a diagonal matrix (with modal stiffness in the main diagonal)\n$\\mathbf{\\Phi}^T \\times \\mathbf{F(t)} = \\mathbf{F_n(t)}$ is a column vector (with modal excitation functions) \n\nIn what concerns the product $\\mathbf{\\Phi}^T \\times \\mathbf{C} \\times \\mathbf{\\Phi}$ it will be a diagonal matrix (with modal damping in the main diagonal) only in certain circunstances. This is achieved when damping is proportional to either the mass or the stiffness or a combination of both, which is usually referred to as Rayleigh damping. 
In its generalised form it is represented by the Caughey series (see References section for more information):\n\\begin{equation}\n\\mathbf{C} = \\mathbf{M} \\times \\sum_{j=0}^{N-1}{\\alpha_j \\cdot \\left[ \\mathbf{M}^{-1} \\times \\mathbf{K} \\right]^j}\n\\end{equation}\nWhen this happens, the previous dynamic equilibrium equation system transforms into a set of $NDOF$ one-degree-of-freedom independent dynamic equilibrium equations (modal equations):\n\\begin{equation}\n\\mathbf{M_n} \\times \\mathbf{\\ddot q(t)} + \\mathbf{C_n} \\times \\mathbf{\\dot q(t)} + \\mathbf{K_n} \\times \\mathbf{q(t)} = \\mathbf{F_n(t)}\n\\end{equation}\nThe $NDOF$ independent modal equilibrium equations can be rewritten as:\n\\begin{equation}\n\\mathbf{\\ddot q_n(t)} + \\mathbf{2 \\cdot \\zeta_n \\cdot \\omega_n} \\cdot \\mathbf{\\dot q_n(t)} + \\mathbf{\\omega_n^2} \\cdot \\mathbf{q_n(t)} = \\mathbf{a_n(t)}\n\\end{equation}\nwhere:\n\n$\\zeta_n$ is the modal damping coefficient of mode $N$\n$\\omega_n$ is the modal angular frequency of mode $N$\n$a_n(t)$ is the modal excitation of mode $N$ \n\nThe solution of these $NDOF$ independent dynamic equilibrium equations will follow the standard procedures for the one-degree-of-freedom case. The final solution for the MDOF system ($d(t)$) is obtained by superposing the $NDOF$ modal solutions $q_n(t)$.\nWe will look now at three different cases where the damping matrix is diagonalizable.\nBack to top\nComputational lab\nBefore proceeding any further, let us set the computational lab for this Python notebook:", "import sys\nimport math\nimport numpy as np\nimport matplotlib as mpl\nprint('System: {}'.format(sys.version))\nfor package in (np, mpl):\n print('Package: {} {}'.format(package.__name__, package.__version__))", "We will produce some plots based on a frequency range to illustrate the concepts:", "import matplotlib.pyplot as plt\n%matplotlib inline\n\nff = np.linspace(0.01, 6., num=600)\nwn = 2.*np.pi*ff", "Back to top\nMass proportional damping\nMass proportional damping means that the damping matrix is somehow a multiple of the mass matrix:\n\\begin{equation}\n\\mathbf{C} = \\alpha \\cdot \\mathbf{M}\n\\end{equation}\nwhere $\\alpha$ is the constant of mass proportionality. 
In these circunstances, the dynamic equilibrium equation can be written as:\n\\begin{equation}\n\\mathbf{M} \\times \\mathbf{a(t)} + \\alpha \\cdot \\mathbf{M} \\times \\mathbf{v(t)} + \\mathbf{K} \\times \\mathbf{d(t)} = \\mathbf{F(t)}\n\\end{equation}\nProceeding the same as above, one obtains the $NDOF$ independent modal equilibrium equations:\n\\begin{equation}\n\\mathbf{M_n} \\times \\mathbf{\\ddot q_n(t)} + \\alpha \\cdot \\mathbf{M_n} \\times \\mathbf{\\dot q_n(t)} + \\mathbf{K_n} \\times \\mathbf{q_n(t)} = \\mathbf{F_n(t)}\n\\end{equation}\nor, equivalently:\n\\begin{equation}\n\\mathbf{\\ddot q_n(t)} + \\alpha \\cdot \\mathbf{\\dot q_n(t)} + \\mathbf{\\omega_n^2} \\cdot \\mathbf{q_n(t)} = \\mathbf{a_n(t)}\n\\end{equation}\nComparing expressions, one obtains\n\\begin{equation}\n\\alpha = 2 \\cdot \\zeta_n \\cdot \\omega_n \\Leftrightarrow \\zeta_n = \\frac{\\alpha}{2 \\cdot \\omega_n}\n\\end{equation}\nfrom where it can be seen that the mass proportional damping is a hyperbolic function of the vibration frequency $\\omega_n$.", "alpha = 0.1\nzn_a = alpha/(2.*wn)\nplt.plot(wn, zn_a, label='mass proportional')\nplt.xlabel('Vibration frequency $\\omega_n$ [rad/s]')\nplt.ylabel('Damping coefficient $\\zeta_n$ [-]')\nplt.legend(loc='best')\nplt.grid(True)\nplt.xlim([0, 35.])\nplt.ylim([0, 0.2])\nplt.show()", "Back to top\nStiffness proportional damping\nStiffness proportional damping means that damping matrix is somehow a multiple of the stiffness matrix:\n\\begin{equation}\n\\mathbf{C} = \\beta \\cdot \\mathbf{K}\n\\end{equation}\nwhere $\\beta$ is the constant of stiffness proportionality. In these circunstances, the dynamic equilibrium equation can be written as:\n\\begin{equation}\n\\mathbf{M} \\times \\mathbf{a(t)} + \\beta \\cdot \\mathbf{K} \\times \\mathbf{v(t)} + \\mathbf{K} \\times \\mathbf{d(t)} = \\mathbf{F(t)}\n\\end{equation}\nProceeding the same as above, one obtains the $NDOF$ independent modal equilibrium equations:\n\\begin{equation}\n\\mathbf{M_n} \\times \\mathbf{\\ddot q_n(t)} + \\beta \\cdot \\mathbf{K_n} \\times \\mathbf{\\dot q_n(t)} + \\mathbf{K_n} \\times \\mathbf{q_n(t)} = \\mathbf{F_n(t)}\n\\end{equation}\nor, equivalently:\n\\begin{equation}\n\\mathbf{\\ddot q_n(t)} + \\beta \\cdot \\mathbf{\\omega_n^2} \\cdot \\mathbf{\\dot q_n(t)} + \\mathbf{\\omega_n^2} \\cdot \\mathbf{q_n(t)} = \\mathbf{a_n(t)}\n\\end{equation}\nComparing expressions, one obtains\n\\begin{equation}\n\\beta \\cdot \\omega_n^2 = 2 \\cdot \\zeta \\cdot \\omega_n \\Leftrightarrow \\zeta_n = \\frac{\\omega_n \\cdot \\beta}{2}\n\\end{equation}\nfrom where it can be seen that the stiffness proportional damping is a linear function of the vibration frequency $\\omega_n$.", "beta = 0.005\nzn_b = (beta*wn)/2.\nplt.plot(wn, zn_b, label='stiffness proportional')\nplt.xlabel('Vibration frequency $\\omega_n$ [rad/s]')\nplt.ylabel('Damping coefficient $\\zeta_n$ [-]')\nplt.legend(loc='best')\nplt.grid(True)\nplt.xlim([0, 35.])\nplt.ylim([0, 0.2])\nplt.show()", "Back to top\nRayleigh damping\nWhen Rayleigh damping is considered it means that the damping coefficient is a combination of the two previous ones, that is, it is a multiple of mass and stifnness:\n\\begin{equation}\n\\mathbf{C} = \\alpha \\cdot \\mathbf{M} + \\beta \\cdot \\mathbf{K}\n\\end{equation}\nwhere $\\alpha$ and $\\beta$ have the previous meanings. 
In these circunstances, the dynamic equilibrium equation can be written as:\n\\begin{equation}\n\\mathbf{M} \\times \\mathbf{a(t)} + \\left[ \\alpha \\cdot \\mathbf{M} + \\beta \\cdot \\mathbf{K} \\right] \\times \\mathbf{v(t)} + \\mathbf{K} \\times \\mathbf{d(t)} = \\mathbf{F(t)}\n\\end{equation}\nProceeding the same as above, one obtains the $NDOF$ independent modal equilibrium equations:\n\\begin{equation}\n\\mathbf{M_n} \\times \\mathbf{\\ddot q_n(t)} + \\left[ \\alpha \\cdot \\mathbf{M_n} + \\beta \\cdot \\mathbf{K_n} \\right] \\times \\mathbf{\\dot q_n(t)} + \\mathbf{K_n} \\times \\mathbf{q_n(t)} = \\mathbf{F_n(t)}\n\\end{equation}\nor, equivalently:\n\\begin{equation}\n\\mathbf{\\ddot q_n(t)} + \\left[ \\alpha + \\beta \\cdot \\mathbf{\\omega_n^2} \\right] \\cdot \\mathbf{\\dot q_n(t)} + \\mathbf{\\omega_n^2} \\cdot \\mathbf{q_n(t)} = \\mathbf{a_n(t)}\n\\end{equation}\nComparing expressions, one obtains\n\\begin{equation}\n\\alpha + \\beta \\cdot \\omega_n^2 = 2 \\cdot \\zeta \\cdot \\omega_n \\Leftrightarrow \\zeta_n = \\frac{\\alpha}{2 \\cdot \\omega_n} + \\frac{\\omega_n \\cdot \\beta}{2}\n\\end{equation}\nfrom where it can be seen that the Rayleigh damping is the sum of the previous linear and hyperbolic functions of the vibration frequency $\\omega_n$.", "plt.hold(True)\nplt.plot(wn, zn_a+zn_b, label='Rayleigh damping')\nplt.plot(wn, zn_a, label='mass proportional')\nplt.plot(wn, zn_b, label='stiffness proportional')\nplt.xlabel('Vibration frequency $\\omega_n$ [rad/s]')\nplt.ylabel('Damping coefficient $\\zeta_n$ [-]')\nplt.legend(loc='best')\nplt.grid(True)\nplt.xlim([0, 35.])\nplt.ylim([0, 0.2])\nplt.show()", "When the Rayleigh damping is used in MDOF systems, the coefficients $\\alpha$ and $\\beta$ can be computed in order to give an appropriate damping coefficient value for a given frequency range, related to the vibration modes of interest for the dynamic analysis. 
This is achieved by setting a simple two equation system whose solution yields the values of $\\alpha$ and $\\beta$:\n$$\n\\left[\\begin{array}{cc}\n\\zeta_1 \\ \\zeta_2\n\\end{array}\\right]\n=\n\\left[\\begin{array}{cc}\n\\frac{1}{2 \\cdot \\omega_1} && \\frac{\\omega_1}{2} \\ \\frac{1}{2 \\cdot \\omega_2} && \\frac{\\omega_2}{2}\n\\end{array}\\right]\n\\times\n\\left[\\begin{array}{cc}\n\\alpha \\ \\beta\n\\end{array}\\right]\n$$\nBack to top\nExample\nLet us consider a MDOF system where there are several vibration modes of interest, ranging from 1 to 4 Hz, and that we want compute the dynamic response considering a damping coefficient of 2% for the first mode and 5% for the last mode.", "f1, f2 = 1., 4.\nz1, z2 = 0.02, 0.05\nw1 = 2.*np.pi*f1\nw2 = 2.*np.pi*f2\nalpha, beta = np.linalg.solve([[1./(2.*w1), w1/2.], [1./(2.*w2), w2/2.]], [z1, z2])\nprint('Alpha={:.6f}\\nBeta={:.6f}'.format(alpha, beta))", "We can check that the Rayleigh damping assumes the required values at the desired frequencies, although may vary considerably for other frequencies:", "zn_a = alpha/(2.*wn)\nzn_b = (beta*wn)/2.\nplt.hold(True)\nplt.plot(wn, zn_a+zn_b, label='Rayleigh damping')\nplt.plot(wn, zn_a, label='mass proportional')\nplt.plot(wn, zn_b, label='stiffness proportional')\nplt.plot(w1, z1, 'o')\nplt.plot(w2, z2, 'o')\nplt.axvline(w1, ls=':')\nplt.axhline(z1, ls=':')\nplt.axvline(w2, ls=':')\nplt.axhline(z2, ls=':')\nplt.xlabel('Vibration frequency $\\omega_n$ [rad/s]')\nplt.ylabel('Damping coefficient $\\zeta_n$ [-]')\nplt.legend(loc='best')\nplt.xlim([0, 35.])\nplt.ylim([0, 0.2])\nplt.show()", "Back to top\nReferences\nCaughey, T. K. and O’Kelly, M. E. J. (1965). “Classical normal modes in damped linear dynamic systems.” Transactions of ASME, Journal of Applied Mechanics, 32, 583–588.\nClough, Ray W., and Penzien, Joseph, Dynamics of Structures, 2nd ed. (revised), Computers and Structures, 2003.\nBack to top\nOdds and ends\nThis notebook was created by Paulo Xavier Candeias.\nBack to top" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
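To round off the Rayleigh-damping example above, the helper below wraps the two-equation solve from the notebook into a reusable function and exposes the resulting damping curve as a callable. The linear-algebra step is exactly the one used in the notebook; the function names and the vectorized evaluation are the only additions.

```python
import numpy as np

def rayleigh_coefficients(f1, zeta1, f2, zeta2):
    """Solve for alpha (mass-proportional) and beta (stiffness-proportional) so that the
    Rayleigh damping ratio equals zeta1 at frequency f1 [Hz] and zeta2 at f2 [Hz]."""
    w1, w2 = 2 * np.pi * f1, 2 * np.pi * f2
    A = np.array([[1 / (2 * w1), w1 / 2],
                  [1 / (2 * w2), w2 / 2]])
    alpha, beta = np.linalg.solve(A, [zeta1, zeta2])
    return alpha, beta

def rayleigh_zeta(wn, alpha, beta):
    """Damping ratio zeta_n = alpha/(2*wn) + beta*wn/2 for angular frequencies wn [rad/s]."""
    wn = np.asarray(wn, dtype=float)
    return alpha / (2 * wn) + beta * wn / 2

alpha, beta = rayleigh_coefficients(1.0, 0.02, 4.0, 0.05)  # same targets as the example
```

Evaluating rayleigh_zeta at 2*pi*1.0 and 2*pi*4.0 rad/s should reproduce the target ratios 0.02 and 0.05, which is a quick sanity check on the fitted coefficients.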
zzsza/Datascience_School
30. 딥러닝/06. 단어 임베딩의 원리와 gensim.word2vec 사용법.ipynb
mit
[ "단어 임베딩의 원리와 gensim.word2vec 사용법\n단어 임베딩(Word Embedding)이란 텍스트를 구성하는 하나의 단어를 수치화하는 방법의 일종이다.\n텍스트 분석에서 흔히 사용하는 방식은 단어 하나에 인덱스 정수를 할당하는 Bag of Words 방법이다. 이 방법을 사용하면 문서는 단어장에 있는 단어의 갯수와 같은 크기의 벡터가 되고 단어장의 각 단어가 그 문서에 나온 횟수만큼 벡터의 인덱스 위치의 숫자를 증가시킨다.\n즉 단어장이 \"I\", \"am\", \"a\", \"boy\", \"girl\" 다섯개의 단어로 이루어진 경우 각 단어에 다음과 같이 숫자를 할당한다.\n\"I\": 0\n\"am\": 1\n\"a\": 2\n\"boy\": 3 \n\"girl\": 4\n이 때 \"I am a girl\" 이라는 문서는 다음과 같이 벡터로 만들 수 있다.\n$$ [1 \\; 1 \\; 1 \\; 0 \\; 1] $$\n단어 임베딩은 하나의 단어를 하나의 인덱스 정수가 아니라 실수 벡터로 나타낸다. 예를 들어 2차원 임베딩을 하는 경우 다음과 같은 숫자 벡터가 될 수 있다.\n\"I\": (0.3, 0.2)\n\"am\": (0.1, 0.8)\n\"a\": (0.5, 0.6)\n\"boy\": (0.2, 0.9) \n\"girl\": (0.4, 0.7)\n단어 임베딩이 된 경우에는 각 단어 벡터를 합치거나(concatenation) 더하는(averaging, normalized Bag of Words) 방식으로 전체 문서의 벡터 표현을 구한다.\nFeed-Forward 신경망 언어 모형 (Neural Net Language Model)\n이러한 단어 임베딩은 신경망을 이용하여 언어 모형을 만들려는 시도에서 나왔다. 자세한 내용은 다음 논문을 참고한다.\n\n\"A Neural Probabilistic Language Model\", Bengio, et al. 2003\n\nhttp://www.jmlr.org/papers/volume3/bengio03a/bengio03a.pdf\n\n\n\"Efficient Estimation of Word Representations in Vector Space\", Mikolov, et al. 2013\n\n\nhttps://arxiv.org/pdf/1301.3781v3.pdf\n\n\n\"word2vec Parameter Learning Explained\", Xin Rong, \n\nhttp://www-personal.umich.edu/~ronxin/pdf/w2vexp.pdf\n\nV개의 단어를 가지는 단어장이 있을 때, 단어를 BOW 방식으로 크기 V인 벡터로 만든 다음 다음 그림과 같이 하나의 은닉층(Hidden Layer)을 가지는 신경망을 사용하여 특정 단어 열(word sequence)이 주어졌을 때 다음에 나올 단어를 예측하는 문제를 생각해 보자. 입력과 출력은 모두 BOW 방식으로 인코딩되어 있다.\n<img src=\"https://datascienceschool.net/upfiles/cd100ec8d3d6476e9522ead4c2acf6a2.png\" style=\"width: 50%;\">\n<small>이미지 출처: \"word2vec Parameter Learning Explained\", Xin Rong</small>\n입력 $x$가 들어가면 입력 가중치 행렬 $W^T$이 곱해져서 은닉층 벡터 $h$가 되는데 $x$가 one-hot-encoding 된 값이므로 $h$ 벡터는 입력 가중치 행렬 $W$의 행 하나가 된다. \n$$ h = W^T x = v^T_i $$\n여기에서 $i$는 입력 벡터 $x$ 의 값이 1인 원소의 인덱스이다. 즉, BOW 단어장에서 $i$번째 단어를 뜻한다.\n벡터 $h$는 다시 출력 가중치 행렬 $W'^T$와 곱해져서 출력 벡터 $y$가 된다. \n$$ y = W'^T h $$\n출력 가중치 행렬 $W'$의 $j$번째 열을 $v_j$라고 하면 출력 벡터 $y$의 $j$번째 원소의 값은 다음과 같다.\n$$ y_j = v_j'^T h $$\n가중치 행렬을 갱신하는 최적화 공식을 살펴본다. 자세한 유도과정은 논문을 참조한다.\n우선 출력 가중치 행렬의 갱신 공식은 다음과 같다.\n$$ v_j'^{\\text{(new)}} = v_j'^{\\text{(old)}} - \\eta \\cdot e_j \\cdot h = v_j'^{\\text{(old)}} - \\eta \\cdot e_j \\cdot v_i^T $$ \n이 식에서 $\\eta$는 최적화 스텝 사이즈, $e_j$는 출력 오차가 된다. 이 공식에 따르면 벡터 $v'_j$는 $v_j$ 방향으로 수렴해 간다. 즉, $i$번째 단어와 $j$번째 단어가 연속하는 관계라면 $v'_j$가 $v_i$와 유사한 위치로 수렴한다는 뜻이다.\n다음으로 입력 가중치 행렬의 갱신 공식은 다음과 같다.\n$$ v_i^{\\text{(new)}} = v_i^{\\text{(old)}} - \\eta \\sum_k e_j \\cdot w'_{ik}$$ \n이 공식에 따르면 벡터 $v_i$는 여러 $v'_k$ 벡터의 가중합으로 수렴해 간다. 이렇게 단어간의 관계에 의해 $i$번째 단어를 뜻하는 $v_i$의 값들이 연관성을 가지게 되는데 이 $v_i$ 벡터 값을 해당 단어에 대한 분산 표현 (distributed representation) , 벡터 표현 (vector representation) 또는 단어 임베딩 (word embedding)이라고 한다.\n<img src=\"https://www.tensorflow.org/versions/r0.11/images/linear-relationships.png\" style=\"width: 100%;\">\n<small>이미지 출처: https://www.tensorflow.org/versions/master/tutorials/word2vec/index.html</small>\nCBOW (Continuous Bag of Words) Embedding\n위의 방식은 하나의 단어로부터 다음에 오는 단어를 예측하는 문제였다. 이러한 문제를 단어 하나짜리 문맥(single-word context)를 가진다고 한다. \nCBOW (Continuous Bag of Words) 방식은 복수 단어 문맥(multi-word context)에 대한 문제 즉, 여러개의 단어를 나열한 뒤 이와 관련된 단어를 추정하는 문제이다. 즉, 문자에서 나오는 $n$개의 단어 열로부터 다음 단어를 예측하는 문제가 된다. 예를 들어\n\nthe quick brown fox jumped over the lazy dog\n\n라는 문장에서 (the, quick, brown) 이라는 문맥이 주어지면 fox라는 단어를 예측해야 한다.\nCBOW는 다음과 같은 신경망 구조를 가진다. 
여기에서 각 문맥 단어를 은닉층으로 투사하는 가중치 행렬은 모든 단어에 대해 공통으로 사용한다.\n<img src=\"https://datascienceschool.net/upfiles/3cdbdbfe1c8a4742aaf2e0c40917948f.png\" style=\"width: 50%;\">\n<small>이미지 출처: \"word2vec Parameter Learning Explained\", Xin Rong</small>\nSkip-Gram Embedding\nSkip-Gram 방식은 CBOW 방식과 반대로 특정한 단어로부터 문맥이 될 수 있는 단어를 예측한다. 보통 입력 단어 주변의 $k$개 단어를 문맥으로 보고 예측 모형을 만드는데 이 $k$ 값을 window size 라고 한다.\n위 문장에서 window size $k=1$인 경우,\n\nquick -> the\nquick -> brown\nbrown -> quick\nbrown -> fox\n\n과 같은 관계를 예측할 수 있어야 한다.\n<img src=\"https://datascienceschool.net/upfiles/8f8ebfa0ebb34eb584d24d59fe60a12d.png\" style=\"width: 50%;\">\n<small>이미지 출처: \"word2vec Parameter Learning Explained\", Xin Rong</small>\nword2vec\nword2vec은 CBOW 방식과 Skip-Gram 방식의 단어 임베딩을 구현한 C++ 라이브러리로 구글에 있던 Mikolov 등이 개발하였다.\n파이썬에서는 gensim이라는 패키지에 Word2Vec이라는 클래스로 구현되어 있다. nltk의 영화 감상 corpus를 기반으로 Word2Vec 사용법을 살펴보자.\n우선 단어 임베딩을 위한 코퍼스를 만든다. 코퍼스는 리스트의 리스트 형태로 구현되어야 한다. 내부 리스트는 하나의 문장을 이루는 단어 열이 된다.", "from nltk.corpus import movie_reviews\nsentences = [list(s) for s in movie_reviews.sents()]\n\nsentences[0]", "다음으로 이 코퍼스를 입력 인수로 하여 Word2Vec 클래스 객체를 생성한다. 이 시점에 트레이닝이 이루어진다.", "from gensim.models.word2vec import Word2Vec\n\n%%time\nmodel = Word2Vec(sentences)", "트레이닝이 완료되면 init_sims 명령으로 필요없는 메모리를 unload 시킨다.", "model.init_sims(replace=True)", "이제 이 모형에서 다음과 같은 메서드를 사용할 수 있다. 보다 자세한 내용은 https://radimrehurek.com/gensim/models/word2vec.html 를 참조한다.\n\nsimilarity : 두 단어의 유사도 계산\nmost_similar : 가장 유사한 단어를 출력", "model.similarity('actor', 'actress')\n\nmodel.similarity('he', 'she')\n\nmodel.similarity('actor', 'she')\n\nmodel.most_similar(\"villain\")", "most_similar 메서드는 positive 인수와 negative 인수를 사용하여 다음과 같은 단어간 관계도 찾을 수 있다.\n\nactor + he - actress = she", "model.most_similar(positive=['actor', 'he'], negative='actress', topn=1)", "이번에는 네이버 영화 감상 코퍼스를 사용하여 한국어 단어 임베딩을 해보자.", "import codecs\n\ndef read_data(filename):\n with codecs.open(filename, encoding='utf-8', mode='r') as f:\n data = [line.split('\\t') for line in f.read().splitlines()]\n data = data[1:] # header 제외\n return data\n\ntrain_data = read_data('/home/dockeruser/data/nsmc/ratings_train.txt')\n\nfrom konlpy.tag import Twitter\ntagger = Twitter()\n\ndef tokenize(doc):\n return ['/'.join(t) for t in tagger.pos(doc, norm=True, stem=True)]\n\ntrain_docs = [row[1] for row in train_data]\nsentences = [tokenize(d) for d in train_docs]\n\nfrom gensim.models import word2vec\nmodel = word2vec.Word2Vec(sentences)\nmodel.init_sims(replace=True)\n\nmodel.similarity(*tokenize(u'악당 영웅'))\n\nmodel.similarity(*tokenize(u'악당 감동'))\n\nfrom konlpy.utils import pprint\npprint(model.most_similar(positive=tokenize(u'배우 남자'), negative=tokenize(u'여배우'), topn=1))", "더 많은 한국어 코퍼스를 사용한 단어 임베딩 모형은 다음 웹사이트에서 테스트해 볼 수 있다.\n* http://w.elnn.kr/" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
summanlp/gensim
docs/notebooks/annoytutorial.ipynb
lgpl-2.1
[ "Similarity Queries using Annoy Tutorial\nThis tutorial is about using the (Annoy Approximate Nearest Neighbors Oh Yeah) library for similarity queries with a Word2Vec model built with gensim.\nWhy use Annoy?\nThe current implementation for finding k nearest neighbors in a vector space in gensim has linear complexity via brute force in the number of indexed documents, although with extremely low constant factors. The retrieved results are exact, which is an overkill in many applications: approximate results retrieved in sub-linear time may be enough. Annoy can find approximate nearest neighbors much faster.\nPrerequisites\nAdditional libraries needed for this tutorial:\n- annoy\n- psutil\n- matplotlib\nOutline\n\nDownload Text8 Corpus\nBuild Word2Vec Model\nConstruct AnnoyIndex with model & make a similarity query\nVerify & Evaluate performance\nEvaluate relationship of num_trees to initialization time and accuracy\nWork with Google's word2vec C formats", "# pip install watermark\n%reload_ext watermark\n%watermark -v -m -p gensim,numpy,scipy,psutil,matplotlib", "1. Download Text8 Corpus", "import os.path\nif not os.path.isfile('text8'):\n !wget -c http://mattmahoney.net/dc/text8.zip\n !unzip text8.zip", "Import & Set up Logging\nI'm not going to set up logging due to the verbose input displaying in notebooks, but if you want that, uncomment the lines in the cell below.", "LOGS = False\n\nif LOGS:\n import logging\n logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)", "2. Build Word2Vec Model", "from gensim.models import Word2Vec, KeyedVectors\nfrom gensim.models.word2vec import Text8Corpus\n\n# using params from Word2Vec_FastText_Comparison\n\nlr = 0.05\ndim = 100\nws = 5\nepoch = 5\nminCount = 5\nneg = 5\nloss = 'ns'\nt = 1e-4\n\n# Same values as used for fastText training above\nparams = {\n 'alpha': lr,\n 'size': dim,\n 'window': ws,\n 'iter': epoch,\n 'min_count': minCount,\n 'sample': t,\n 'sg': 1,\n 'hs': 0,\n 'negative': neg\n}\n\nmodel = Word2Vec(Text8Corpus('text8'), **params)\nprint(model)", "See the Word2Vec tutorial for how to initialize and save this model.\nComparing the traditional implementation and the Annoy approximation", "#Set up the model and vector that we are using in the comparison\ntry:\n from gensim.similarities.index import AnnoyIndexer\nexcept ImportError:\n raise ValueError(\"SKIP: Please install the annoy indexer\")\n\nmodel.init_sims()\nannoy_index = AnnoyIndexer(model, 100)\n\n# Dry run to make sure both indices are fully in RAM\nvector = model.wv.syn0norm[0]\nmodel.most_similar([vector], topn=5, indexer=annoy_index)\nmodel.most_similar([vector], topn=5)\n\nimport time\nimport numpy as np\n\ndef avg_query_time(annoy_index=None, queries=1000):\n \"\"\"\n Average query time of a most_similar method over 1000 random queries,\n uses annoy if given an indexer\n \"\"\"\n total_time = 0\n for _ in range(queries):\n rand_vec = model.wv.syn0norm[np.random.randint(0, len(model.wv.vocab))]\n start_time = time.clock()\n model.most_similar([rand_vec], topn=5, indexer=annoy_index)\n total_time += time.clock() - start_time\n return total_time / queries\n\nqueries = 10000\n\ngensim_time = avg_query_time(queries=queries)\nannoy_time = avg_query_time(annoy_index, queries=queries)\nprint(\"Gensim (s/query):\\t{0:.5f}\".format(gensim_time))\nprint(\"Annoy (s/query):\\t{0:.5f}\".format(annoy_time))\nspeed_improvement = gensim_time / annoy_time\nprint (\"\\nAnnoy is {0:.2f} times faster on average on this particular 
run\".format(speed_improvement))", "This speedup factor is by no means constant and will vary greatly from run to run and is particular to this data set, BLAS setup, Annoy parameters(as tree size increases speedup factor decreases), machine specifications, among other factors.\n\nNote: Initialization time for the annoy indexer was not included in the times. The optimal knn algorithm for you to use will depend on how many queries you need to make and the size of the corpus. If you are making very few similarity queries, the time taken to initialize the annoy indexer will be longer than the time it would take the brute force method to retrieve results. If you are making many queries however, the time it takes to initialize the annoy indexer will be made up for by the incredibly fast retrieval times for queries once the indexer has been initialized\nNote : Gensim's 'most_similar' method is using numpy operations in the form of dot product whereas Annoy's method isnt. If 'numpy' on your machine is using one of the BLAS libraries like ATLAS or LAPACK, it'll run on multiple cores(only if your machine has multicore support ). Check SciPy Cookbook for more details.\n\n3. Construct AnnoyIndex with model & make a similarity query\nCreating an indexer\nAn instance of AnnoyIndexer needs to be created in order to use Annoy in gensim. The AnnoyIndexer class is located in gensim.similarities.index\nAnnoyIndexer() takes two parameters:\nmodel: A Word2Vec or Doc2Vec model\nnum_trees: A positive integer. num_trees effects the build time and the index size. A larger value will give more accurate results, but larger indexes. More information on what trees in Annoy do can be found here. The relationship between num_trees, build time, and accuracy will be investigated later in the tutorial. \nNow that we are ready to make a query, lets find the top 5 most similar words to \"science\" in the Text8 corpus. To make a similarity query we call Word2Vec.most_similar like we would traditionally, but with an added parameter, indexer. The only supported indexer in gensim as of now is Annoy.", "# 100 trees are being used in this example\nannoy_index = AnnoyIndexer(model, 100)\n# Derive the vector for the word \"science\" in our model\nvector = model[\"science\"]\n# The instance of AnnoyIndexer we just created is passed \napproximate_neighbors = model.most_similar([vector], topn=11, indexer=annoy_index)\n# Neatly print the approximate_neighbors and their corresponding cosine similarity values\nprint(\"Approximate Neighbors\")\nfor neighbor in approximate_neighbors:\n print(neighbor)\n\nnormal_neighbors = model.most_similar([vector], topn=11)\nprint(\"\\nNormal (not Annoy-indexed) Neighbors\")\nfor neighbor in normal_neighbors:\n print(neighbor)", "Analyzing the results\nThe closer the cosine similarity of a vector is to 1, the more similar that word is to our query, which was the vector for \"science\". There are some differences in the ranking of similar words and the set of words included within the 10 most similar words.\n4. Verify & Evaluate performance\nPersisting Indexes\nYou can save and load your indexes from/to disk to prevent having to construct them each time. This will create two files on disk, fname and fname.d. Both files are needed to correctly restore all attributes. 
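If you want to guard against one of these two files going missing (which would make the load fail), a quick existence check before loading can help. The snippet below is a small sketch that is not part of the original tutorial; it reuses the `fname` path introduced in the next cell:

```python
import os.path

# Sketch: verify that both files written by AnnoyIndexer.save() are present
fname = '/tmp/mymodel.index'
if not (os.path.isfile(fname) and os.path.isfile(fname + '.d')):
    print('Index files are missing; rebuild and save the index before loading.')
```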
Before loading an index, you will have to create an empty AnnoyIndexer object.", "fname = '/tmp/mymodel.index'\n\n# Persist index to disk\nannoy_index.save(fname)\n\n# Load index back\nif os.path.exists(fname):\n    annoy_index2 = AnnoyIndexer()\n    annoy_index2.load(fname)\n    annoy_index2.model = model\n\n# Results should be identical to above\nvector = model[\"science\"]\napproximate_neighbors2 = model.most_similar([vector], topn=11, indexer=annoy_index2)\nfor neighbor in approximate_neighbors2:\n    print(neighbor)\n\nassert approximate_neighbors == approximate_neighbors2", "Be sure to load the index with the same model that was used to build it, otherwise you will get unexpected behavior.\nSave memory by memory-mapping indices saved to disk\nThe Annoy library has a useful feature: indices can be memory-mapped from disk. This saves memory when the same index is used by several processes.\nBelow are two snippets of code. The first one builds a separate index in each process. The second snippet shares the index between two processes via memory-mapping, so it uses less total RAM.", "# Remove verbosity from code below (if logging active)\n\nif LOGS:\n    logging.disable(logging.CRITICAL)\n\nfrom multiprocessing import Process\nimport os\nimport psutil", "Bad example: two processes each load the Word2Vec model from disk and create their own Annoy index from that model.", "%%time\n\nmodel.save('/tmp/mymodel.pkl')\n\ndef f(process_id):\n    print('Process Id: {}'.format(os.getpid()))\n    process = psutil.Process(os.getpid())\n    new_model = Word2Vec.load('/tmp/mymodel.pkl')\n    vector = new_model[\"science\"]\n    annoy_index = AnnoyIndexer(new_model,100)\n    approximate_neighbors = new_model.most_similar([vector], topn=5, indexer=annoy_index)\n    print('\\nMemory used by process {}: {}\\n---'.format(os.getpid(), process.memory_info()))\n\n# Create and run two parallel processes; each one builds its own index.\np1 = Process(target=f, args=('1',))\np1.start()\np1.join()\np2 = Process(target=f, args=('2',))\np2.start()\np2.join()", "Good example: two processes load both the Word2Vec model and the saved index from disk and memory-map the index.", "%%time\n\nmodel.save('/tmp/mymodel.pkl')\n\ndef f(process_id):\n    print('Process Id: {}'.format(os.getpid()))\n    process = psutil.Process(os.getpid())\n    new_model = Word2Vec.load('/tmp/mymodel.pkl')\n    vector = new_model[\"science\"]\n    annoy_index = AnnoyIndexer()\n    annoy_index.load('/tmp/mymodel.index')\n    annoy_index.model = new_model\n    approximate_neighbors = new_model.most_similar([vector], topn=5, indexer=annoy_index)\n    print('\\nMemory used by process {}: {}\\n---'.format(os.getpid(), process.memory_info()))\n\n# Create and run two parallel processes that share the same memory-mapped index file.\np1 = Process(target=f, args=('1',))\np1.start()\np1.join()\np2 = Process(target=f, args=('2',))\np2.start()\np2.join()", "5. 
Evaluate relationship of num_trees to initialization time and accuracy", "import matplotlib.pyplot as plt\n%matplotlib inline", "Build dataset of Initialization times and accuracy measures", "exact_results = [element[0] for element in model.most_similar([model.wv.syn0norm[0]], topn=100)]\n\nx_values = []\ny_values_init = []\ny_values_accuracy = []\n\nfor x in range(1, 300, 10):\n x_values.append(x)\n start_time = time.time()\n annoy_index = AnnoyIndexer(model, x)\n y_values_init.append(time.time() - start_time)\n approximate_results = model.most_similar([model.wv.syn0norm[0]], topn=100, indexer=annoy_index)\n top_words = [result[0] for result in approximate_results]\n y_values_accuracy.append(len(set(top_words).intersection(exact_results)))", "Plot results", "plt.figure(1, figsize=(12, 6))\nplt.subplot(121)\nplt.plot(x_values, y_values_init)\nplt.title(\"num_trees vs initalization time\")\nplt.ylabel(\"Initialization time (s)\")\nplt.xlabel(\"num_trees\")\nplt.subplot(122)\nplt.plot(x_values, y_values_accuracy)\nplt.title(\"num_trees vs accuracy\")\nplt.ylabel(\"% accuracy\")\nplt.xlabel(\"num_trees\")\nplt.tight_layout()\nplt.show()", "Initialization:\nInitialization time of the annoy indexer increases in a linear fashion with num_trees. Initialization time will vary from corpus to corpus, in the graph above the lee corpus was used\nAccuracy:\nIn this dataset, the accuracy seems logarithmically related to the number of trees. We see an improvement in accuracy with more trees, but the relationship is nonlinear. \n6. Work with Google word2vec files\nOur model can be exported to a word2vec C format. There is a binary and a plain text word2vec format. Both can be read with a variety of other software, or imported back into gensim as a KeyedVectors object.", "# To export our model as text\nmodel.wv.save_word2vec_format('/tmp/vectors.txt', binary=False)\n\n# View the first 3 lines of the exported file\n\n# The first line has the total number of entries and the vector dimension count. \n# The next lines have a key (a string) followed by its vector.\nwith open('/tmp/vectors.txt') as myfile:\n for i in range(3):\n print(myfile.readline().strip())\n\n# To import a word2vec text model\nwv = KeyedVectors.load_word2vec_format('/tmp/vectors.txt', binary=False)\n\n# To export our model as binary\nmodel.wv.save_word2vec_format('/tmp/vectors.bin', binary=True)\n\n# To import a word2vec binary model\nwv = KeyedVectors.load_word2vec_format('/tmp/vectors.bin', binary=True)\n\n# To create and save Annoy Index from a loaded `KeyedVectors` object (with 100 trees)\nannoy_index = AnnoyIndexer(wv, 100)\nannoy_index.save('/tmp/mymodel.index')\n\n# Load and test the saved word vectors and saved annoy index\nwv = KeyedVectors.load_word2vec_format('/tmp/vectors.bin', binary=True)\nannoy_index = AnnoyIndexer()\nannoy_index.load('/tmp/mymodel.index')\nannoy_index.model = wv\n\nvector = wv[\"cat\"]\napproximate_neighbors = wv.most_similar([vector], topn=11, indexer=annoy_index)\n# Neatly print the approximate_neighbors and their corresponding cosine similarity values\nprint(\"Approximate Neighbors\")\nfor neighbor in approximate_neighbors:\n print(neighbor)\n\nnormal_neighbors = wv.most_similar([vector], topn=11)\nprint(\"\\nNormal (not Annoy-indexed) Neighbors\")\nfor neighbor in normal_neighbors:\n print(neighbor)", "Recap\nIn this notebook we used the Annoy module to build an indexed approximation of our word embeddings. To do so, we did the following steps:\n1. Download Text8 Corpus\n2. Build Word2Vec Model\n3. 
Construct AnnoyIndex with model & make a similarity query\n4. Verify & Evaluate performance\n5. Evaluate relationship of num_trees to initialization time and accuracy\n6. Work with Google's word2vec C formats" ]
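As a closing tip (not part of the original notebook), if you want a single number that summarizes how faithful the approximate index is for a given query, a small helper along these lines can be handy. It assumes the `model` and `annoy_index` objects from the earlier sections are still in memory:

```python
def recall_at_k(model, annoy_index, word, k=10):
    # Fraction of the exact top-k neighbours that the Annoy index also returns (sketch)
    exact = {w for w, _ in model.most_similar(word, topn=k)}
    approx = {w for w, _ in model.most_similar(word, topn=k, indexer=annoy_index)}
    return len(exact & approx) / float(k)

print(recall_at_k(model, annoy_index, 'science'))
```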
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
NEONScience/NEON-Data-Skills
tutorials/Python/Hyperspectral/intro-hyperspectral/Intro_NEON_AOP_HDF5_Reflectance_Tiles_py/Intro_NEON_AOP_HDF5_Reflectance_Tiles_py.ipynb
agpl-3.0
[ "syncID: 61ad1fc43ddd45b49cad1bca48656bbe\ntitle: \"NEON AOP Hyperspectral Data in HDF5 format with Python - Tiled Data\" \ndescription: \"Learn how to read NEON AOP hyperspectral flightline data using Python and develop skills to manipulate and visualize spectral data.\"\ndateCreated: 2018-07-04 \nauthors: Bridget Hass\ncontributors: Donal O'Leary\nestimatedTime: 1 hour\npackagesLibraries: numpy, h5py, gdal, matplotlib.pyplot\ntopics: hyperspectral-remote-sensing, HDF5, remote-sensing\nlanguagesTool: python\ndataProduct: NEON.DP3.30006, NEON.DP3.30008\ncode1: https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/tutorials/Python/Hyperspectral/intro-hyperspectral/Intro_NEON_AOP_HDF5_Reflectance_Tiles_py/Intro_NEON_AOP_HDF5_Reflectance_Tiles_py.ipynb\ntutorialSeries: intro-hsi-py-series\nurlTitle: neon-aop-hdf5-tile-py\n\nIn this introductory tutorial, we discuss how to read NEON AOP hyperspectral flightline\ndata using Python. We develop and practice skills and use several tools to manipulate and \nvisualize the spectral data. By the end of this tutorial, you will become \nfamiliar with the Python syntax.\nIf you are interested in learning how to do this for flightline NEON AOP hyperspectral data, \nplease see <a href=\"/neon-aop-hdf5-py\" target=\"_blank\"> NEON AOP Hyperspectral Data in HDF5 format with Python - Flightlines</a>.\nLearning Objectives\nAfter completing this tutorial, you will be able to:\n\nImport and use Python packages numpy, pandas, matplotlib, h5py, and gdal.\nUse the package h5py and the visititems functionality to read an HDF5 file \nand view data attributes.\nRead the data ignore value and scaling factor and apply these values to produce \na cleaned reflectance array.\nExtract and plot a single band of reflectance data\nPlot a histogram of reflectance values to visualize the range and distribution \nof values.\nSubset an hdf5 reflectance file from the full flightline to a smaller region \nof interest (if you complete the optional extension). \nApply a histogram stretch and adaptive equalization to improve the contrast \nof an image (if you complete the optional extension) . \n\nInstall Python Packages\n\nnumpy\npandas\ngdal \nmatplotlib \nh5py\n\nDownload Data\nTo complete this tutorial, you will use data available from the NEON 2017 Data\nInstitute.\nThis tutorial uses the following files:\n<ul>\n <li> <a href=\"https://www.neonscience.org/sites/default/files/neon_aop_spectral_python_functions_tiled_data.zip\">neon_aop_spectral_python_functions_tiled_data.zip (10 KB)</a> <- Click to Download</li>\n <li><a href=\"https://ndownloader.figshare.com/files/25752665\" target=\"_blank\">NEON_D02_SERC_DP3_368000_4306000_reflectance.h5 (618 MB)</a> <- Click to Download</li>\n</ul>\n\n<a href=\"https://ndownloader.figshare.com/files/25752665\" class=\"link--button link--arrow\">\nDownload Dataset</a>\nThe LiDAR and imagery data used to create this raster teaching data subset \nwere collected over the \n<a href=\"http://www.neonscience.org/\" target=\"_blank\"> National Ecological Observatory Network's</a> \n<a href=\"http://www.neonscience.org/science-design/field-sites/\" target=\"_blank\" >field sites</a>\nand processed at NEON headquarters.\nThe entire dataset can be accessed on the \n<a href=\"http://data.neonscience.org\" target=\"_blank\"> NEON data portal</a>.\nHyperspectral remote sensing data is a useful tool for measuring changes to our \nenvironment at the Earth’s surface. 
In this tutorial we explore how to extract \ninformation from a tile (1000m x 1000m x 426 bands) of NEON AOP orthorectified surface reflectance data, stored in hdf5 format. For more information on this data product, refer to the <a href=\"http://data.neonscience.org/data-products/DP3.30006.001\" target=\"_blank\">NEON Data Product Catalog</a>.\nMapping the Invisible: Introduction to Spectral Remote Sensing\nFor more information on spectral remote sensing watch this video. \n<iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/3iaFzafWJQE\" frameborder=\"0\" allowfullscreen></iframe>\n\nSet up\nFirst let's import the required packages:", "import numpy as np\nimport h5py\nimport gdal, osr, os\nimport matplotlib.pyplot as plt", "Next, set display preferences so that plots are inline (meaning any images you output from your code will show up below the cell in the notebook) and turn off plot warnings:", "%matplotlib inline\nimport warnings\nwarnings.filterwarnings('ignore')", "Read in hdf5\nf = h5py.File('file.h5','r') reads in an h5 file to the variable f. \nUsing the help\nWe will be using a number of built-in and user-defined functions and methods throughout the tutorial. If you are uncertain what a certain function does, or how to call it, you can type help() or type a \n? at the end of the function or method and run the cell (either select Cell > Run Cells or Shift Enter with your cursor in the cell you want to run). The ? will pop up a window at the bottom of the notebook displaying the function's docstrings, which includes information about the function and usage. We encourage you to use help and ? throughout the tutorial as you come across functions you are unfamiliar with. Let's try this out with h5py.File:", "help(h5py)\n\nh5py.File?", "Now that we have an idea of how to use h5py to read in an h5 file, let's try it out. Note that if the h5 file is stored in a different directory than where you are running your notebook, you need to include the path (either relative or absolute) to the directory where that data file is stored. Use os.path.join to create the full path of the file.", "# Note that you will need to update this filepath for your local machine\nf = h5py.File('/Users/olearyd/Git/data/NEON_D02_SERC_DP3_368000_4306000_reflectance.h5','r')", "Explore NEON AOP HDF5 Reflectance Files\nWe can look inside the HDF5 dataset with the h5py visititems function. The list_dataset function defined below displays all datasets stored in the hdf5 file and their locations within the hdf5 file:", "#list_dataset lists the names of datasets in an hdf5 file\ndef list_dataset(name,node):\n if isinstance(node, h5py.Dataset):\n print(name)\n\nf.visititems(list_dataset)", "You can see that there is a lot of information stored inside this reflectance hdf5 file. Most of this information is metadata (data about the reflectance data), for example, this file stores input parameters used in the atmospheric correction. 
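If you would rather capture these dataset paths in a Python list (for example, to filter them later) than only print them, the same `visititems` pattern can append to a list. This is a small sketch that is not part of the original lesson:

```python
# Sketch: collect the dataset paths into a list instead of printing them
dataset_names = []

def collect_dataset(name, node):
    if isinstance(node, h5py.Dataset):
        dataset_names.append(name)

f.visititems(collect_dataset)
print('number of datasets:', len(dataset_names))
```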
For this introductory lesson, we will only work with two of these datasets, the reflectance data (hyperspectral cube), and the corresponding geospatial information, stored in Metadata/Coordinate_System:\n\nSERC/Reflectance/Reflectance_Data\nSERC/Reflectance/Metadata/Coordinate_System/\n\nWe can also display the name, shape, and type of each of these datasets using the ls_dataset function defined below, which is also called with the visititems method:", "#ls_dataset displays the name, shape, and type of datasets in hdf5 file\ndef ls_dataset(name,node):\n if isinstance(node, h5py.Dataset):\n print(node)\n\n#to see what the visititems methods does, type ? at the end:\nf.visititems?\n\nf.visititems(ls_dataset)", "Now that we can see the structure of the hdf5 file, let's take a look at some of the information that is stored inside. Let's start by extracting the reflectance data, which is nested under SERC/Reflectance/Reflectance_Data:", "serc_refl = f['SERC']['Reflectance']\nprint(serc_refl)", "The two members of the HDF5 group /SERC/Reflectance are Metadata and Reflectance_Data. Let's save the reflectance data as the variable serc_reflArray:", "serc_reflArray = serc_refl['Reflectance_Data']\nprint(serc_reflArray)", "We can extract the size of this reflectance array that we extracted using the shape method:", "refl_shape = serc_reflArray.shape\nprint('SERC Reflectance Data Dimensions:',refl_shape)", "This 3-D shape (1000,1000,426) corresponds to (y,x,bands), where (x,y) are the dimensions of the reflectance array in pixels. Hyperspectral data sets are often called \"cubes\" to reflect this 3-dimensional shape.\n<figure>\n <a href=\"https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/hyperspectral-general/DataCube.png\">\n <img src=\"https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/hyperspectral-general/DataCube.png\"></a>\n <figcaption> A \"cube\" showing a hyperspectral data set. Source: National Ecological Observatory Network\n (NEON) \n </figcaption>\n</figure>\n\nNEON hyperspectral data contain around 426 spectral bands, and when working with tiled data, the spatial dimensions are 1000 x 1000, where each pixel represents 1 meter. Now let's take a look at the wavelength values. First, we will extract wavelength information from the serc_refl variable that we created:", "#define the wavelengths variable\nwavelengths = serc_refl['Metadata']['Spectral_Data']['Wavelength']\n\n#View wavelength information and values\nprint('wavelengths:',wavelengths)", "We can then use numpy (imported as np) to see the minimum and maximum wavelength values:", "# Display min & max wavelengths\nprint('min wavelength:', np.amin(wavelengths),'nm')\nprint('max wavelength:', np.amax(wavelengths),'nm')", "Finally, we can determine the band widths (distance between center bands of two adjacent bands). Let's try this for the first two bands and the last two bands. Remember that Python uses 0-based indexing ([0] represents the first value in an array), and note that you can also use negative numbers to splice values from the end of an array ([-1] represents the last value in an array).", "#show the band widths between the first 2 bands and last 2 bands \nprint('band width between first 2 bands =',(wavelengths.value[1]-wavelengths.value[0]),'nm')\nprint('band width between last 2 bands =',(wavelengths.value[-1]-wavelengths.value[-2]),'nm')", "The center wavelengths recorded in this hyperspectral cube range from 383.66 - 2511.94 nm, and each band covers a range of ~5 nm. 
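A handy trick when working with these wavelength values is to look up the band index whose center wavelength is closest to a wavelength of interest. The sketch below is not part of the original lesson; it assumes the `wavelengths` dataset from above, and 650 nm is just an example target:

```python
# Sketch: find the band whose center wavelength is closest to a target wavelength
target_wl = 650  # nm, example value
band_index = np.argmin(np.abs(wavelengths.value - target_wl))
print('band', band_index, 'is centered at', wavelengths.value[band_index], 'nm')
```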
Now let's extract spatial information, which is stored under SERC/Reflectance/Metadata/Coordinate_System/Map_Info:", "serc_mapInfo = serc_refl['Metadata']['Coordinate_System']['Map_Info']\nprint('SERC Map Info:',serc_mapInfo.value)", "Understanding the output:\nHere we can spatial information about the reflectance data. Below is a break down of what each of these values means:\n\nUTM - coordinate system (Universal Transverse Mercator)\n1.000, 1.000 - \n368000.000, 4307000.0 - UTM coordinates (meters) of the map origin, which refers to the upper-left corner of the image (xMin, yMax). \n1.0000000, 1.0000000 - pixel resolution (meters)\n18 - UTM zone\nN - UTM hemisphere (North for all NEON sites)\nWGS-84 - reference ellipoid\n\nThe letter b that appears before UTM signifies that the variable-length string data is stored in binary format when it is written to the hdf5 file. Don't worry about it for now, as we will convert the numerical data we need into floating point numbers. For more information on hdf5 strings read the <a href=\"http://docs.h5py.org/en/latest/strings.html\" target=\"_blank\">h5py documentation</a>. \nLet's extract relevant information from the Map_Info metadata to define the spatial extent of this dataset. To do this, we can use the split method to break up this string into separate values:", "#First convert mapInfo to a string\nmapInfo_string = str(serc_mapInfo.value) #convert to string\n\n#see what the split method does\nmapInfo_string.split?\n\n#split the strings using the separator \",\" \nmapInfo_split = mapInfo_string.split(\",\") \nprint(mapInfo_split)", "Now we can extract the spatial information we need from the map info values, convert them to the appropriate data type (float) and store it in a way that will enable us to access and apply it later when we want to plot the data:", "#Extract the resolution & convert to floating decimal number\nres = float(mapInfo_split[5]),float(mapInfo_split[6])\nprint('Resolution:',res)\n\n#Extract the upper left-hand corner coordinates from mapInfo\nxMin = float(mapInfo_split[3]) \nyMax = float(mapInfo_split[4])\n\n#Calculate the xMax and yMin values from the dimensions\nxMax = xMin + (refl_shape[1]*res[0]) #xMax = left edge + (# of columns * x pixel resolution)\nyMin = yMax - (refl_shape[0]*res[1]) #yMin = top edge - (# of rows * y pixel resolution)", "Now we can define the spatial exten as the tuple (xMin, xMax, yMin, yMax). This is the format required for applying the spatial extent when plotting with matplotlib.pyplot.", "#Define extent as a tuple:\nserc_ext = (xMin, xMax, yMin, yMax)\nprint('serc_ext:',serc_ext)\nprint('serc_ext type:',type(serc_ext))", "Extract a Single Band from Array\nWhile it is useful to have all the data contained in a hyperspectral cube, it is difficult to visualize all this information at once. We can extract a single band (representing a ~5nm band, approximating a single wavelength) from the cube by using splicing as follows. Note that we have to cast the reflectance data into the type float. Recall that since Python indexing starts at 0 instead of 1, in order to extract band 56, we need to use the index 55.", "b56 = serc_reflArray[:,:,55].astype(float)\nprint('b56 type:',type(b56))\nprint('b56 shape:',b56.shape)\nprint('Band 56 Reflectance:\\n',b56)", "Here we can see that we extracted a 2-D array (1000 x 1000) of the scaled reflectance data corresponding to the wavelength band 56. Before we can use the data, we need to clean it up a little. We'll show how to do this below. 
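Before cleaning it up, a quick look at the raw value range shows why the cleanup is needed: depending on the tile you may see the -9999 fill value and large scaled reflectance integers. This check is a sketch and is not part of the original lesson:

```python
# Sketch: inspect the raw (unscaled, uncleaned) value range of band 56
print('raw min:', np.min(b56))
print('raw max:', np.max(b56))
```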
\nScale factor and No Data Value\nThis array represents the scaled reflectance for band 56. Recall from exploring the HDF5 data in HDFViewer that NEON AOP reflectance data uses a Data_Ignore_Value of -9999 to represent missing data (often called NaN), and a reflectance Scale_Factor of 10000.0 in order to save disk space (can use lower precision this way). \n<figure>\n <a href=\"https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/HDF5-general/hdfview_SERCrefl.png\">\n <img src=\"https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/HDF5-general/hdfview_SERCrefl.png\"></a>\n <figcaption> Screenshot of the NEON HDF5 file format.\n Source: National Ecological Observatory Network\n </figcaption>\n</figure>\n\nWe can extract and apply the Data_Ignore_Value and Scale_Factor as follows:", "#View and apply scale factor and data ignore value\nscaleFactor = serc_reflArray.attrs['Scale_Factor']\nnoDataValue = serc_reflArray.attrs['Data_Ignore_Value']\nprint('Scale Factor:',scaleFactor)\nprint('Data Ignore Value:',noDataValue)\n\nb56[b56==int(noDataValue)]=np.nan\nb56 = b56/scaleFactor\nprint('Cleaned Band 56 Reflectance:\\n',b56)", "Plot single reflectance band\nNow we can plot this band using the Python package matplotlib.pyplot, which we imported at the beginning of the lesson as plt. Note that the default colormap is jet unless otherwise specified. You can explore using different colormaps on your own; see the <a href=\"https://matplotlib.org/examples/color/colormaps_reference.html\" target=\"_blank\">mapplotlib colormaps</a> for for other options.", "serc_plot = plt.imshow(b56,extent=serc_ext,cmap='Greys') ", "We can see that this image looks pretty washed out. To see why this is, it helps to look at the range and distribution of reflectance values that we are plotting. We can do this by making a histogram. \nPlot histogram\nWe can plot a histogram using the matplotlib.pyplot.hist function. Note that this function won't work if there are any NaN values, so we can ensure we are only plotting the real data values using the call below. You can also specify the # of bins you want to divide the data into.", "plt.hist(b56[~np.isnan(b56)],50); #50 signifies the # of bins", "We can see that most of the reflectance values are < 0.4. In order to show more contrast in the image, we can adjust the colorlimit (clim) to 0-0.4:", "serc_plot = plt.imshow(b56,extent=serc_ext,cmap='Greys',clim=(0,0.4)) \nplt.title('SERC Band 56 Reflectance');", "Here you can see that adjusting the colorlimit displays features (eg. roads, buildings) much better than when we set the colormap limits to the entire range of reflectance values. \nExtension: Basic Image Processing -- Contrast Stretch & Histogram Equalization\nWe can also try out some basic image processing to better visualize the \nreflectance data using the ski-image package. \nHistogram equalization is a method in image processing of contrast adjustment \nusing the image's histogram. Stretching the histogram can improve the contrast \nof a displayed image, as we will show how to do below. \n<figure>\n <a href=\"https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/hyperspectral-general/histogram_equalization.png\">\n <img src=\"https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/hyperspectral-general/histogram_equalization.png\"></a>\n <figcaption> Histogram equalization is a method in image processing of contrast adjustment \nusing the image's histogram. 
Stretching the histogram can improve the contrast \nof a displayed image, as we will show how to do below.\n Source: <a href=\"https://en.wikipedia.org/wiki/Talk%3AHistogram_equalization#/media/File:Histogrammspreizung.png\"> Wikipedia - Public Domain </a>\n </figcaption>\n</figure>\n\nThe following tutorial section is adapted from scikit-image's\n<a href=\"http://scikit-image.org/docs/stable/auto_examples/color_exposure/plot_equalize.html#sphx-glr-auto-examples-color-exposure-plot-equalize-py\" target=\"_blank\"> Histogram Equalization</a> tutorial.\nBelow we use an IPython widget to explore different linear contrast stretches interactively:", "from skimage import exposure\nfrom ipywidgets import interact\n\ndef linearStretch(percent):\n    pLow, pHigh = np.percentile(b56[~np.isnan(b56)], (percent,100-percent))\n    img_rescale = exposure.rescale_intensity(b56, in_range=(pLow,pHigh))\n    plt.imshow(img_rescale,extent=serc_ext,cmap='gist_earth')\n    #cbar = plt.colorbar(); cbar.set_label('Reflectance')\n    plt.title('SERC Band 56 \\n Linear ' + str(percent) + '% Contrast Stretch')\n    ax = plt.gca()\n    ax.ticklabel_format(useOffset=False, style='plain')  # do not use scientific notation\n    rotatexlabels = plt.setp(ax.get_xticklabels(), rotation=90)  # rotate x tick labels 90 degrees\n\ninteract(linearStretch, percent=(0,50,1))" ]
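The scikit-image tutorial that this section is adapted from also demonstrates adaptive (CLAHE-style) equalization. A minimal sketch of what that could look like for this band is shown below; it is not part of the original notebook and assumes `b56` and `serc_ext` are still in memory:

```python
# Sketch: adaptive histogram equalization (CLAHE) of band 56
from skimage import exposure

b56_clipped = np.clip(np.nan_to_num(b56), 0, 1)  # equalize_adapthist expects floats in [0, 1]
img_adapteq = exposure.equalize_adapthist(b56_clipped, clip_limit=0.03)
plt.imshow(img_adapteq, extent=serc_ext, cmap='gist_earth')
plt.title('SERC Band 56 \n Adaptive Equalization');
```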
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/docs-l10n
site/ko/guide/migrate.ipynb
apache-2.0
[ "Copyright 2018 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "텐서플로 1 코드를 텐서플로 2로 바꾸기\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/guide/migrate\">\n <img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />\n TensorFlow.org에서 보기</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/guide/migrate.ipynb\">\n <img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />\n 구글 코랩(Colab)에서 실행하기</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/ko/guide/migrate.ipynb\">\n <img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />\n 깃허브(GitHub) 소스 보기</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/guide/migrate.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nNote: 이 문서는 텐서플로 커뮤니티에서 번역했습니다. 커뮤니티 번역 활동의 특성상 정확한 번역과 최신 내용을 반영하기 위해 노력함에도\n불구하고 공식 영문 문서의 내용과 일치하지 않을 수 있습니다.\n이 번역에 개선할 부분이 있다면\ntensorflow/docs-l10n 깃헙 저장소로 풀 리퀘스트를 보내주시기 바랍니다.\n문서 번역이나 리뷰에 참여하려면\[email protected]로\n메일을 보내주시기 바랍니다.\n이 문서는 저수준 텐서플로 API를 사용자를 위한 가이드입니다.\n만약 고수준 API(tf.keras)를 사용하고 있다면 텐서플로 2.0으로 바꾸기 위해 할 일이 거의 없습니다:\n\n옵티마이저 학습률 기본값을 확인해 보세요.\n측정 지표의 \"이름\"이 바뀌었을 수 있습니다.\n\n여전히 텐서플로 1.X 버전의 코드를 수정하지 않고 텐서플로 2.0에서 실행할 수 있습니다(contrib 모듈은 제외):\nimport tensorflow.compat.v1 as tf\ntf.disable_v2_behavior()\n하지만 이렇게 하면 텐서플로 2.0에서 제공하는 많은 장점을 활용할 수 없습니다. 이 문서는 성능을 높이면서 코드는 더 간단하고 유지보수하기 쉽도록 업그레이드하는 방법을 안내합니다.\n자동 변환 스크립트\n첫 번째 단계는 업그레이드 스크립트를 사용해 보는 것입니다.\n이는 텐서플로 2.0으로 업그레이드하기 위해 처음 시도할 일입니다. 하지만 이 작업이 기존 코드를 텐서플로 2.0 스타일로 바꾸어 주지는 못합니다. 여전히 플레이스홀더(placeholder)나 세션(session), 컬렉션(collection), 그외 1.x 스타일의 기능을 사용하기 위해 tf.compat.v1 아래의 모듈을 참조하고 있을 것입니다.\n고수준 동작 변경\ntf.compat.v1.disable_v2_behavior()를 사용해 텐서플로 2.0에서 코드를 실행한다면 전역 범위의 변경 사항에 대해 알고 있어야 합니다. 주요 변경 사항은 다음과 같습니다:\n\n\n즉시 실행, v1.enable_eager_execution() : 암묵적으로 tf.Graph를 사용하는 모든 코드는 실패할 것입니다. 코드를 with tf.Graph().as_default() 컨택스트(context)로 감싸야 합니다.\n\n\n리소스(resource) 변수, v1.enable_resource_variables(): 일부 코드는 TF 레퍼런스 변수의 결정적이지 않은 행동에 영향을 받을 수 있습니다.\n리소스 변수는 저장되는 동안 잠깁니다. 따라서 이해하기 쉬운 일관성을 보장합니다.\n\n\n극단적인 경우 동작을 바꿀 수 있습니다.\n\n추가로 복사본을 만들고 메모리 사용량을 높일 수 있습니다.\n\ntf.Variable 생성자에 use_resource=False를 전달하여 비활성화할 수 있습니다.\n\n\n텐서 크기, v1.enable_v2_tensorshape(): TF 2.0에서 텐서 크기는 간단합니다. t.shape[0].value 대신에 t.shape[0]을 사용할 수 있습니다. 변경 사항이 작기 때문에 당장 고치는 것이 좋습니다. TensorShape 예를 참고하세요.\n\n\n제어 흐름, v1.enable_control_flow_v2(): TF 2.0 제어 흐름 구현이 간단하게 바뀌었기 때문에 다른 그래프 표현을 만듭니다. 이슈가 있다면 버그를 신고해 주세요.\n\n\n2.0에 맞도록 코드 수정하기\n텐서플로 1.x 코드를 텐서플로 2.0으로 변환하는 몇 가지 예를 소개하겠습니다. 이 작업을 통해 성능을 최적화하고 간소화된 API의 이점을 사용할 수 있습니다.\n각각의 경우에 수정하는 패턴은 다음과 같습니다:\n1. 
tf.Session.run 호출을 바꾸세요.\n모든 tf.Session.run 호출을 파이썬 함수로 바꾸어야 합니다.\n\nfeed_dict와 tf.placeholder는 함수의 매개변수가 됩니다.\nfetches는 함수의 반환값이 됩니다.\n변환 과정에서 즉시 실행 모드 덕분에 표준 파이썬 디버거 pdb를 사용하여 쉽게 디버깅할 수 있습니다.\n\n그다음 그래프 모드에서 효율적으로 실행할 수 있도록 tf.function 데코레이터를 추가합니다. 더 자세한 내용은 오토그래프 가이드를 참고하세요.\n노트:\n\n\nv1.Session.run과 달리 tf.function은 반환 시그니처(signature)가 고정되어 있고 항상 모든 출력을 반환합니다. 성능에 문제가 된다면 두 개의 함수로 나누세요.\n\n\ntf.control_dependencies나 비슷한 연산이 필요없습니다: tf.function은 쓰여진 순서대로 실행됩니다. 예를 들어 tf.Variable 할당이나 tf.assert는 자동으로 실행됩니다.\n\n\n2. 파이썬 객체를 사용하여 변수와 손실을 관리하세요.\nTF 2.0에서 이름 기반 변수 추적은 매우 권장되지 않습니다. 파이썬 객체로 변수를 추적하세요.\nv1.get_variable 대신에 tf.Variable을 사용하세요.\n모든 v1.variable_scope는 파이썬 객체로 바꾸어야 합니다. 일반적으로 다음 중 하나가 될 것입니다:\n\ntf.keras.layers.Layer\ntf.keras.Model\ntf.Module\n\n만약 (tf.Graph.get_collection(tf.GraphKeys.VARIABLES)처럼) 변수의 리스트가 필요하다면 Layer와 Model 객체의 .variables이나 .trainable_variables 속성을 사용하세요.\nLayer와 Model 클래스는 전역 컬렉션이 필요하지 않도록 몇 가지 다른 속성들도 제공합니다. .losses 속성은 tf.GraphKeys.LOSSES 컬렉션 대신 사용할 수 있습니다.\n자세한 내용은 케라스 가이드를 참고하세요.\n경고: tf.compat.v1의 상당수 기능은 암묵적으로 전역 컬렉션을 사용합니다.\n3. 훈련 루프를 업그레이드하세요.\n풀려는 문제에 맞는 고수준 API를 사용하세요. 훈련 루프(loop)를 직접 만드는 것보다 tf.keras.Model.fit 메서드를 사용하는 것이 좋습니다.\n고수준 함수는 훈련 루프를 직접 만들 때 놓치기 쉬운 여러 가지 저수준의 세부 사항들을 관리해 줍니다. 예를 들어 자동으로 규제(regularization) 손실을 수집하거나 모델을 호출할 때 training=True로 매개변수를 설정해 줍니다.\n4. 데이터 입력 파이프라인을 업그레이드하세요.\n데이터 입력을 위해 tf.data 데이터셋을 사용하세요. 이 객체는 효율적이고 간결하며 텐서플로와 잘 통합됩니다.\ntf.keras.Model.fit 메서드에 바로 전달할 수 있습니다.\nmodel.fit(dataset, epochs=5)\n파이썬에서 직접 반복시킬 수 있습니다:\nfor example_batch, label_batch in dataset:\n break\n5. compat.v1에서 마이그레이션 하기\ntf.compat.v1 모듈에는 완전한 텐서플로 1.x API가 들어 있습니다.\nTF2 업그레이드 스크립트는 안전할 경우 이와 동일한 2.0 버전으로 바꿉니다. 즉 2.0 버전의 동작이 완전히 동일한 경우입니다(예를 들면, v1.arg_max가 tf.argmax로 이름이 바뀌었기 때문에 동일한 함수입니다).\n업그레이드 스크립트가 코드를 수정하고 나면 코드에 compat.v1이 많이 등장할 것입니다. 
코드를 살펴 보면서 수동으로 2.0 버전으로 바꿉니다(2.0 버전이 있다면 로그에 언급되어 있을 것입니다).\n모델 변환하기\n준비", "import tensorflow as tf\n\n\nimport tensorflow_datasets as tfds", "저수준 변수와 연산 실행\n저수준 API를 사용하는 예는 다음과 같습니다:\n\n재사용을 위해 변수 범위(variable scopes)를 사용하기\nv1.get_variable로 변수를 만들기\n명시적으로 컬렉션을 참조하기\n\n다음과 같은 메서드를 사용하여 암묵적으로 컬렉션을 참조하기:\n\n\nv1.global_variables\n\n\nv1.losses.get_regularization_loss\n\n\n그래프 입력을 위해 v1.placeholder를 사용하기\n\nsession.run으로 그래프를 실행하기\n변수를 수동으로 초기화하기\n\n변환 전\n다음 코드는 텐서플로 1.x를 사용한 코드에서 볼 수 있는 패턴입니다.\n```python\nin_a = tf.placeholder(dtype=tf.float32, shape=(2))\nin_b = tf.placeholder(dtype=tf.float32, shape=(2))\ndef forward(x):\n with tf.variable_scope(\"matmul\", reuse=tf.AUTO_REUSE):\n W = tf.get_variable(\"W\", initializer=tf.ones(shape=(2,2)),\n regularizer=tf.contrib.layers.l2_regularizer(0.04))\n b = tf.get_variable(\"b\", initializer=tf.zeros(shape=(2)))\n return W * x + b\nout_a = forward(in_a)\nout_b = forward(in_b)\nreg_loss = tf.losses.get_regularization_loss(scope=\"matmul\")\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n outs = sess.run([out_a, out_b, reg_loss],\n feed_dict={in_a: [1, 0], in_b: [0, 1]})\n```\n변환 후\n변환된 코드의 패턴은 다음과 같습니다:\n\n변수는 파이썬 지역 객체입니다.\nforward 함수는 여전히 필요한 계산을 정의합니다.\nSession.run 호출은 forward 함수를 호출하는 것으로 바뀝니다.\ntf.function 데코레이터는 선택 사항으로 성능을 위해 추가할 수 있습니다.\n어떤 전역 컬렉션도 참조하지 않고 규제를 직접 계산합니다.\n세션이나 플레이스홀더를 사용하지 않습니다.", "W = tf.Variable(tf.ones(shape=(2,2)), name=\"W\")\nb = tf.Variable(tf.zeros(shape=(2)), name=\"b\")\n\[email protected]\ndef forward(x):\n return W * x + b\n\nout_a = forward([1,0])\nprint(out_a)\n\nout_b = forward([0,1])\n\nregularizer = tf.keras.regularizers.l2(0.04)\nreg_loss = regularizer(W)", "tf.layers 기반의 모델\nv1.layers 모듈은 변수를 정의하고 재사용하기 위해 v1.variable_scope에 의존하는 층 함수를 포함합니다.\n변환 전\n```python\ndef model(x, training, scope='model'):\n with tf.variable_scope(scope, reuse=tf.AUTO_REUSE):\n x = tf.layers.conv2d(x, 32, 3, activation=tf.nn.relu,\n kernel_regularizer=tf.contrib.layers.l2_regularizer(0.04))\n x = tf.layers.max_pooling2d(x, (2, 2), 1)\n x = tf.layers.flatten(x)\n x = tf.layers.dropout(x, 0.1, training=training)\n x = tf.layers.dense(x, 64, activation=tf.nn.relu)\n x = tf.layers.batch_normalization(x, training=training)\n x = tf.layers.dense(x, 10, activation=tf.nn.softmax)\n return x\ntrain_out = model(train_data, training=True)\ntest_out = model(test_data, training=False)\n```\n변환 후\n\n층을 단순하게 쌓을 경우엔 tf.keras.Sequential이 적합합니다. (복잡한 모델인 경우 사용자 정의 층과 모델이나 함수형 API를 참고하세요.)\n모델이 변수와 규제 손실을 관리합니다.\nv1.layers에서 tf.keras.layers로 바로 매핑되기 때문에 일대일로 변환됩니다.\n\n대부분 매개변수는 동일합니다. 다른 부분은 다음과 같습니다:\n\n모델이 실행될 때 각 층에 training 매개변수가 전달됩니다.\n원래 model 함수의 첫 번째 매개변수(입력 x)는 사라집니다. 층 객체가 모델 구축과 모델 호출을 구분하기 때문입니다.\n\n추가 노트:\n\ntf.contrib에서 규제를 초기화했다면 다른 것보다 매개변수 변화가 많습니다.\n더 이상 컬렉션을 사용하지 않기 때문에 v1.losses.get_regularization_loss와 같은 함수는 값을 반환하지 않습니다. 
이는 훈련 루프를 망가뜨릴 수 있습니다.", "model = tf.keras.Sequential([\n tf.keras.layers.Conv2D(32, 3, activation='relu',\n kernel_regularizer=tf.keras.regularizers.l2(0.04),\n input_shape=(28, 28, 1)),\n tf.keras.layers.MaxPooling2D(),\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dropout(0.1),\n tf.keras.layers.Dense(64, activation='relu'),\n tf.keras.layers.BatchNormalization(),\n tf.keras.layers.Dense(10)\n])\n\ntrain_data = tf.ones(shape=(1, 28, 28, 1))\ntest_data = tf.ones(shape=(1, 28, 28, 1))\n\ntrain_out = model(train_data, training=True)\nprint(train_out)\n\ntest_out = model(test_data, training=False)\nprint(test_out)\n\n# 훈련되는 전체 변수\nlen(model.trainable_variables)\n\n# 규제 손실\nmodel.losses", "변수와 v1.layers의 혼용\n기존 코드는 종종 저수준 TF 1.x 변수와 고수준 v1.layers 연산을 혼용합니다.\n변경 전\n```python\ndef model(x, training, scope='model'):\n with tf.variable_scope(scope, reuse=tf.AUTO_REUSE):\n W = tf.get_variable(\n \"W\", dtype=tf.float32,\n initializer=tf.ones(shape=x.shape),\n regularizer=tf.contrib.layers.l2_regularizer(0.04),\n trainable=True)\n if training:\n x = x + W\n else:\n x = x + W * 0.5\n x = tf.layers.conv2d(x, 32, 3, activation=tf.nn.relu)\n x = tf.layers.max_pooling2d(x, (2, 2), 1)\n x = tf.layers.flatten(x)\n return x\ntrain_out = model(train_data, training=True)\ntest_out = model(test_data, training=False)\n```\n변경 후\n이런 코드를 변환하려면 이전 예제처럼 층별로 매핑하는 패턴을 사용하세요.\nv1.variable_scope는 기본적으로 하나의 층입니다. 따라서 tf.keras.layers.Layer로 재작성합니다. 자세한 내용은 이 문서를 참고하세요.\n일반적인 패턴은 다음과 같습니다:\n\n__init__에서 층에 필요한 매개변수를 입력 받습니다.\nbuild 메서드에서 변수를 만듭니다.\ncall 메서드에서 연산을 실행하고 결과를 반환합니다.", "# 모델에 추가하기 위해 사용자 정의 층을 만듭니다.\nclass CustomLayer(tf.keras.layers.Layer):\n def __init__(self, *args, **kwargs):\n super(CustomLayer, self).__init__(*args, **kwargs)\n\n def build(self, input_shape):\n self.w = self.add_weight(\n shape=input_shape[1:],\n dtype=tf.float32,\n initializer=tf.keras.initializers.ones(),\n regularizer=tf.keras.regularizers.l2(0.04),\n trainable=True)\n\n # call 메서드가 그래프 모드에서 사용되면\n # training 변수는 텐서가 됩니다.\n @tf.function\n def call(self, inputs, training=None):\n if training:\n return inputs + self.w\n else:\n return inputs + self.w * 0.5\n\ncustom_layer = CustomLayer()\nprint(custom_layer([1]).numpy())\nprint(custom_layer([1], training=True).numpy())\n\ntrain_data = tf.ones(shape=(1, 28, 28, 1))\ntest_data = tf.ones(shape=(1, 28, 28, 1))\n\n# 사용자 정의 층을 포함한 모델을 만듭니다.\nmodel = tf.keras.Sequential([\n CustomLayer(input_shape=(28, 28, 1)),\n tf.keras.layers.Conv2D(32, 3, activation='relu'),\n tf.keras.layers.MaxPooling2D(),\n tf.keras.layers.Flatten(),\n])\n\ntrain_out = model(train_data, training=True)\ntest_out = model(test_data, training=False)", "노트:\n\n클래스 상속으로 만든 케라스 모델과 층은 v1 그래프(연산간의 의존성이 자동으로 제어되지 않습니다)와 즉시 실행 모드 양쪽에서 실행될 수 있어야 합니다.\n\n오토그래프(autograph)와 의존성 자동 제어(automatic control dependency)를 위해 tf.function()으로 call() 메서드를 감쌉니다.\n\n\ncall 메서드에 training 매개변수를 추가하는 것을 잊지 마세요.\n\n경우에 따라 이 값은 tf.Tensor가 됩니다.\n경우에 따라 이 값은 파이썬 불리언(boolean)이 됩니다.\n\n\n\nself.add_weight()를 사용하여 생성자 메서드나 def build() 메서드에서 모델 변수를 만듭니다.\n\nbuild 메서드에서 입력 크기를 참조할 수 있으므로 적절한 크기의 가중치를 만들 수 있습니다.\n\ntf.keras.layers.Layer.add_weight를 사용하면 케라스가 변수와 규제 손실을 관리할 수 있습니다.\n\n\n사용자 정의 층 안에 tf.Tensors 객체를 포함하지 마세요.\n\ntf.function이나 즉시 실행 모드에서 모두 텐서가 만들어지지만 이 텐서들의 동작 방식은 다릅니다.\n상태를 저장하기 위해서는 tf.Variable을 사용하세요. 변수는 양쪽 방식에 모두 사용할 수 있습니다.\ntf.Tensors는 중간 값을 저장하기 위한 용도로만 사용합니다.\n\nSlim & contrib.layers를 위한 노트\n예전 텐서플로 1.x 코드는 Slim 라이브러리를 많이 사용합니다. 이 라이브러리는 텐서플로 1.x의 tf.contrib.layers로 패키지되어 있습니다. 
contrib 모듈은 더 이상 텐서플로 2.0에서 지원하지 않고 tf.compat.v1에도 포함되지 않습니다. Slim을 사용한 코드를 TF 2.0으로 변환하는 것은 v1.layers를 사용한 코드를 변경하는 것보다 더 어렵습니다. 사실 Slim 코드는 v1.layers로 먼저 변환하고 그 다음 케라스로 변환하는 것이 좋습니다.\n\narg_scopes를 삭제하세요. 모든 매개변수는 명시적으로 설정되어야 합니다.\nnormalizer_fn과 activation_fn를 사용해야 한다면 분리하여 각각 하나의 층으로 만드세요.\n분리 합성곱(separable conv) 층은 한 개 이상의 다른 케라스 층으로 매핑합니다(깊이별(depthwise), 점별(pointwise), 분리(separable) 케라스 층).\nSlim과 v1.layers는 매개변수 이름과 기본값이 다릅니다.\n일부 매개변수는 다른 스케일(scale)을 가집니다.\n사전 훈련된 Slim 모델을 사용한다면 tf.keras.applications나 TFHub를 확인해 보세요.\n\n일부 tf.contrib 층은 텐서플로 내부에 포함되지 못했지만 TF 애드온(add-on) 패키지로 옮겨졌습니다.\n훈련\n여러 가지 방법으로 tf.keras 모델에 데이터를 주입할 수 있습니다. 파이썬 제너레이터(generator)와 넘파이 배열을 입력으로 사용할 수 있습니다.\ntf.data 패키지를 사용하여 모델에 데이터를 주입하는 것이 권장되는 방법입니다. 이 패키지는 데이터 조작을 위한 고성능 클래스들을 포함하고 있습니다.\ntf.queue는 데이터 구조로만 지원되고 입력 파이프라인으로는 지원되지 않습니다.\n데이터셋 사용하기\n텐서플로 데이터셋(Datasets) 패키지(tfds)는 tf.data.Dataset 객체로 정의된 데이터셋을 적재하기 위한 유틸리티가 포함되어 있습니다.\n예를 들어 tfds를 사용하여 MNIST 데이터셋을 적재하는 코드는 다음과 같습니다:", "datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)\nmnist_train, mnist_test = datasets['train'], datasets['test']", "그 다음 훈련용 데이터를 준비합니다:\n\n각 이미지의 스케일을 조정합니다.\n샘플의 순서를 섞습니다.\n이미지와 레이블(label)의 배치를 만듭니다.", "BUFFER_SIZE = 10 # 실전 코드에서는 더 큰 값을 사용합니다.\nBATCH_SIZE = 64\nNUM_EPOCHS = 5\n\n\ndef scale(image, label):\n image = tf.cast(image, tf.float32)\n image /= 255\n\n return image, label", "간단한 예제를 위해 5개의 배치만 반환하도록 데이터셋을 자릅니다:", "train_data = mnist_train.map(scale).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)\ntest_data = mnist_test.map(scale).batch(BATCH_SIZE)\n\nSTEPS_PER_EPOCH = 5\n\ntrain_data = train_data.take(STEPS_PER_EPOCH)\ntest_data = test_data.take(STEPS_PER_EPOCH)\n\nimage_batch, label_batch = next(iter(train_data))", "케라스 훈련 루프 사용하기\n훈련 과정을 세부적으로 제어할 필요가 없다면 케라스의 내장 메서드인 fit, evaluate, predict를 사용하는 것이 좋습니다. 이 메서드들은 모델 구현(Sequential, 함수형 API, 클래스 상속)에 상관없이 일관된 훈련 인터페이스를 제공합니다.\n이 메서드들의 장점은 다음과 같습니다:\n\n넘파이 배열, 파이썬 제너레이터, tf.data.Datasets을 사용할 수 있습니다.\n자동으로 규제와 활성화 손실을 적용합니다.\n다중 장치 훈련을 위해 tf.distribute을 지원합니다.\n임의의 호출 가능한 객체를 손실과 측정 지표로 사용할 수 있습니다.\ntf.keras.callbacks.TensorBoard와 같은 콜백(callback)이나 사용자 정의 콜백을 지원합니다.\n자동으로 텐서플로 그래프를 사용하므로 성능이 뛰어납니다.\n\nDataset을 사용하여 모델을 훈련하는 예제는 다음과 같습니다. (자세한 작동 방식은 튜토리얼을 참고하세요.)", "model = tf.keras.Sequential([\n tf.keras.layers.Conv2D(32, 3, activation='relu',\n kernel_regularizer=tf.keras.regularizers.l2(0.02),\n input_shape=(28, 28, 1)),\n tf.keras.layers.MaxPooling2D(),\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dropout(0.1),\n tf.keras.layers.Dense(64, activation='relu'),\n tf.keras.layers.BatchNormalization(),\n tf.keras.layers.Dense(10)\n])\n\n# 사용자 정의 층이 없는 모델입니다.\nmodel.compile(optimizer='adam',\n loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n metrics=['accuracy'])\n\nmodel.fit(train_data, epochs=NUM_EPOCHS)\nloss, acc = model.evaluate(test_data)\n\nprint(\"손실 {}, 정확도 {}\".format(loss, acc))", "사용자 정의 훈련 루프 만들기\n케라스 모델의 훈련 스텝(step)이 좋지만 그 외 다른 것을 더 제어하려면 자신만의 데이터 반복 루프를 만들고 tf.keras.model.train_on_batch 메서드를 사용해 보세요.\n기억할 점: 많은 것을 tf.keras.Callback으로 구현할 수 있습니다.\n이 메서드는 앞에서 언급한 메서드의 장점을 많이 가지고 있고 사용자가 바깥쪽 루프를 제어할 수 있습니다.\n훈련하는 동안 성능을 확인하기 위해 tf.keras.model.test_on_batch나 tf.keras.Model.evaluate 메서드를 사용할 수도 있습니다.\n노트: train_on_batch와 test_on_batch는 기본적으로 하나의 배치에 대한 손실과 측정값을 반환합니다. reset_metrics=False를 전달하면 누적된 측정값을 반환합니다. 이 때는 누적된 측정값을 적절하게 초기화해 주어야 합니다. 
AUC와 같은 일부 지표는 reset_metrics=False를 설정해야 올바르게 계산됩니다.\n앞의 모델을 계속 사용합니다:", "# 사용자 정의 층이 없는 모델입니다.\nmodel.compile(optimizer='adam',\n loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n metrics=['accuracy'])\n\nfor epoch in range(NUM_EPOCHS):\n # 누적된 측정값을 초기화합니다.\n model.reset_metrics()\n\n for image_batch, label_batch in train_data:\n result = model.train_on_batch(image_batch, label_batch)\n metrics_names = model.metrics_names\n print(\"훈련: \",\n \"{}: {:.3f}\".format(metrics_names[0], result[0]),\n \"{}: {:.3f}\".format(metrics_names[1], result[1]))\n for image_batch, label_batch in test_data:\n result = model.test_on_batch(image_batch, label_batch,\n # return accumulated metrics\n reset_metrics=False)\n metrics_names = model.metrics_names\n print(\"\\n평가: \",\n \"{}: {:.3f}\".format(metrics_names[0], result[0]),\n \"{}: {:.3f}\".format(metrics_names[1], result[1]))", "<a id=\"custom_loops\"/>\n훈련 단계 커스터마이징\n자유도를 높이고 제어를 더 하려면 다음 세 단계를 사용해 자신만의 훈련 루프를 구현할 수 있습니다:\n\n샘플 배치를 만드는 파이썬 제너레이터나 tf.data.Dataset을 반복합니다.\ntf.GradientTape을 사용하여 그래디언트를 계산합니다.\ntf.keras.optimizer를 사용하여 모델의 가중치 변수를 업데이트합니다.\n\n기억할 점:\n\n클래스 상속으로 만든 층과 모델의 call 메서드에는 항상 training 매개변수를 포함하세요.\n모델을 호출할 때 training 매개변수를 올바르게 지정했는지 확인하세요.\n사용 방식에 따라 배치 데이터에서 모델이 실행될 때까지 모델 변수가 생성되지 않을 수 있습니다.\n모델의 규제 손실 같은 것들을 직접 관리해야 합니다.\n\nv1에 비해 단순해진 것:\n\n따로 변수를 초기화할 필요가 없습니다. 변수는 생성될 때 초기화됩니다.\n의존성을 수동으로 제어할 필요가 없습니다. tf.function 안에서도 연산은 즉시 실행 모드처럼 실행됩니다.", "model = tf.keras.Sequential([\n tf.keras.layers.Conv2D(32, 3, activation='relu',\n kernel_regularizer=tf.keras.regularizers.l2(0.02),\n input_shape=(28, 28, 1)),\n tf.keras.layers.MaxPooling2D(),\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dropout(0.1),\n tf.keras.layers.Dense(64, activation='relu'),\n tf.keras.layers.BatchNormalization(),\n tf.keras.layers.Dense(10)\n])\n\noptimizer = tf.keras.optimizers.Adam(0.001)\nloss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)\n\[email protected]\ndef train_step(inputs, labels):\n with tf.GradientTape() as tape:\n predictions = model(inputs, training=True)\n regularization_loss = tf.math.add_n(model.losses)\n pred_loss = loss_fn(labels, predictions)\n total_loss = pred_loss + regularization_loss\n\n gradients = tape.gradient(total_loss, model.trainable_variables)\n optimizer.apply_gradients(zip(gradients, model.trainable_variables))\n\nfor epoch in range(NUM_EPOCHS):\n for inputs, labels in train_data:\n train_step(inputs, labels)\n print(\"마지막 에포크\", epoch)", "새로운 스타일의 측정 지표\n텐서플로 2.0에서 측정 지표와 손실은 객체입니다. 이 객체는 즉시 실행 모드와 tf.function에서 모두 사용할 수 있습니다.\n손실은 호출 가능한 객체입니다. 매개변수로 (y_true, y_pred)를 기대합니다:", "cce = tf.keras.losses.CategoricalCrossentropy(from_logits=True)\ncce([[1, 0]], [[-1.0,3.0]]).numpy()", "측정 객체는 다음과 같은 메서드를 가집니다:\n\nupdate_state() — 새로운 측정값을 추가합니다.\nresult() — 누적된 측정 결과를 얻습니다.\nreset_states() — 모든 측정 내용을 지웁니다.\n\n이 객체는 호출 가능합니다. update_state 메서드처럼 새로운 측정값과 함께 호출하면 상태를 업데이트하고 새로운 측정 결과를 반환합니다.\n측정 변수를 수동으로 초기화할 필요가 없습니다. 
텐서플로 2.0은 자동으로 의존성을 관리하기 때문에 어떤 경우에도 신경 쓸 필요가 없습니다.\n다음은 측정 객체를 사용하여 사용자 정의 훈련 루프 안에서 평균 손실을 관리하는 코드입니다.", "# 측정 객체를 만듭니다.\nloss_metric = tf.keras.metrics.Mean(name='train_loss')\naccuracy_metric = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')\n\[email protected]\ndef train_step(inputs, labels):\n with tf.GradientTape() as tape:\n predictions = model(inputs, training=True)\n regularization_loss = tf.math.add_n(model.losses)\n pred_loss = loss_fn(labels, predictions)\n total_loss = pred_loss + regularization_loss\n\n gradients = tape.gradient(total_loss, model.trainable_variables)\n optimizer.apply_gradients(zip(gradients, model.trainable_variables))\n # 측정값을 업데이트합니다.\n loss_metric.update_state(total_loss)\n accuracy_metric.update_state(labels, predictions)\n\n\nfor epoch in range(NUM_EPOCHS):\n # 측정값을 초기화합니다.\n loss_metric.reset_states()\n accuracy_metric.reset_states()\n\n for inputs, labels in train_data:\n train_step(inputs, labels)\n # 측정 결과를 얻습니다.\n mean_loss = loss_metric.result()\n mean_accuracy = accuracy_metric.result()\n\n print('에포크: ', epoch)\n print(' 손실: {:.3f}'.format(mean_loss))\n print(' 정확도: {:.3f}'.format(mean_accuracy))", "<a id=\"keras_metric_names\"></a>\n케라스 지표 이름\n텐서플로 2.0에서 케라스 모델은 지표 이름을 더 일관성있게 처리합니다.\n지표를 문자열로 전달하면 정확히 같은 문자열이 지표의 name으로 사용됩니다. model.fit 메서드가 반환하는 히스토리(history) 객체와 keras.callbacks로 전달하는 로그에 나타나는 이름이 지표로 전달한 문자열이 됩니다.", "model.compile(\n optimizer = tf.keras.optimizers.Adam(0.001),\n loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n metrics = ['acc', 'accuracy', tf.keras.metrics.SparseCategoricalAccuracy(name=\"my_accuracy\")])\nhistory = model.fit(train_data)\n\nhistory.history.keys()", "이전 버전은 이와 다르게 metrics=[\"accuracy\"]를 전달하면 dict_keys(['loss', 'acc'])가 됩니다.\n케라스 옵티마이저\nv1.train.AdamOptimizer나 v1.train.GradientDescentOptimizer 같은 v1.train에 있는 옵티마이저는 tf.keras.optimizers에 있는 것과 동일합니다.\nv1.train을 keras.optimizers로 바꾸기\n다음은 옵티마이저를 바꿀 때 유념해야 할 내용입니다:\n\n옵티마이저를 업그레이드하면 예전 체크포인트와 호환이되지 않을 수 있습니다.\n입실론 매개변수 기본값은 모두 1e-8에서 1e-7로 바뀌었습니다(대부분의 경우 큰 차이가 없습니다).\nv1.train.GradientDescentOptimizer는 tf.keras.optimizers.SGD로 바꿀 수 있습니다. \nv1.train.MomentumOptimizer는 모멘텀 매개변수를 사용하는 SGD 옵티마이저로 바꿀 수 있습니다: tf.keras.optimizers.SGD(..., momentum=...).\nv1.train.AdamOptimizer는 tf.keras.optimizers.Adam로 바꿀 수 있습니다. beta1과 beta2 매개변수는 beta_1과 beta_2로 이름이 바뀌었습니다.\nv1.train.RMSPropOptimizer는 tf.keras.optimizers.RMSprop로 바꿀 수 있습니다. decay 매개변수는 rho로 이름이 바뀌었습니다.\nv1.train.AdadeltaOptimizer는 tf.keras.optimizers.Adadelta로 바꿀 수 있습니다.\ntf.train.AdagradOptimizer는 tf.keras.optimizers.Adagrad로 바꿀 수 있습니다.\ntf.train.FtrlOptimizer는 tf.keras.optimizers.Ftrl로 바꿀 수 있습니다. accum_name과 linear_name 매개변수는 삭제되었습니다.\ntf.contrib.AdamaxOptimizer와 tf.contrib.NadamOptimizer는 tf.keras.optimizers.Adamax와 tf.keras.optimizers.Nadam로 바꿀 수 있습니다. beta1, beta2 매개변수는 beta_1, beta_2로 바뀌었습니다.\n\ntf.keras.optimizers의 새로운 기본값\n<a id=\"keras_optimizer_lr\"></a>\n주의: 만약 모델이 수렴하는데 변화가 있다면 학습률 기본값을 확인해 보세요.\noptimizers.SGD, optimizers.Adam, optimizers.RMSprop 기본값은 그대로입니다..\n학습률 기본값이 바뀐 경우는 다음과 같습니다:\n\noptimizers.Adagrad는 0.01에서 0.001로 바뀌었습니다.\noptimizers.Adadelta는 1.0에서 0.001로 바뀌었습니다.\noptimizers.Adamax는 0.002에서 0.001로 바뀌었습니다.\noptimizers.Nadam은 0.002에서 0.001로 바뀌었습니다.\n\n텐서보드\n텐서플로 2는 텐서보드(TensorBoard) 시각화를 위해 서머리(summary) 데이터를 작성하는데 사용하는 tf.summary API에 큰 변화가있습니다. 
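예를 들어 2.x 스타일의 서머리 작성은 대략 다음과 같은 형태입니다(로그 디렉터리 경로는 임의의 예시이며, 원문에는 없는 간단한 스케치입니다):

```python
# 간단한 예시: TF 2.x 스타일로 스칼라 서머리를 기록합니다 (경로는 예시입니다).
writer = tf.summary.create_file_writer("/tmp/tf2_summary_example")
with writer.as_default():
    for step in range(3):
        tf.summary.scalar("example_metric", 0.5 * step, step=step)
writer.flush()
```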
새로운 tf.summary에 대한 개괄 소개는 TF 2 API를 사용한 시작하기 튜토리얼와 텐서보드 TF 2 이전 가이드를 참고하세요.\n저장과 복원\n체크포인트 호환성\n텐서플로 2.0은 객체 기반의 체크포인트를 사용합니다.\n이전 이름 기반 스타일의 체크포인트도 여전히 복원할 수 있지만 주의가 필요합니다.\n코드 변환 과정 때문에 변수 이름이 바뀔 수 있지만 해결 방법이 있습니다.\n가장 간단한 방법은 새로운 모델의 이름과 체크포인트에 있는 이름을 나열해 보는 것입니다:\n\n여전히 모든 변수는 설정 가능한 name 매개변수를 가집니다.\n케라스 모델도 name 매개변수를 가집니다. 이 값은 변수 이름의 접두어로 사용됩니다.\nv1.name_scope 함수를 변수 이름의 접두어를 지정하는데 사용할 수 있습니다. 이 함수는 tf.variable_scope와는 매우 다릅니다. 이름에만 영향을 미치며 변수를 추적하거나 재사용을 관장하지 않습니다.\n\n이것이 주어진 상황에 잘 맞지 않는다면 v1.train.init_from_checkpoint 함수를 시도해 보세요. 이 함수는 assignment_map 매개변수로 예전 이름과 새로운 이름을 매핑할 수 있습니다.\n노트: 지연 적재가 되는 객체 기반 체크포인트와는 달리 이름 기반 체크포인트는 함수가 호출될 때 모든 변수가 만들어 집니다. 일부 모델은 build 메서드를 호출하거나 배치 데이터에서 모델을 실행할 때까지 변수 생성을 지연합니다.\n텐서플로 추정기(Estimator) 저장소에는 텐서플로 1.X의 추정기에서 만든 체크포인트를 2.0으로 업그레이드하는 변환 도구가 포함되어 있습니다. 비슷한 경우를 위한 도구를 만드는 방법을 보여주는 사례입니다.\nsaved_model 호환성\nsaved_model에는 심각한 호환성 문제가 없습니다.\n\n텐서플로 1.x의 saved_model은 텐서플로 2.0와 호환됩니다.\n텐서플로 2.0의 saved_model로 저장한 모델도 연산이 지원된다면 TensorFlow 1.x에서 작동됩니다.\n\nGraph.pb 또는 Graph.pbtxt\n원본 Graph.pb 파일을 텐서플로 2.0으로 업그레이드하는 쉬운 방법은 없습니다. 이 파일을 생성하는 코드를 업그레이드하는 것이 좋습니다.\n하지만 \"동결된 그래프\"(변수가 상수로 바뀐 tf.Graph)라면 v1.wrap_function를 사용해 concrete_function로 변환하는 것이 가능합니다:", "def wrap_frozen_graph(graph_def, inputs, outputs):\n def _imports_graph_def():\n tf.compat.v1.import_graph_def(graph_def, name=\"\")\n wrapped_import = tf.compat.v1.wrap_function(_imports_graph_def, [])\n import_graph = wrapped_import.graph\n return wrapped_import.prune(\n tf.nest.map_structure(import_graph.as_graph_element, inputs),\n tf.nest.map_structure(import_graph.as_graph_element, outputs))", "예를 들어 2016년 Inception v1의 동결된 그래프입니다:", "path = tf.keras.utils.get_file(\n 'inception_v1_2016_08_28_frozen.pb',\n 'http://storage.googleapis.com/download.tensorflow.org/models/inception_v1_2016_08_28_frozen.pb.tar.gz',\n untar=True)", "tf.GraphDef를 로드합니다:", "graph_def = tf.compat.v1.GraphDef()\nloaded = graph_def.ParseFromString(open(path,'rb').read())", "concrete_function로 감쌉니다:", "inception_func = wrap_frozen_graph(\n graph_def, inputs='input:0',\n outputs='InceptionV1/InceptionV1/Mixed_3b/Branch_1/Conv2d_0a_1x1/Relu:0')", "텐서를 입력으로 전달합니다:", "input_img = tf.ones([1,224,224,3], dtype=tf.float32)\ninception_func(input_img).shape", "추정기\n추정기로 훈련하기\n텐서플로 2.0은 추정기(estimator)를 지원합니다.\n추정기를 사용할 때 텐서플로 1.x의 input_fn(), tf.estimator.TrainSpec, tf.estimator.EvalSpec를 사용할 수 있습니다.\n다음은 input_fn을 사용하여 훈련과 평가를 수행하는 예입니다.\ninput_fn과 훈련/평가 스펙 만들기", "# 추정기 input_fn을 정의합니다.\ndef input_fn():\n datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)\n mnist_train, mnist_test = datasets['train'], datasets['test']\n\n BUFFER_SIZE = 10000\n BATCH_SIZE = 64\n\n def scale(image, label):\n image = tf.cast(image, tf.float32)\n image /= 255\n\n return image, label[..., tf.newaxis]\n\n train_data = mnist_train.map(scale).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)\n return train_data.repeat()\n\n# 훈련과 평가 스펙을 정의합니다.\ntrain_spec = tf.estimator.TrainSpec(input_fn=input_fn,\n max_steps=STEPS_PER_EPOCH * NUM_EPOCHS)\neval_spec = tf.estimator.EvalSpec(input_fn=input_fn,\n steps=STEPS_PER_EPOCH)", "케라스 모델 정의 사용하기\n텐서플로 2.0에서 추정기를 구성하는 방법은 조금 다릅니다.\n케라스를 사용하여 모델을 정의하는 것을 권장합니다. 그 다음 tf.keras.model_to_estimator 유틸리티를 사용하여 모델을 추정기로 바꾸세요. 
The following code shows how to use this utility when creating and training an estimator.", "def make_model():\n return tf.keras.Sequential([\n tf.keras.layers.Conv2D(32, 3, activation='relu',\n kernel_regularizer=tf.keras.regularizers.l2(0.02),\n input_shape=(28, 28, 1)),\n tf.keras.layers.MaxPooling2D(),\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dropout(0.1),\n tf.keras.layers.Dense(64, activation='relu'),\n tf.keras.layers.BatchNormalization(),\n tf.keras.layers.Dense(10)\n ])\n\nmodel = make_model()\n\nmodel.compile(optimizer='adam',\n loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n metrics=['accuracy'])\n\nestimator = tf.keras.estimator.model_to_estimator(\n keras_model = model\n)\n\ntf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)", "Note: weighted metrics are not supported in Keras, and cannot be converted into weighted metrics in the estimator API via model_to_estimator. You will have to create these metrics directly on the estimator spec using the add_metrics function.\nUsing a custom model_fn\nIf you have an existing custom estimator model_fn that you need to maintain, you can convert your model_fn to use a Keras model.\nHowever, for compatibility reasons, a custom model_fn will still run in 1.x-style graph mode. This means there is no eager execution and no automatic control dependencies.\n<a name=\"minimal_changes\"></a>\nCustom model_fn with minimal changes\nTo make your custom model_fn work in TF 2.0 with minimal changes, you can use the optimizers and metrics under tf.compat.v1.\nUsing a Keras model in a custom model_fn is similar to using it in a custom training loop:\n\nSet the training phase appropriately, based on the mode argument.\nExplicitly pass the model's trainable_variables to the optimizer.\n\nBut there are important differences, relative to a custom loop:\n\nInstead of using model.losses, extract the losses using tf.keras.Model.get_losses_for.\nExtract the model's updates using tf.keras.Model.get_updates_for.\n\nNote: \"updates\" are changes that need to be applied to a model after each batch. For example, the moving averages of the mean and variance in a tf.keras.layers.BatchNormalization layer.\nThe following code creates an estimator from a custom model_fn, illustrating all of these concerns.", "def my_model_fn(features, labels, mode):\n model = make_model()\n\n optimizer = tf.compat.v1.train.AdamOptimizer()\n loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)\n\n training = (mode == tf.estimator.ModeKeys.TRAIN)\n predictions = model(features, training=training)\n\n if mode == tf.estimator.ModeKeys.PREDICT:\n return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)\n\n reg_losses = model.get_losses_for(None) + model.get_losses_for(features)\n total_loss=loss_fn(labels, predictions) + tf.math.add_n(reg_losses)\n\n accuracy = tf.compat.v1.metrics.accuracy(labels=labels,\n predictions=tf.math.argmax(predictions, axis=1),\n name='acc_op')\n\n update_ops = model.get_updates_for(None) + model.get_updates_for(features)\n minimize_op = optimizer.minimize(\n total_loss,\n var_list=model.trainable_variables,\n global_step=tf.compat.v1.train.get_or_create_global_step())\n train_op = tf.group(minimize_op, update_ops)\n\n return tf.estimator.EstimatorSpec(\n mode=mode,\n predictions=predictions,\n loss=total_loss,\n train_op=train_op, eval_metric_ops={'accuracy': accuracy})\n\n# Create the Estimator and train it.\nestimator = tf.estimator.Estimator(model_fn=my_model_fn)\ntf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)", "Custom model_fn with TF 2.0 symbols\nIf you want to get rid of all TF 1.x symbols and upgrade your custom model_fn to TF 2.0, you need to update the optimizer and metrics to tf.keras.optimizers and tf.keras.metrics.\nIn addition to the minimal changes above, the following also need to be upgraded in the custom model_fn:\n\nUse tf.keras.optimizers instead of v1.train.Optimizer.\nExplicitly pass the model's trainable_variables to the tf.keras.optimizers.\nTo compute the train_op/minimize_op,\nUse Optimizer.get_updates() if the loss is a scalar loss Tensor (not a callable). The first element of the returned list is the desired train_op/minimize_op. 
\nUse Optimizer.minimize() if the loss is callable (such as a function) to get the train_op/minimize_op object.\nFor evaluation, use tf.keras.metrics instead of tf.compat.v1.metrics.\n\nHere is the above example of my_model_fn, migrated to 2.0:", "def my_model_fn(features, labels, mode):\n model = make_model()\n\n training = (mode == tf.estimator.ModeKeys.TRAIN)\n loss_obj = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)\n predictions = model(features, training=training)\n\n # Get both the unconditional losses (the None part)\n # and the input-conditional losses (the features part).\n reg_losses = model.get_losses_for(None) + model.get_losses_for(features)\n total_loss=loss_obj(labels, predictions) + tf.math.add_n(reg_losses)\n\n # Upgrade to tf.keras.metrics.\n accuracy_obj = tf.keras.metrics.Accuracy(name='acc_obj')\n accuracy = accuracy_obj.update_state(\n y_true=labels, y_pred=tf.math.argmax(predictions, axis=1))\n\n train_op = None\n if training:\n # Upgrade to tf.keras.optimizers.\n optimizer = tf.keras.optimizers.Adam()\n # Manually assign the tf.compat.v1.global_step variable to optimizer.iterations\n # so that tf.compat.v1.train.global_step is increased correctly.\n # This assignment is a must for any `tf.train.SessionRunHook` specified in the\n # estimator, because SessionRunHooks rely on global_step.\n optimizer.iterations = tf.compat.v1.train.get_or_create_global_step()\n # Get both the unconditional updates (the None part)\n # and the input-conditional updates (the features part).\n update_ops = model.get_updates_for(None) + model.get_updates_for(features)\n # Compute the minimize_op.\n minimize_op = optimizer.get_updates(\n total_loss,\n model.trainable_variables)[0]\n train_op = tf.group(minimize_op, *update_ops)\n\n return tf.estimator.EstimatorSpec(\n mode=mode,\n predictions=predictions,\n loss=total_loss,\n train_op=train_op,\n eval_metric_ops={'Accuracy': accuracy_obj})\n\n# Create the Estimator and train it.\nestimator = tf.estimator.Estimator(model_fn=my_model_fn)\ntf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)", "Premade estimators\nPremade estimators in the family of tf.estimator.DNN*, tf.estimator.Linear* and tf.estimator.DNNLinearCombined* are still supported in the TensorFlow 2.0 API, but some arguments have changed:\n\ninput_layer_partitioner: removed in 2.0.\nloss_reduction: updated to tf.keras.losses.Reduction instead of tf.compat.v1.losses.Reduction. Its default value also changed from tf.compat.v1.losses.Reduction.SUM to tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE.\noptimizer, dnn_optimizer and linear_optimizer: these arguments have been updated to tf.keras.optimizers instead of tf.compat.v1.train.Optimizer.\n\nTo migrate the above changes:\n1. No migration is needed for input_layer_partitioner, since Distribution Strategy handles it automatically in TF 2.0.\n2. For loss_reduction, check tf.keras.losses.Reduction for the supported options.\n3. For the optimizer arguments: if you do not pass in an optimizer, dnn_optimizer or linear_optimizer argument, or if you specify the optimizer argument as a string in your code, you do not need to change anything; tf.keras.optimizers is used by default. Otherwise, you need to replace the tf.compat.v1.train.Optimizer with its corresponding tf.keras.optimizers.\nCheckpoint converter\n<a id=\"checkpoint_converter\"></a>\nMigrating to keras.optimizers will break checkpoints saved with TF 1.X, because tf.keras.optimizers saves a different set of variables in checkpoints.\nTo make an old checkpoint reusable after migrating to TF 2.0, use the checkpoint converter tool.", "! curl -O https://raw.githubusercontent.com/tensorflow/estimator/master/tensorflow_estimator/python/estimator/tools/checkpoint_converter.py", "The tool has built-in help:", "! python checkpoint_converter.py -h", "TensorShape\nThis class was simplified to hold int values instead of tf.compat.v1.Dimension objects. 
So there is no need to call the .value() method to get the int.\nIndividual tf.compat.v1.Dimension objects are still accessible from tf.TensorShape.dims.\nThe following demonstrates the differences between TensorFlow 1.x and TensorFlow 2.0.", "# Create a TensorShape object and index into it.\ni = 0\nshape = tf.TensorShape([16, None, 256])\nshape", "If you had this in TF 1.x:\npython\nvalue = shape[i].value\nThen do this in TF 2.0:", "value = shape[i]\nvalue", "If you had this in TF 1.x:\npython\nfor dim in shape:\n value = dim.value\n print(value)\nThen do this in TF 2.0:", "for value in shape:\n print(value)", "If you had this in TF 1.x (or used any other Dimension method):\npython\ndim = shape[i]\ndim.assert_is_compatible_with(other_dim)\nThen do this in TF 2.0:", "other_dim = 16\nDimension = tf.compat.v1.Dimension\n\nif shape.rank is None:\n dim = Dimension(None)\nelse:\n dim = shape.dims[i]\ndim.is_compatible_with(other_dim) # or any other Dimension method\n\nshape = tf.TensorShape(None)\n\nif shape:\n dim = shape.dims[i]\n dim.is_compatible_with(other_dim) # or any other Dimension method", "The boolean value of a tf.TensorShape is True if the rank is known, and False otherwise.", "print(bool(tf.TensorShape([]))) # Scalar\nprint(bool(tf.TensorShape([0]))) # 0-length vector\nprint(bool(tf.TensorShape([1]))) # 1-length vector\nprint(bool(tf.TensorShape([None]))) # Vector of unknown length\nprint(bool(tf.TensorShape([1, 10, 100]))) # 3D tensor\nprint(bool(tf.TensorShape([None, None, None]))) # 3D tensor with unknown dimensions\nprint()\nprint(bool(tf.TensorShape(None))) # A tensor with unknown rank", "Other changes\n\n\ntf.colocate_with removed: TensorFlow's device placement algorithms have improved significantly. This op is no longer necessary. If removing it causes a performance degradation, please file a bug.\n\n\nReplace v1.ConfigProto usage with the equivalent functions in tf.config.\n\n\nConclusions\nThe overall process is:\n\nRun the upgrade script.\nRemove contrib symbols.\nSwitch your models to an object-oriented style (Keras).\nUse tf.keras or tf.estimator training and evaluation loops where you can.\nOtherwise, use custom loops, but be sure to avoid sessions and collections.\n\nIt takes a little work to convert code to idiomatic TensorFlow 2.0, but every change results in:\n\nFewer lines of code.\nIncreased clarity and simplicity.\nEasier debugging." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mne-tools/mne-tools.github.io
dev/_downloads/78ad76ea5b03c29b4b851b8b64f74b68/linear_model_patterns.ipynb
bsd-3-clause
[ "%matplotlib inline", "Linear classifier on sensor data with plot patterns and filters\nHere decoding, a.k.a MVPA or supervised machine learning, is applied to M/EEG\ndata in sensor space. Fit a linear classifier with the LinearModel object\nproviding topographical patterns which are more neurophysiologically\ninterpretable :footcite:HaufeEtAl2014 than the classifier filters (weight\nvectors). The patterns explain how the MEG and EEG data were generated from\nthe discriminant neural sources which are extracted by the filters.\nNote patterns/filters in MEG data are more similar than EEG data\nbecause the noise is less spatially correlated in MEG than EEG.", "# Authors: Alexandre Gramfort <[email protected]>\n# Romain Trachel <[email protected]>\n# Jean-Remi King <[email protected]>\n#\n# License: BSD-3-Clause\n\nimport mne\nfrom mne import io, EvokedArray\nfrom mne.datasets import sample\nfrom mne.decoding import Vectorizer, get_coef\n\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.pipeline import make_pipeline\n\n# import a linear classifier from mne.decoding\nfrom mne.decoding import LinearModel\n\nprint(__doc__)\n\ndata_path = sample.data_path()\nsample_path = data_path / 'MEG' / 'sample'", "Set parameters", "raw_fname = sample_path / 'sample_audvis_filt-0-40_raw.fif'\nevent_fname = sample_path / 'sample_audvis_filt-0-40_raw-eve.fif'\ntmin, tmax = -0.1, 0.4\nevent_id = dict(aud_l=1, vis_l=3)\n\n# Setup for reading the raw data\nraw = io.read_raw_fif(raw_fname, preload=True)\nraw.filter(.5, 25, fir_design='firwin')\nevents = mne.read_events(event_fname)\n\n# Read epochs\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,\n decim=2, baseline=None, preload=True)\ndel raw\n\nlabels = epochs.events[:, -1]\n\n# get MEG and EEG data\nmeg_epochs = epochs.copy().pick_types(meg=True, eeg=False)\nmeg_data = meg_epochs.get_data().reshape(len(labels), -1)", "Decoding in sensor space using a LogisticRegression classifier", "clf = LogisticRegression(solver='liblinear') # liblinear is faster than lbfgs\nscaler = StandardScaler()\n\n# create a linear model with LogisticRegression\nmodel = LinearModel(clf)\n\n# fit the classifier on MEG data\nX = scaler.fit_transform(meg_data)\nmodel.fit(X, labels)\n\n# Extract and plot spatial filters and spatial patterns\nfor name, coef in (('patterns', model.patterns_), ('filters', model.filters_)):\n # We fitted the linear model onto Z-scored data. To make the filters\n # interpretable, we must reverse this normalization step\n coef = scaler.inverse_transform([coef])[0]\n\n # The data was vectorized to fit a single model across all time points and\n # all channels. 
We thus reshape it:\n coef = coef.reshape(len(meg_epochs.ch_names), -1)\n\n # Plot\n evoked = EvokedArray(coef, meg_epochs.info, tmin=epochs.tmin)\n evoked.plot_topomap(title='MEG %s' % name, time_unit='s')", "Let's do the same on EEG data using a scikit-learn pipeline", "X = epochs.pick_types(meg=False, eeg=True)\ny = epochs.events[:, 2]\n\n# Define a unique pipeline to sequentially:\nclf = make_pipeline(\n Vectorizer(), # 1) vectorize across time and channels\n StandardScaler(), # 2) normalize features across trials\n LinearModel( # 3) fits a logistic regression\n LogisticRegression(solver='liblinear')\n )\n)\nclf.fit(X, y)\n\n# Extract and plot patterns and filters\nfor name in ('patterns_', 'filters_'):\n # The `inverse_transform` parameter will call this method on any estimator\n # contained in the pipeline, in reverse order.\n coef = get_coef(clf, name, inverse_transform=True)\n evoked = EvokedArray(coef, epochs.info, tmin=epochs.tmin)\n evoked.plot_topomap(title='EEG %s' % name[:-1], time_unit='s')", "References\n.. footbibliography::" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
intel-analytics/BigDL
python/orca/colab-notebook/quickstart/pytorch_lenet_mnist.ipynb
apache-2.0
[ "<a href=\"https://colab.research.google.com/github/intel-analytics/BigDL/blob/branch-2.0/python/orca/colab-notebook/quickstart/pytorch_lenet_mnist.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\n\nCopyright 2016 The BigDL Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#", "Environment Preparation\nInstall Java 8\nRun the cell on the Google Colab to install jdk 1.8.\nNote: if you run this notebook on your computer, root permission is required when running the cell to install Java 8. (You may ignore this cell if Java 8 has already been set up in your computer).", "# Install jdk8\n!apt-get install openjdk-8-jdk-headless -qq > /dev/null\nimport os\n# Set environment variable JAVA_HOME.\nos.environ[\"JAVA_HOME\"] = \"/usr/lib/jvm/java-8-openjdk-amd64\"\n!update-alternatives --set java /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java\n!java -version", "Install BigDL Orca\nConda is needed to prepare the Python environment for running this example. \nNote: The following code cell is specific for setting up conda environment on Colab; for general conda installation, please refer to the install guide for more details.", "import sys\n\n# Set current python version\npython_version = f\"3.7.10\"\n\n# Install Miniconda\n!wget https://repo.continuum.io/miniconda/Miniconda3-4.5.4-Linux-x86_64.sh\n!chmod +x Miniconda3-4.5.4-Linux-x86_64.sh\n!./Miniconda3-4.5.4-Linux-x86_64.sh -b -f -p /usr/local\n\n# Update Conda\n!conda install --channel defaults conda python=$python_version --yes\n!conda update --channel defaults --all --yes\n\n# Append to the sys.path\n_ = (sys.path\n .append(f\"/usr/local/lib/python3.7/site-packages\"))\n\nos.environ['PYTHONHOME']=\"/usr/local\"", "You can install the latest pre-release version using pip install --pre --upgrade bigdl-orca.", "# Install latest pre-release version of BigDL Orca \n# Installing BigDL Orca from pip will automatically install pyspark, bigdl, and their dependencies.\n!pip install --pre --upgrade bigdl-orca\n\n# Install python dependencies\n!pip install torch==1.7.1 torchvision==0.8.2\n!pip install six cloudpickle\n!pip install jep==3.9.0", "Distributed PyTorch using Orca APIs\nIn this guide we will describe how to scale out PyTorch programs using Orca in 4 simple steps.", "# import necesary libraries and modules\nfrom __future__ import print_function\nimport os\nimport argparse\n\nfrom bigdl.orca import init_orca_context, stop_orca_context\nfrom bigdl.orca import OrcaContext", "Step 1: Init Orca Context", "# recommended to set it to True when running BigDL in Jupyter notebook. 
\nOrcaContext.log_output = True # (this will display terminal's stdout and stderr in the Jupyter notebook).\n\ncluster_mode = \"local\"\n\nif cluster_mode == \"local\":\n init_orca_context(cores=1, memory=\"2g\") # run in local mode\nelif cluster_mode == \"k8s\":\n init_orca_context(cluster_mode=\"k8s\", num_nodes=2, cores=4) # run on K8s cluster\nelif cluster_mode == \"yarn\":\n init_orca_context(\n cluster_mode=\"yarn-client\", cores=4, num_nodes=2, memory=\"2g\",\n driver_memory=\"10g\", driver_cores=1,\n conf={\"spark.rpc.message.maxSize\": \"1024\",\n \"spark.task.maxFailures\": \"1\",\n \"spark.driver.extraJavaOptions\": \"-Dbigdl.failure.retryTimes=1\"}) # run on Hadoop YARN cluster", "This is the only place where you need to specify local or distributed mode. View Orca Context for more details.\nNote: You should export HADOOP_CONF_DIR=/path/to/hadoop/conf/dir when you run on Hadoop YARN cluster.\nStep 2: Define the Model\nYou may define your model, loss and optimizer in the same way as in any standard (single node) PyTorch program.", "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nclass LeNet(nn.Module):\n def __init__(self):\n super(LeNet, self).__init__()\n self.conv1 = nn.Conv2d(1, 20, 5, 1)\n self.conv2 = nn.Conv2d(20, 50, 5, 1)\n self.fc1 = nn.Linear(4*4*50, 500)\n self.fc2 = nn.Linear(500, 10)\n\n def forward(self, x):\n x = F.relu(self.conv1(x))\n x = F.max_pool2d(x, 2, 2)\n x = F.relu(self.conv2(x))\n x = F.max_pool2d(x, 2, 2)\n x = x.view(-1, 4*4*50)\n x = F.relu(self.fc1(x))\n x = self.fc2(x)\n return F.log_softmax(x, dim=1)\n\nmodel = LeNet()\nmodel.train()\ncriterion = nn.NLLLoss()\nlr = 0.001\n\nadam = torch.optim.Adam(model.parameters(), lr)", "Step 3: Define Train Dataset\nYou can define the dataset using standard Pytorch DataLoader.", "import torch\nfrom torchvision import datasets, transforms\n\ntorch.manual_seed(0)\ndir='/tmp/dataset'\nbatch_size=320\ntest_batch_size=320\n\ntrain_loader = torch.utils.data.DataLoader(\n datasets.MNIST(dir, train=True, download=True,\n transform=transforms.Compose([\n transforms.ToTensor(),\n transforms.Normalize((0.1307,), (0.3081,))\n ])),\n batch_size= batch_size, shuffle=True)\n\ntest_loader = torch.utils.data.DataLoader(\n datasets.MNIST(dir, train=False,\n transform=transforms.Compose([\n transforms.ToTensor(),\n transforms.Normalize((0.1307,), (0.3081,))\n ])),\n batch_size=test_batch_size, shuffle=False)", "Step 4: Fit with Orca Estimator\nFirst, Create an Estimator.", "from bigdl.orca.learn.pytorch import Estimator \nfrom bigdl.orca.learn.metrics import Accuracy\n\nest = Estimator.from_torch(model=model, optimizer=adam, loss=criterion, metrics=[Accuracy()])", "Next, fit and evaluate using the Estimator.", "from bigdl.orca.learn.trigger import EveryEpoch \n\nest.fit(data=train_loader, epochs=1, validation_data=test_loader,\n checkpoint_trigger=EveryEpoch())", "Finally, evaluate using the Estimator.", "result = est.evaluate(data=test_loader)\nfor r in result:\n print(r, \":\", result[r])", "The accuracy of this model has reached 98%.", "# stop orca context when program finishes\nstop_orca_context()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
stable/_downloads/e23ed246a9a354f899dfb3ce3b06e194/10_overview.ipynb
bsd-3-clause
[ "%matplotlib inline", "Overview of MEG/EEG analysis with MNE-Python\nThis tutorial covers the basic EEG/MEG pipeline for event-related analysis:\nloading data, epoching, averaging, plotting, and estimating cortical activity\nfrom sensor data. It introduces the core MNE-Python data structures\n~mne.io.Raw, ~mne.Epochs, ~mne.Evoked, and ~mne.SourceEstimate, and\ncovers a lot of ground fairly quickly (at the expense of depth). Subsequent\ntutorials address each of these topics in greater detail.\nWe begin by importing the necessary Python modules:", "import os\nimport numpy as np\nimport mne", "Loading data\nMNE-Python data structures are based around the FIF file format from\nNeuromag, but there are reader functions for a wide variety of other\ndata formats &lt;data-formats&gt;. MNE-Python also has interfaces to a\nvariety of publicly available datasets &lt;datasets&gt;, which MNE-Python\ncan download and manage for you.\nWe'll start this tutorial by loading one of the example datasets (called\n\"sample-dataset\"), which contains EEG and MEG data from one subject\nperforming an audiovisual experiment, along with structural MRI scans for\nthat subject. The mne.datasets.sample.data_path function will automatically\ndownload the dataset if it isn't found in one of the expected locations, then\nreturn the directory path to the dataset (see the documentation of\n~mne.datasets.sample.data_path for a list of places it checks before\ndownloading). Note also that for this tutorial to run smoothly on our\nservers, we're using a filtered and downsampled version of the data\n(:file:sample_audvis_filt-0-40_raw.fif), but an unfiltered version\n(:file:sample_audvis_raw.fif) is also included in the sample dataset and\ncould be substituted here when running the tutorial locally.", "sample_data_folder = mne.datasets.sample.data_path()\nsample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',\n 'sample_audvis_filt-0-40_raw.fif')\nraw = mne.io.read_raw_fif(sample_data_raw_file)", "By default, ~mne.io.read_raw_fif displays some information about the file\nit's loading; for example, here it tells us that there are four \"projection\nitems\" in the file along with the recorded data; those are :term:SSP\nprojectors &lt;projector&gt; calculated to remove environmental noise from the MEG\nsignals, plus a projector to mean-reference the EEG channels; these are\ndiscussed in the tutorial tut-projectors-background. In addition to\nthe information displayed during loading, you can get a glimpse of the basic\ndetails of a ~mne.io.Raw object by printing it; even more is available by\nprinting its info attribute (a dictionary-like object &lt;mne.Info&gt; that\nis preserved across ~mne.io.Raw, ~mne.Epochs, and ~mne.Evoked objects).\nThe info data structure keeps track of channel locations, applied\nfilters, projectors, etc. Notice especially the chs entry, showing that\nMNE-Python detects different sensor types and handles each appropriately. See\ntut-info-class for more on the ~mne.Info class.", "print(raw)\nprint(raw.info)", "~mne.io.Raw objects also have several built-in plotting methods; here we\nshow the power spectral density (PSD) for each sensor type with\n~mne.io.Raw.plot_psd, as well as a plot of the raw sensor traces with\n~mne.io.Raw.plot. In the PSD plot, we'll only plot frequencies below 50 Hz\n(since our data are low-pass filtered at 40 Hz). 
In interactive Python\nsessions, ~mne.io.Raw.plot is interactive and allows scrolling, scaling,\nbad channel marking, annotations, projector toggling, etc.", "raw.plot_psd(fmax=50)\nraw.plot(duration=5, n_channels=30)", "Preprocessing\nMNE-Python supports a variety of preprocessing approaches and techniques\n(maxwell filtering, signal-space projection, independent components analysis,\nfiltering, downsampling, etc); see the full list of capabilities in the\n:mod:mne.preprocessing and :mod:mne.filter submodules. Here we'll clean\nup our data by performing independent components analysis\n(~mne.preprocessing.ICA); for brevity we'll skip the steps that helped us\ndetermined which components best capture the artifacts (see\ntut-artifact-ica for a detailed walk-through of that process).", "# set up and fit the ICA\nica = mne.preprocessing.ICA(n_components=20, random_state=97, max_iter=800)\nica.fit(raw)\nica.exclude = [1, 2] # details on how we picked these are omitted here\nica.plot_properties(raw, picks=ica.exclude)", "Once we're confident about which component(s) we want to remove, we pass them\nas the exclude parameter and then apply the ICA to the raw signal. The\n~mne.preprocessing.ICA.apply method requires the raw data to be loaded into\nmemory (by default it's only read from disk as-needed), so we'll use\n~mne.io.Raw.load_data first. We'll also make a copy of the ~mne.io.Raw\nobject so we can compare the signal before and after artifact removal\nside-by-side:", "orig_raw = raw.copy()\nraw.load_data()\nica.apply(raw)\n\n# show some frontal channels to clearly illustrate the artifact removal\nchs = ['MEG 0111', 'MEG 0121', 'MEG 0131', 'MEG 0211', 'MEG 0221', 'MEG 0231',\n 'MEG 0311', 'MEG 0321', 'MEG 0331', 'MEG 1511', 'MEG 1521', 'MEG 1531',\n 'EEG 001', 'EEG 002', 'EEG 003', 'EEG 004', 'EEG 005', 'EEG 006',\n 'EEG 007', 'EEG 008']\nchan_idxs = [raw.ch_names.index(ch) for ch in chs]\norig_raw.plot(order=chan_idxs, start=12, duration=4)\nraw.plot(order=chan_idxs, start=12, duration=4)", "Detecting experimental events\nThe sample dataset includes several :term:\"STIM\" channels &lt;stim channel&gt;\nthat recorded electrical signals sent from the stimulus delivery computer (as\nbrief DC shifts / squarewave pulses). These pulses (often called \"triggers\")\nare used in this dataset to mark experimental events: stimulus onset,\nstimulus type, and participant response (button press). The individual STIM\nchannels are combined onto a single channel, in such a way that voltage\nlevels on that channel can be unambiguously decoded as a particular event\ntype. On older Neuromag systems (such as that used to record the sample data)\nthis summation channel was called STI 014, so we can pass that channel\nname to the mne.find_events function to recover the timing and identity of\nthe stimulus events.", "events = mne.find_events(raw, stim_channel='STI 014')\nprint(events[:5]) # show the first 5", "The resulting events array is an ordinary 3-column :class:NumPy array\n&lt;numpy.ndarray&gt;, with sample number in the first column and integer event ID\nin the last column; the middle column is usually ignored. Rather than keeping\ntrack of integer event IDs, we can provide an event dictionary that maps\nthe integer IDs to experimental conditions or events. 
In this dataset, the\nmapping looks like this:\n+----------+----------------------------------------------------------+\n| Event ID | Condition |\n+==========+==========================================================+\n| 1 | auditory stimulus (tone) to the left ear |\n+----------+----------------------------------------------------------+\n| 2 | auditory stimulus (tone) to the right ear |\n+----------+----------------------------------------------------------+\n| 3 | visual stimulus (checkerboard) to the left visual field |\n+----------+----------------------------------------------------------+\n| 4 | visual stimulus (checkerboard) to the right visual field |\n+----------+----------------------------------------------------------+\n| 5 | smiley face (catch trial) |\n+----------+----------------------------------------------------------+\n| 32 | subject button press |\n+----------+----------------------------------------------------------+", "event_dict = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,\n 'visual/right': 4, 'smiley': 5, 'buttonpress': 32}", "Event dictionaries like this one are used when extracting epochs from\ncontinuous data; the / character in the dictionary keys allows pooling\nacross conditions by requesting partial condition descriptors (i.e.,\nrequesting 'auditory' will select all epochs with Event IDs 1 and 2;\nrequesting 'left' will select all epochs with Event IDs 1 and 3). An\nexample of this is shown in the next section. There is also a convenient\n~mne.viz.plot_events function for visualizing the distribution of events\nacross the duration of the recording (to make sure event detection worked as\nexpected). Here we'll also make use of the ~mne.Info attribute to get the\nsampling frequency of the recording (so our x-axis will be in seconds instead\nof in samples).", "fig = mne.viz.plot_events(events, event_id=event_dict, sfreq=raw.info['sfreq'],\n first_samp=raw.first_samp)", "For paradigms that are not event-related (e.g., analysis of resting-state\ndata), you can extract regularly spaced (possibly overlapping) spans of data\nby creating events using mne.make_fixed_length_events and then proceeding\nwith epoching as described in the next section.\nEpoching continuous data\nThe ~mne.io.Raw object and the events array are the bare minimum needed to\ncreate an ~mne.Epochs object, which we create with the ~mne.Epochs class\nconstructor. Here we'll also specify some data quality constraints: we'll\nreject any epoch where peak-to-peak signal amplitude is beyond reasonable\nlimits for that channel type. This is done with a rejection dictionary; you\nmay include or omit thresholds for any of the channel types present in your\ndata. The values given here are reasonable for this particular dataset, but\nmay need to be adapted for different hardware or recording conditions. For a\nmore automated approach, consider using the autoreject package_.", "reject_criteria = dict(mag=4000e-15, # 4000 fT\n grad=4000e-13, # 4000 fT/cm\n eeg=150e-6, # 150 µV\n eog=250e-6) # 250 µV", "We'll also pass the event dictionary as the event_id parameter (so we can\nwork with easy-to-pool event labels instead of the integer event IDs), and\nspecify tmin and tmax (the time relative to each event at which to\nstart and end each epoch). 
As mentioned above, by default ~mne.io.Raw and\n~mne.Epochs data aren't loaded into memory (they're accessed from disk only\nwhen needed), but here we'll force loading into memory using the\npreload=True parameter so that we can see the results of the rejection\ncriteria being applied:", "epochs = mne.Epochs(raw, events, event_id=event_dict, tmin=-0.2, tmax=0.5,\n reject=reject_criteria, preload=True)", "Next we'll pool across left/right stimulus presentations so we can compare\nauditory versus visual responses. To avoid biasing our signals to the left or\nright, we'll use ~mne.Epochs.equalize_event_counts first to randomly sample\nepochs from each condition to match the number of epochs present in the\ncondition with the fewest good epochs.", "conds_we_care_about = ['auditory/left', 'auditory/right',\n 'visual/left', 'visual/right']\nepochs.equalize_event_counts(conds_we_care_about) # this operates in-place\naud_epochs = epochs['auditory']\nvis_epochs = epochs['visual']\ndel raw, epochs # free up memory", "Like ~mne.io.Raw objects, ~mne.Epochs objects also have a number of\nbuilt-in plotting methods. One is ~mne.Epochs.plot_image, which shows each\nepoch as one row of an image map, with color representing signal magnitude;\nthe average evoked response and the sensor location are shown below the\nimage:", "aud_epochs.plot_image(picks=['MEG 1332', 'EEG 021'])", "<div class=\"alert alert-info\"><h4>Note</h4><p>Both `~mne.io.Raw` and `~mne.Epochs` objects have `~mne.Epochs.get_data`\n methods that return the underlying data as a\n :class:`NumPy array <numpy.ndarray>`. Both methods have a ``picks``\n parameter for subselecting which channel(s) to return; ``raw.get_data()``\n has additional parameters for restricting the time domain. The resulting\n matrices have dimension ``(n_channels, n_times)`` for `~mne.io.Raw` and\n ``(n_epochs, n_channels, n_times)`` for `~mne.Epochs`.</p></div>\n\nTime-frequency analysis\nThe :mod:mne.time_frequency submodule provides implementations of several\nalgorithms to compute time-frequency representations, power spectral density,\nand cross-spectral density. Here, for example, we'll compute for the auditory\nepochs the induced power at different frequencies and times, using Morlet\nwavelets. On this dataset the result is not especially informative (it just\nshows the evoked \"auditory N100\" response); see here\n&lt;inter-trial-coherence&gt; for a more extended example on a dataset with richer\nfrequency content.", "frequencies = np.arange(7, 30, 3)\npower = mne.time_frequency.tfr_morlet(aud_epochs, n_cycles=2, return_itc=False,\n freqs=frequencies, decim=3)\npower.plot(['MEG 1332'])", "Estimating evoked responses\nNow that we have our conditions in aud_epochs and vis_epochs, we can\nget an estimate of evoked responses to auditory versus visual stimuli by\naveraging together the epochs in each condition. This is as simple as calling\nthe ~mne.Epochs.average method on the ~mne.Epochs object, and then using\na function from the :mod:mne.viz module to compare the global field power\nfor each sensor type of the two ~mne.Evoked objects:", "aud_evoked = aud_epochs.average()\nvis_evoked = vis_epochs.average()\n\nmne.viz.plot_compare_evokeds(dict(auditory=aud_evoked, visual=vis_evoked),\n legend='upper left', show_sensors='upper right')", "We can also get a more detailed view of each ~mne.Evoked object using other\nplotting methods such as ~mne.Evoked.plot_joint or\n~mne.Evoked.plot_topomap. 
Here we'll examine just the EEG channels, and see\nthe classic auditory evoked N100-P200 pattern over dorso-frontal electrodes,\nthen plot scalp topographies at some additional arbitrary times:", "aud_evoked.plot_joint(picks='eeg')\naud_evoked.plot_topomap(times=[0., 0.08, 0.1, 0.12, 0.2], ch_type='eeg')", "Evoked objects can also be combined to show contrasts between conditions,\nusing the mne.combine_evoked function. A simple difference can be\ngenerated by passing weights=[1, -1]. We'll then plot the difference wave\nat each sensor using ~mne.Evoked.plot_topo:", "evoked_diff = mne.combine_evoked([aud_evoked, vis_evoked], weights=[1, -1])\nevoked_diff.pick_types(meg='mag').plot_topo(color='r', legend=False)", "Inverse modeling\nFinally, we can estimate the origins of the evoked activity by projecting the\nsensor data into this subject's :term:source space (a set of points either\non the cortical surface or within the cortical volume of that subject, as\nestimated by structural MRI scans). MNE-Python supports lots of ways of doing\nthis (dynamic statistical parametric mapping, dipole fitting, beamformers,\netc.); here we'll use minimum-norm estimation (MNE) to generate a continuous\nmap of activation constrained to the cortical surface. MNE uses a linear\n:term:inverse operator to project EEG+MEG sensor measurements into the\nsource space. The inverse operator is computed from the\n:term:forward solution for this subject and an estimate of the\ncovariance of sensor measurements &lt;tut-compute-covariance&gt;. For this\ntutorial we'll skip those computational steps and load a pre-computed inverse\noperator from disk (it's included with the sample data\n&lt;sample-dataset&gt;). Because this \"inverse problem\" is underdetermined (there\nis no unique solution), here we further constrain the solution by providing a\nregularization parameter specifying the relative smoothness of the current\nestimates in terms of a signal-to-noise ratio (where \"noise\" here is akin to\nbaseline activity level across all of cortex).", "# load inverse operator\ninverse_operator_file = os.path.join(sample_data_folder, 'MEG', 'sample',\n 'sample_audvis-meg-oct-6-meg-inv.fif')\ninv_operator = mne.minimum_norm.read_inverse_operator(inverse_operator_file)\n# set signal-to-noise ratio (SNR) to compute regularization parameter (λ²)\nsnr = 3.\nlambda2 = 1. / snr ** 2\n# generate the source time course (STC)\nstc = mne.minimum_norm.apply_inverse(vis_evoked, inv_operator,\n lambda2=lambda2,\n method='MNE') # or dSPM, sLORETA, eLORETA", "Finally, in order to plot the source estimate on the subject's cortical\nsurface we'll also need the path to the sample subject's structural MRI files\n(the subjects_dir):", "# path to subjects' MRI files\nsubjects_dir = os.path.join(sample_data_folder, 'subjects')\n# plot the STC\nstc.plot(initial_time=0.1, hemi='split', views=['lat', 'med'],\n subjects_dir=subjects_dir)", "The remaining tutorials have much more detail on each of these topics (as\nwell as many other capabilities of MNE-Python not mentioned here:\nconnectivity analysis, encoding/decoding models, lots more visualization\noptions, etc). Read on to learn more!\n.. LINKS" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Elucidation/lumpsum_vs_dca
Lumpsum_vs_DCA.ipynb
apache-2.0
[ "Comparing Lump Sum vs. Dollar Cost Averaging (DCA) investing\n\nView Notebook as HTML\nView Notebook on GitHub\nView Notebook on Blog\n\n\nThe topic of investing all at once versus spreading it over time has come up a few times with peers. I remembered reading in both bogleheads and an Investopedia article that lump sum beats DCA ~66% of the time. \n\nBoth lump-sum investing and DCA have their appropriate time and place. The research shows that lump-sum investing pays off about 66% of the time, which is a long way from all the time. It certainly makes sense to look carefully at the current market conditions. If you hit that bad 33% in lumpy style, you can lose a lot of money.\n\nThe idea espoused is that the market trends up in the long term, and therefore it's better to invest as early as possible instead of spreading your investments around to avoid the bottom; time on the market is statistically better.\nSounds logical, but when it's your money on the line, something sounding good isn't always good enough.\nI decided to run an experiment validating the claim using IPython Notebook, Pandas, and matplotlib for visualization.\nThe Experiment\nThis statement of being better 66% of the time wasn't completely intuitive to me, so I decided to do a little test. Let's imagine we have \\$10k to invest any time in the last 16 years, from Feb 22, 2000 to Jan 9, 2016. And we want to choose the time and strategy that would have returned us the most money today. The two strategies I chose are:\n\nLump Sum, invest \\$10k all at once on a date of choice\nDollar Cost Average, invest \\$10k in 12 equal portions of \\$833.33 every 30 days starting from a date of choice, for a total investment period of 360 days. There are alternatives but this is the one I arbitrarily chose.\n\nI then chose the SPDR S&P 500 (SPY) as the stock we'll be investing in because it follows the Standard & Poor 500 index, and is one of the most common ways to measure/invest in the market. \n\nThe Standard & Poor's 500 often abbreviated as the S&P 500 (or just \"the S&P\"). Chosen for market size, liquidity and industry grouping, among other factors. The S&P 500 is designed to be a leading indicator of U.S. equities and is meant to reflect the risk/return characteristics of the large cap universe.\n\nHere is SPY for the last 16 years, for our actual tests we'll be using historical pricing to account for dividend yield and other things.\n\nThis was partially inspired by looking at my portfolio today, 2015 and early 2016 has been pretty rough for the market, relative to the bull run up to this point, and a drop in the bucket vs. the last crash so far.\nI have been lump sum investing up till this point. 
Perhaps this experiment will help illuminate the validity or flaws in the thought process behind that decision.\nI list my assumptions at the bottom of this page.\nImport financial data using Pandas\nFirst let's import the data. Pandas is a great Python library for data analysis, and it has a helper for pulling stock data from different websites like Yahoo Finance or Google Finance.", "import pandas as pd\nimport pandas_datareader.data as web\nimport datetime\npd.set_option('display.width', 200) # Displaying more columns in one row\n\n# Data date range, Google provides up to 4000 entries in one call\nstart = datetime.datetime(2000, 2, 10) \nend = datetime.datetime(2016, 1, 9)\n\nspy = web.DataReader(\"SPY\", \"yahoo\", start, end)\n\nprint(spy.head()) # See first few rows", "We'll plot all the prices at Adj Close using matplotlib, a Python 2D plotting library with a MATLAB-flavored interface. We use Adjusted Close because it is commonly used for historical pricing, and accounts for all corporate actions such as stock splits, dividends/distributions and rights offerings. This happens to be our exact use-case.", "%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom matplotlib.ticker import FuncFormatter\nfrom matplotlib import style\nstyle.use('fivethirtyeight')\n\nspy['Adj Close'].plot(figsize=(20,10))\nax = plt.subplot()\nax.yaxis.set_major_formatter(FuncFormatter(lambda x, pos: '${:,.0f}'.format(x))) # Y axis dollar symbols\nplt.title('SPY Historical Price on Close')\nplt.xlabel('')\nplt.ylabel('Stock Price ($)');", "Great, this looks similar to the SPY chart from before. Notice how, because of historical pricing, including things like dividend yields increases the total return over the years. We can easily see the bubble and crash around 2007-2009, as well as the long bull market since then. We can also see the small dip in September/October of the last couple of months, and just barely see the drop in the first few days of 2016.\nCalculate Lump Sum\nLump sum means investing everything available all at once; in this case we have a hypothetical $10,000 to spend on any day in our history of the last 16 years. Then we want to know how much that investment would be worth today.\nAnother way to look at this: we can make a chart where the X axis is the date we invest the lump sum, and the Y axis is the value of that investment today.", "value_price = spy['Adj Close'][-1] # The final value of our stock\ninitial_investment = 10000 # Our initial investment of $10k\n\nnum_stocks_bought = initial_investment / spy['Adj Close']\nlumpsum = num_stocks_bought * value_price\nlumpsum.name = 'Lump Sum'\n\nlumpsum.plot(figsize=(20,10))\nax = plt.subplot()\nax.yaxis.set_major_formatter(FuncFormatter(lambda x, pos: '${:,.0f}'.format(x))) # Y axis dollar symbols\nplt.title('Lump sum - Value today of $10,000 invested on date')\nplt.xlabel('')\nplt.ylabel('Investment Value ($)');", "Cool! Pandas makes it really easy to manipulate data with datetime indices. Looking at the chart, we see that if we'd bought right at the bottom of the 2007-2009 crash, our \\$10,000 would be worth ~ $32,500. 
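(A quick sanity check, using only the numbers above rather than the raw data: the lump-sum payoff is simply initial_investment * value_price / the adjusted close on the purchase date, so turning \$10,000 into roughly \$32,500 implies the adjusted close grew by a factor of about 32,500 / 10,000 = 3.25 between that bottom and the final day of data.)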
If only we had a time machine...", "print(\"Lump sum: Investing on the 1 - Best day, 2 - Worst day in past, 3 - Worst day in all\")\nprint(\"1 - Investing $10,000 on {} would be worth ${:,.2f} today.\".format(lumpsum.idxmax().strftime('%b %d, %Y'), lumpsum.max()))\nprint(\"2 - Investing $10,000 on {} would be worth ${:,.2f} today.\".format(lumpsum[:-1000].idxmin().strftime('%b %d, %Y'), lumpsum[:-1000].min()))\nprint(\"3 - Investing $10,000 on {} would be worth ${:,.2f} today.\".format(lumpsum.idxmin().strftime('%b %d, %Y'), lumpsum.min()))", "What's also nice to note is that even if we'd invested at the worst possible time, the peak of the bubble on Oct 9th, 2007, we'd still have come out net positive at \\$14,593 today. The worst time to invest so far turns out to be more recent, on July 20th of 2015. This is because not only was the market down, but it's also so recent that the investment hasn't had time to grow. Something something the best time to plant a tree was yesterday.\nCalculating Dollar Cost Averaging (DCA)\nNow let's do the same experiment, but instead we'll invest the \\$10,000 we have using Dollar Cost Averaging (DCA). For this simple test, instead of investing all at once, I'll invest in equal portions every 30 days (roughly a month), over a course of 360 days (roughly a year) total. \nSo on day 1 I invest $10,000 / 12 ~ $833.33, on day 31 the same $833.33,\nand so on for 12 total investments. A special case is investing within the last year, when there isn't time to DCA all of it; as a compromise, I invest whatever portions I can and keep the rest as cash, since that is how reality works.", "def doDCA(investment, start_date):\n # Get 12 investment dates in 30 day increments starting from start date\n investment_dates_all = pd.date_range(start_date,periods=12,freq='30D')\n # Remove those dates beyond our known data range\n investment_dates = investment_dates_all[investment_dates_all < spy.index[-1]]\n\n # Get closest business dates with available data\n closest_investment_dates = spy.index.searchsorted(investment_dates)\n\n # How much to invest on each date\n portion = investment/12.0 # (Python 3.0 does implicit double conversion, Python 2.7 does not)\n\n # Get the total of all stocks purchased for each of those dates (on the Close)\n stocks_invested = sum(portion / spy['Adj Close'][closest_investment_dates])\n\n # Add uninvested amount back\n uninvested_dollars = portion * sum(investment_dates_all >= spy.index[-1])\n\n # value of stocks today\n total_value = value_price*stocks_invested + uninvested_dollars\n return total_value\n\n# Generate DCA series for every possible date\ndca = pd.Series(spy.index.map(lambda x: doDCA(initial_investment, x)), index=spy.index, name='Dollar Cost Averaging (DCA)')", "Surprisingly straightforward, good job Pandas. Let's plot it similarly to how we did with the lump sum. The x axis is the date at which we start dollar cost averaging (and then continue for the next 360 days in 30 day increments from that date). The y axis is the final value of our investment today.", "dca.plot(figsize=(20,10))\nax = plt.subplot()\nax.yaxis.set_major_formatter(FuncFormatter(lambda x, pos: '${:,.0f}'.format(x))) # Y axis dollar symbols\nplt.title('Dollar Cost Averaging - Value today of $10,000 invested on date')\nplt.xlabel('')\nplt.ylabel('Investment Value ($)');", "Interesting! DCA looks way smoother and the graph is really high up, so it must be better, right!? 
Wait, no, the Y axis is different; in fact its highest high is around \\$28,000 in comparison to the lump sum's \\$32,500. Let's look at the ideal/worst investment dates for DCA; I include the lump sum from before as well.", "print(\"Lump sum\")\nprint(\" Crash - Investing $10,000 on {} would be worth ${:,.2f} today.\".format(lumpsum.idxmax().strftime('%b %d, %Y'), lumpsum.max()))\nprint(\"Bubble - Investing $10,000 on {} would be worth ${:,.2f} today.\".format(lumpsum[:-1500].idxmin().strftime('%b %d, %Y'), lumpsum[:-1500].min()))\nprint(\"Recent - Investing $10,000 on {} would be worth ${:,.2f} today.\".format(lumpsum.idxmin().strftime('%b %d, %Y'), lumpsum.min()))\n\nprint(\"\\nDollar Cost Averaging\")\nprint(\" Crash - Investing $10,000 on {} would be worth ${:,.2f} today.\".format(dca.idxmax().strftime('%b %d, %Y'), dca.max()))\nprint(\"Bubble - Investing $10,000 on {} would be worth ${:,.2f} today.\".format(dca[:-1500].idxmin().strftime('%b %d, %Y'), dca[:-1500].min()))\nprint(\"Recent - Investing $10,000 on {} would be worth ${:,.2f} today.\".format(dca.idxmin().strftime('%b %d, %Y'), dca.min()))", "Looking at dollar cost averaging, the best day to start was July 12, 2002, when we were still recovering from the 'tech crash'. The worst day to start was around the peak of the 2007 bubble on Jan 26, 2007, and the absolute worst would have been to start last year on Jan 20, 2015.\nWe can already see that there are some similarities between lump sum and DCA: DCA appears to have lower highs, but also higher lows. It's difficult to compare just by looking at numbers; we need to compare the two strategies visually, side by side.\nComparison of Lump Sum vs Dollar Cost Averaging\nSo we've just individually tested two investing strategies exhaustively on every possible day in the last 16 years.\nLet's plot three charts on top of each other: the raw SPY stock price over the years on top, then in the middle both lump sum and DCA plotted together, and finally the difference between them as $diff = lump sum - DCA$", "# Difference between lump sum and DCA\ndiff = (lumpsum - dca)\ndiff.name = 'Difference (Lump Sum - DCA)'\n\nfig, (ax1, ax2, ax3) = plt.subplots(3,1, sharex=True, figsize=(20,15))\n\n# SPY Actual\nspy['Adj Close'].plot(ax=ax1)\nax1.yaxis.set_major_formatter(FuncFormatter(lambda x, pos: '${:,.0f}'.format(x))) # Y axis in dollars\nax1.set_xlabel('')\nax1.set_title('SPY Historical Stock Price')\nax1.set_ylabel('Stock Value ($)')\n\n# Comparison\ndca.plot(ax=ax2)\nlumpsum.plot(ax=ax2)\nax2.axhline(initial_investment, alpha=0.5, linestyle=\"--\", color=\"black\")\nax2.text(spy.index[50],initial_investment*1.1, \"Initial Investment\")\n# ax2.axhline(conservative, alpha=0.5, linestyle=\"--\", color=\"black\")\n# ax2.text(spy.index[-800],conservative*1.05, \"Conservative Investing Strategy\")\nax2.yaxis.set_major_formatter(FuncFormatter(lambda x, pos: '${:,.0f}K'.format(x*1e-3))) # Y axis $1,000s\nax2.legend()\nax2.set_title('Comparison Lump Sum vs. 
Dollar Cost Averaging - Value of $10,000 invested on date')\nax2.set_ylabel('Investment Value ($)')\n\n# Difference\nax3.fill_between(diff.index, y1=diff, y2=0, color='blue', where=diff>0)\nax3.fill_between(diff.index, y1=diff, y2=0, color='red', where=diff<0)\n\nax3.yaxis.set_major_formatter(FuncFormatter(lambda x, pos: '${:,.0f}K'.format(x*1e-3))) # Y axis $1,000s\nax3.set_ylabel('Difference ($)')\nax3.set_title('Difference (Lump Sum - Dollar Cost Average)')\nax3.legend([\"Lump Sum > DCA\", \"DCA > Lump Sum\"]);", "Before we start comparing, definitely take note of the middle chart, where the initial investment of \\$10k is marked. Notice that if we had invested using either strategy, at any point before 2 years ago, no matter which bubble or crash, we'd have made some pretty huge returns on our investments, doubling and tripling at some points. This is the power of compound interest.\nLooking at the DCA curve, we do see the same two humps we saw with the lump sum, but it is both smoother and lags behind it. This makes perfect sense, as we're taking a type of moving average of the stock price over a year (in 30D increments) when we buy, instead of a single date. \nAs a result our investment with DCA is less volatile (smoother), and lags behind (averages in previous investments) the lump sum values.\nThe difference line shows a positive dollar value for how much more investing in one lump sum would return versus dollar cost averaging (blue). Similarly, a negative value shows how much more dollar cost averaging would return vs a lump sum (red). The chart shows a wide swing around 2002 and 2009 between the two strategies, but elsewhere it's mostly positive (blue), suggesting lump sum tends to return a bit more overall. Let's look at the actual percentage of days where the values are positive (i.e. where lump sum returns more).", "print(\"Lump sum returns more than DCA %.1f%% of all the days\" % (100*sum(diff>0)/len(diff)))\nprint(\"DCA returns more than Lump sum %.1f%% of all the days\" % (100*sum(diff<0)/len(diff)))\n", "Remarkable! So 66.3% of the time lump sum results in a higher final investment value than our monthly dollar cost averaging strategy. Almost dead on to the claim of 66% in the Investopedia article I'd read.\nBut maybe this isn't the whole story: perhaps the lump sum returned a little more than DCA most of the time, but in the really bad times DCA would do much better?\nOne way to look at this would be to see the average improvement lump sum has when it is better, versus the average improvement DCA has when it is better.", "print(\"Mean difference: Average dollar improvement lump sum returns vs. dca: ${:,.2f}\".format(sum(diff) / len(diff)))\nprint(\"Mean difference when lump sum > dca: ${:,.2f}\".format(sum(diff[diff>0]) / sum(diff>0)))\nprint(\"Mean difference when dca > lump sum: ${:,.2f}\".format(sum(-diff[diff<0]) / sum(diff<0)))", "So for every possible day in the last 16 years, a lump sum investment of \\$10k would have returned on average \\$224.03 more than dollar cost averaging, or 2.24%; that's actually pretty big. However, when lump sum is better, it returned about \\$1.3k more, or 13\\%. Conversely, when dollar cost averaging was better than lump sum, it returned \\$1.9k more on average, or about 19%. \nThat is higher! So maybe that means DCA is worth it? 
Well, unfortunately, since DCA was better only about 33.7% of the time, even though it was 6% better than lump sum during those times, that still doesn't make up for the fact that it was worse (by 2.24%) overall in the 16-year period. This is why our final average difference showed a positive \\$224.03 in favor of lump sum altogether.\nSo despite the fact that DCA did do a better job during the crash, it does enough worse elsewhere that in the long term it's the weaker strategy.\nThe End?\nThis small experiment explains and validates the claim that lump sum statistically beats DCA about 66% of the time. \nWe created a simple trial comparing lump sum to the strategy of investing the same amount monthly over a year, and found that lump sum still won ~66.3% of the time.\n\nPerhaps at this point we could say, but wait! What if we looked at periods right after 3 months of bull markets, and then... and I would stop and take a moment. This sort of thinking is a common and dangerous path in data analysis and machine learning, where it's very easy to build a model that is perfectly overfit to the limited training dataset it was trained on. As soon as that model is taken away from that training set and applied elsewhere, i.e. the real world, it falls apart.\nWhat does that mean? Wikipedia explains the concept well, but say for example you were trying to fit a polynomial to a set of points like this, with points shown in red.\n\nThe reality is something like the green curve, but because we try to perfectly fit the data points, our model returns us something like the blue line. Sure, at those points, it's exactly right, but elsewhere, it is completely wrong. Similarly in the case of investing, as we try to more perfectly predict a DCA strategy for the stock market (for a specific stock even!), the final model is much more likely to fail spectacularly at some point in the future == lose all of your money.\nThis experiment doesn't guarantee there's not some DCA/alternative strategy that would perform better, but a good way to look at it is that we've done a rough fit of our data points, and the experiment shows this DCA strategy (and we can reasonably extrapolate that other similar low-hanging variations of DCA would have similar results; a good follow-up experiment would be to explore that space) is not superior.\nThis experiment does give us some useful insights though. Dollar Cost Averaging is a form of smoothing that reduces the volatility associated with the investment date. Investing at the 'wrong' time can cause a lot of anxiety and wishful thinking: if only I had waited to buy in, or if only I had sold at the peak. Using DCA we can alleviate the pressure of worrying that we're investing at a peak right before a looming cliff, at the possibly acceptable cost of reducing the statistical average return by about 1% (in this very specific example, not generalizable). \nThis experiment has temporarily satiated my curiosity on the matter of lump sum vs DCA, and 2016 will be another lump sum investing year for me. If we were all machines, we'd choose lump sum. Quantitative trading is 110% this type of unintuitive logic. 
I'm looking forward to revisiting this topic in the future.\nAssumptions\n\nWe have exactly \\$10k to invest at any point in time\nWe assume we buy at the closing price on any given day, which is arbitrary but at least as reasonable as choosing the low or high for the day\nWe ignore the value of a dollar changing over the years since that's a whole other topic\nOur final investment value is based on the closing price on the last day (Jan 8, 2016 close)\nNo commissions yet, which would skew in lump sum's favor anyway (1 vs 12 commission fees)\n~~Ignoring dividend yields, which again would skew in favor of lump sum (more dividends)~~ 1/12/2016 - Replaced Google SPY Close data with Yahoo SPY Adjusted Close data to include dividend yield among other things\nHere is a link to the original notebook which did not take into account dividend yield.\n\n\nGitHub Source Code" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
fjulca-aguilar/DeepTRIOS
cnn_trioslib.ipynb
gpl-3.0
[ "Text segmentation example", "# Required modules\nfrom trios.feature_extractors import RAWFeatureExtractor\nimport trios\nimport numpy as np\nfrom TFClassifier import TFClassifier\nfrom CNN_TFClassifier import CNN_TFClassifier\nimport scipy as sp\nimport scipy.ndimage\nimport trios.shortcuts.persistence as p\nimport matplotlib\nimport matplotlib.image as mpimg\nimport matplotlib.pyplot as plt\nimport matplotlib.cm as cm\n%matplotlib inline", "Training examples\nTraining examples are defined through the Imageset class of TRIOSlib. The class defines a list of tuples with pairs of input and desired output image paths, and an (optional) binary image mask (see http://trioslib.sourceforge.net/index.html for more details). In this case, we use a training set with two training examples, and for each example, we define the mask as being the input image, that is, the operator is applied in each white pixel of the input image:", "train_imgs = trios.Imageset.read('images/train_images.set')\nfor i in range(len(train_imgs)):\n print(\"sample %d:\" % (i + 1))\n print(\"\\t input: %s\" % train_imgs[i][0])\n print(\"\\t desired output: %s\" % train_imgs[i][1])\n print(\"\\t mask: %s\\n\" % train_imgs[i][2])\n\nprint(\"The first pair of input and ouput examples:\")\nfig = plt.figure(1, figsize=(15,15))\nimg=mpimg.imread(train_imgs[0][0])\nfig.add_subplot(121)\nplt.imshow(img, cmap=cm.gray)\nplt.title('Input')\nimg_gt=mpimg.imread(train_imgs[0][1])\nfig.add_subplot(122)\nplt.title('Desired output')\nplt.imshow(img_gt, cmap=cm.gray)", "Training\nWe define a CNN architecture through the CNN_TFClassifier class. The classifier requires the input image shape and number of outputs for initialization. We define the input shape according to the patches extracted from the images, in this example, 11x11, and use a single sigmoid output unit for binary classification: text and non-text classes. Additional (optional) parameters include: \nlearning_rate (default 1e-4), dropout_prob (default 0.5), and output_activation=(default 'sigmoid').", "patch_side = 19\nnum_outputs = 1\nwin = np.ones((patch_side, patch_side), np.uint8)\ncnn_classifier = CNN_TFClassifier((patch_side, patch_side, 1), num_outputs, num_epochs=10, model_dir='cnn_text_segmentation')\n\nop_tf = trios.WOperator(win, TFClassifier(cnn_classifier), RAWFeatureExtractor, batch=True)\nop_tf.train(train_imgs)", "Applying the operator to a new image", "test_img = sp.ndimage.imread('images/veja11.sh50.png', mode='L')\nout_img = op_tf.apply(test_img, test_img)\n\nfig = plt.figure(2, figsize=(15,15))\nfig.add_subplot(121)\nplt.imshow(test_img, cmap=cm.gray)\nplt.title('Input')\nfig.add_subplot(122)\nplt.imshow(out_img, cmap=cm.gray)\nplt.title('CNN output')\n", "Further improvements\nThe parameters used in this example were selected for illustration only. Better results can be obtained using larger windows, increasing the number of epochs or input-output examples, and trying different CNN architectures/parameters." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
PAIR-code/what-if-tool
keras_sklearn_compare_caip_e2e.ipynb
apache-2.0
[ "Comparing Keras and Scikit models deployed on Cloud AI Platform with the What-if Tool\nIn this notebook we'll use the UCI wine quality dataset to train both tf.keras and Scikit learn regression models that will predict the quality rating of a wine given 11 numerical data points about the wine. You'll learn how to:\n* Build, train, and then deploy tf.keras and Scikit Learn models to Cloud AI Platform\n* Use the What-if Tool to compare two different models deployed on CAIP\nYou will need a Google Cloud Platform account and project to run this notebook. Instructions for creating a project can be found here.\nInstalling dependencies", "import sys\npython_version = sys.version_info[0]\n\n# If you're running on Colab, you'll need to install the What-if Tool package and authenticate\ndef pip_install(module):\n if python_version == '2':\n !pip install {module} --quiet\n else:\n !pip3 install {module} --quiet\n\ntry:\n import google.colab\n IN_COLAB = True\nexcept:\n IN_COLAB = False\n\nif IN_COLAB:\n pip_install('witwidget')\n\n from google.colab import auth\n auth.authenticate_user()\n\nimport pandas as pd\nimport numpy as np\nimport tensorflow as tf\nimport witwidget\nimport os\nimport pickle\n\nfrom tensorflow.keras.layers import Dense\nfrom tensorflow.keras.models import Sequential\n\nfrom sklearn.utils import shuffle\nfrom sklearn.linear_model import LinearRegression\nfrom witwidget.notebook.visualization import WitWidget, WitConfigBuilder\n\n# This has been tested on TF 1.14\nprint(tf.__version__)", "Download and process data\nIn this section we'll:\n* Download the wine quality data directly from UCI Machine Learning\n* Read it into a Pandas dataframe and preview it\n* Split the data and labels into train and test sets", "!wget 'http://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-white.csv'\n\ndata = pd.read_csv('winequality-white.csv', index_col=False, delimiter=';')\ndata = shuffle(data, random_state=4)\n\ndata.head()\n\nlabels = data['quality']\n\nprint(labels.value_counts())\n\ndata = data.drop(columns=['quality'])\n\ntrain_size = int(len(data) * 0.8)\ntrain_data = data[:train_size]\ntrain_labels = labels[:train_size]\n\ntest_data = data[train_size:]\ntest_labels = labels[train_size:]\n\ntrain_data.head()", "Train tf.keras model\nIn this section we'll:\n\nBuild a regression model using tf.keras to predict a wine's quality score\nTrain the model\nAdd a layer to the model to prepare it for serving", "# This is the size of the array we'll be feeding into our model for each wine example\ninput_size = len(train_data.iloc[0])\nprint(input_size)\n\nmodel = Sequential()\nmodel.add(Dense(200, input_shape=(input_size,), activation='relu'))\nmodel.add(Dense(50, activation='relu'))\nmodel.add(Dense(25, activation='relu'))\nmodel.add(Dense(1))\n\nmodel.compile(loss='mean_squared_error', optimizer='adam')\n\nmodel.summary()\n\nmodel.fit(train_data.values,train_labels.values, epochs=4, batch_size=32, validation_split=0.1)", "Deploy keras model to Cloud AI Platform\nIn this section we'll:\n* Set up some global variables for our GCP project\n* Add a serving layer to our model so we can deploy it on Cloud AI Platform\n* Run the deploy command to deploy our model\n* Generate a test prediction on our deployed model", "# Update these to your own GCP project + model names\nGCP_PROJECT = 'your_gcp_project'\nKERAS_MODEL_BUCKET = 'gs://your_storage_bucket'\nKERAS_VERSION_NAME = 'v1'\n\n# Add the serving input layer below in order to serve our model on AI Platform\nclass 
ServingInput(tf.keras.layers.Layer):\n # the important detail in this boilerplate code is \"trainable=False\"\n def __init__(self, name, dtype, batch_input_shape=None):\n super(ServingInput, self).__init__(trainable=False, name=name, dtype=dtype, batch_input_shape=batch_input_shape)\n def get_config(self):\n return {'batch_input_shape': self._batch_input_shape, 'dtype': self.dtype, 'name': self.name }\n\nrestored_model = model\n\nserving_model = tf.keras.Sequential()\nserving_model.add(ServingInput('serving', tf.float32, (None, input_size)))\nserving_model.add(restored_model)\ntf.contrib.saved_model.save_keras_model(serving_model, os.path.join(KERAS_MODEL_BUCKET, 'keras_export')) # export the model to your GCS bucket\nexport_path = KERAS_MODEL_BUCKET + '/keras_export'\n\n# Configure gcloud to use your project\n!gcloud config set project $GCP_PROJECT\n\n# Create a new model in our project, you only need to run this once\n!gcloud ai-platform models create keras_wine\n\n# Deploy the model to Cloud AI Platform\n!gcloud beta ai-platform versions create $KERAS_VERSION_NAME --model keras_wine \\\n--origin=$export_path \\\n--python-version=3.5 \\\n--runtime-version=1.14 \\\n--framework='TENSORFLOW'\n\n%%writefile predictions.json\n[7.8, 0.21, 0.49, 1.2, 0.036, 20.0, 99.0, 0.99, 3.05, 0.28, 12.1]\n\n# Test the deployed model on an example from our test set\n# The correct score for this prediction is 7\nprediction = !gcloud ai-platform predict --model=keras_wine --json-instances=predictions.json --version=$KERAS_VERSION_NAME\nprint(prediction[1])", "Build and train Scikit learn model\nIn this section we'll:\n* Train a regression model using Scikit Learn\n* Save the model to a local file using pickle", "SKLEARN_VERSION_NAME = 'v1'\nSKLEARN_MODEL_BUCKET = 'gs://sklearn_model_bucket'\n\nscikit_model = LinearRegression().fit(train_data.values, train_labels.values)\n\n# Export the model to a local file using pickle\npickle.dump(scikit_model, open('model.pkl', 'wb'))", "Deploy Scikit model to CAIP\nIn this section we'll:\n* Copy our saved model file to Cloud Storage\n* Run the gcloud command to deploy our model\n* Generate a prediction on our deployed model", "# Copy the saved model to Cloud Storage\n!gsutil cp ./model.pkl gs://wine_sklearn/model.pkl\n\n# Create a new model in our project, you only need to run this once\n!gcloud ai-platform models create sklearn_wine\n\n!gcloud beta ai-platform versions create $SKLEARN_VERSION_NAME --model=sklearn_wine \\\n--origin=$SKLEARN_MODEL_BUCKET \\\n--runtime-version=1.14 \\\n--python-version=3.5 \\\n--framework='SCIKIT_LEARN'\n\n# Test the model usnig the same example instance from above\n!gcloud ai-platform predict --model=sklearn_wine --json-instances=predictions.json --version=$SKLEARN_VERSION_NAME", "Compare tf.keras and Scikit models with the What-if Tool\nNow we're ready for the What-if Tool! In this section we'll:\n* Create an array of our test examples with their ground truth values. The What-if Tool works best when we send the actual values for each example input.\n* Instantiate the What-if Tool using the set_compare_ai_platform_model method. 
This lets us compare 2 models deployed on Cloud AI Platform.", "# Create np array of test examples + their ground truth labels\ntest_examples = np.hstack((test_data[:200].values,test_labels[:200].values.reshape(-1,1)))\nprint(test_examples.shape)\n\n# Create a What-if Tool visualization, it may take a minute to load\n# See the cell below this for exploration ideas\n\n# We use `set_predict_output_tensor` here because our tf.keras model returns a dict with a 'sequential' key\n\nconfig_builder = (WitConfigBuilder(test_examples.tolist(), data.columns.tolist() + ['quality'])\n  .set_ai_platform_model(GCP_PROJECT, 'keras_wine', KERAS_VERSION_NAME).set_predict_output_tensor('sequential').set_uses_predict_api(True)\n  .set_target_feature('quality')\n  .set_model_type('regression')\n  .set_compare_ai_platform_model(GCP_PROJECT, 'sklearn_wine', SKLEARN_VERSION_NAME))\nWitWidget(config_builder, height=800)", "What-if Tool Exploration ideas\n\nLook at the scatter plot of \"Inference value scikit_wine\" vs \"Inference value keras_wine\"\nExamples off of the diagonal represent wines for which the two models have large disagreement on the quality score. Click on some of these and explore the features. \nYou can also click on individual examples, change some of the feature values for that example, and compare the impact of that change on the model's prediction.\n\nCheck out the partial dependence plots to see which features are causing the large skew between the two models.\n\n\nGo to the Performance tab and see the overall performance of each model. Is one more accurate on the test data than the other?\n\nIn this tab, use the \"Slice by\" dropdown to slice the data into subgroups and see how both models perform across those subgroups. Try slicing by alcohol. Which model has more consistent performance across the slices?" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/mpi-m/cmip6/models/sandbox-2/land.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Land\nMIP Era: CMIP6\nInstitute: MPI-M\nSource ID: SANDBOX-2\nTopic: Land\nSub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. \nProperties: 154 (96 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:17\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'mpi-m', 'sandbox-2', 'land')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Conservation Properties\n3. Key Properties --&gt; Timestepping Framework\n4. Key Properties --&gt; Software Properties\n5. Grid\n6. Grid --&gt; Horizontal\n7. Grid --&gt; Vertical\n8. Soil\n9. Soil --&gt; Soil Map\n10. Soil --&gt; Snow Free Albedo\n11. Soil --&gt; Hydrology\n12. Soil --&gt; Hydrology --&gt; Freezing\n13. Soil --&gt; Hydrology --&gt; Drainage\n14. Soil --&gt; Heat Treatment\n15. Snow\n16. Snow --&gt; Snow Albedo\n17. Vegetation\n18. Energy Balance\n19. Carbon Cycle\n20. Carbon Cycle --&gt; Vegetation\n21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis\n22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration\n23. Carbon Cycle --&gt; Vegetation --&gt; Allocation\n24. Carbon Cycle --&gt; Vegetation --&gt; Phenology\n25. Carbon Cycle --&gt; Vegetation --&gt; Mortality\n26. Carbon Cycle --&gt; Litter\n27. Carbon Cycle --&gt; Soil\n28. Carbon Cycle --&gt; Permafrost Carbon\n29. Nitrogen Cycle\n30. River Routing\n31. River Routing --&gt; Oceanic Discharge\n32. Lakes\n33. Lakes --&gt; Method\n34. Lakes --&gt; Wetlands \n1. Key Properties\nLand surface key properties\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of land surface model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of land surface model code (e.g. MOSES2.2)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.4. Land Atmosphere Flux Exchanges\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nFluxes exchanged with the atmopshere.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"water\" \n# \"energy\" \n# \"carbon\" \n# \"nitrogen\" \n# \"phospherous\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.5. Atmospheric Coupling Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.6. Land Cover\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTypes of land cover defined in the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bare soil\" \n# \"urban\" \n# \"lake\" \n# \"land ice\" \n# \"lake ice\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.7. Land Cover Change\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how land cover change is managed (e.g. the use of net or gross transitions)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover_change') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.8. Tiling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Conservation Properties\nTODO\n2.1. Energy\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how energy is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.energy') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Water\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how water is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.water') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Carbon\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. 
Key Properties --&gt; Timestepping Framework\nTODO\n3.1. Timestep Dependent On Atmosphere\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs a time step dependent on the frequency of atmosphere coupling?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverall timestep of land surface model (i.e. time between calls)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.3. Timestepping Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of time stepping method and associated time step(s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Software Properties\nSoftware properties of land surface code\n4.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5. Grid\nLand surface grid\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the grid in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Grid --&gt; Horizontal\nThe horizontal grid in the land surface\n6.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general structure of the horizontal grid (not including any tiling)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Matches Atmosphere Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the horizontal grid match the atmosphere?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "7. Grid --&gt; Vertical\nThe vertical grid in the soil\n7.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general structure of the vertical grid in the soil (not including any tiling)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Total Depth\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe total depth of the soil (in metres)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.total_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "8. Soil\nLand surface soil\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of soil in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Heat Water Coupling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the coupling between heat and water in the soil", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_water_coupling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.3. Number Of Soil layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of soil layers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.number_of_soil layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "8.4. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the soil scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Soil --&gt; Soil Map\nKey properties of the land surface soil map\n9.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of soil map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Structure\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil structure map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.3. Texture\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil texture map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.texture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.4. 
Organic Matter\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil organic matter map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.organic_matter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.5. Albedo\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil albedo map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.6. Water Table\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil water table map, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.water_table') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.7. Continuously Varying Soil Depth\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the soil properties vary continuously with depth?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "9.8. Soil Depth\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil depth map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Soil --&gt; Snow Free Albedo\nTODO\n10.1. Prognostic\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs snow free albedo prognostic?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "10.2. Functions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf prognostic, describe the dependancies on snow free albedo calculations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"soil humidity\" \n# \"vegetation state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.3. Direct Diffuse\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, describe the distinction between direct and diffuse albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"distinction between direct and diffuse albedo\" \n# \"no distinction between direct and diffuse albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.4. Number Of Wavelength Bands\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, enter the number of wavelength bands used", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11. Soil --&gt; Hydrology\nKey properties of the land surface soil hydrology\n11.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of the soil hydrological model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of river soil hydrology in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil hydrology tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.4. Vertical Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the typical vertical discretisation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.5. Number Of Ground Water Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of soil layers that may contain water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.6. Lateral Connectivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe the lateral connectivity between tiles", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"perfect connectivity\" \n# \"Darcian flow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.7. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe hydrological dynamics scheme in the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bucket\" \n# \"Force-restore\" \n# \"Choisnel\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Soil --&gt; Hydrology --&gt; Freezing\nTODO\n12.1. Number Of Ground Ice Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow many soil layers may contain ground ice", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12.2. 
Ice Storage Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method of ice storage", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.3. Permafrost\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of permafrost, if any, within the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Soil --&gt; Hydrology --&gt; Drainage\nTODO\n13.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral describe how drainage is included in the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13.2. Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nDifferent types of runoff represented by the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gravity drainage\" \n# \"Horton mechanism\" \n# \"topmodel-based\" \n# \"Dunne mechanism\" \n# \"Lateral subsurface flow\" \n# \"Baseflow from groundwater\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Soil --&gt; Heat Treatment\nTODO\n14.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of how heat treatment properties are defined", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of soil heat scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.3. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil heat treatment tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.4. Vertical Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the typical vertical discretisation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.5. Heat Storage\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the method of heat storage", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Force-restore\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.6. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe processes included in the treatment of soil heat", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"soil moisture freeze-thaw\" \n# \"coupling with snow temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15. Snow\nLand surface snow\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of snow in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the snow tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Number Of Snow Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of snow levels used in the land surface scheme/model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.number_of_snow_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15.4. Density\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow density", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.5. Water Equivalent\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of the snow water equivalent", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.water_equivalent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.6. Heat Content\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of the heat content of snow", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.heat_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.7. Temperature\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow temperature", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.snow.temperature') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.8. Liquid Water Content\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow liquid water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.liquid_water_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.9. Snow Cover Fractions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify cover fractions used in the surface snow scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_cover_fractions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ground snow fraction\" \n# \"vegetation snow fraction\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.10. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSnow related processes in the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"snow interception\" \n# \"snow melting\" \n# \"snow freezing\" \n# \"blowing snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.11. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the snow scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16. Snow --&gt; Snow Albedo\nTODO\n16.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of snow-covered land albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"prescribed\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.2. Functions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\n*If prognostic, *", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"snow age\" \n# \"snow density\" \n# \"snow grain type\" \n# \"aerosol deposition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17. Vegetation\nLand surface vegetation\n17.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of vegetation in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.2. 
Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of vegetation scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "17.3. Dynamic Vegetation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there dynamic evolution of vegetation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.dynamic_vegetation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17.4. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the vegetation tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.5. Vegetation Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nVegetation classification used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation types\" \n# \"biome types\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.6. Vegetation Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of vegetation types in the classification, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"broadleaf tree\" \n# \"needleleaf tree\" \n# \"C3 grass\" \n# \"C4 grass\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.7. Biome Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of biome types in the classification, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biome_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"evergreen needleleaf forest\" \n# \"evergreen broadleaf forest\" \n# \"deciduous needleleaf forest\" \n# \"deciduous broadleaf forest\" \n# \"mixed forest\" \n# \"woodland\" \n# \"wooded grassland\" \n# \"closed shrubland\" \n# \"opne shrubland\" \n# \"grassland\" \n# \"cropland\" \n# \"wetlands\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.8. Vegetation Time Variation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow the vegetation fractions in each tile are varying with time", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_time_variation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed (not varying)\" \n# \"prescribed (varying from files)\" \n# \"dynamical (varying from simulation)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.9. 
Vegetation Map\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf vegetation fractions are not dynamically updated , describe the vegetation map used (common name and reference, if possible)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.10. Interception\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs vegetation interception of rainwater represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.interception') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17.11. Phenology\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation phenology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic (vegetation map)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.12. Phenology Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation phenology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.13. Leaf Area Index\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation leaf area index", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.14. Leaf Area Index Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of leaf area index", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.15. Biomass\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\n*Treatment of vegetation biomass *", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.16. Biomass Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation biomass", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.17. Biogeography\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation biogeography", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.vegetation.biogeography') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.18. Biogeography Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation biogeography", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.19. Stomatal Resistance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify what the vegetation stomatal resistance depends on", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"light\" \n# \"temperature\" \n# \"water availability\" \n# \"CO2\" \n# \"O3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.20. Stomatal Resistance Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation stomatal resistance", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.21. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the vegetation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Energy Balance\nLand surface energy balance\n18.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of energy balance in land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the energy balance tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18.3. Number Of Surface Temperatures\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "18.4. Evaporation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify the formulation method for land surface evaporation, from soil and vegetation", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.energy_balance.evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"alpha\" \n# \"beta\" \n# \"combined\" \n# \"Monteith potential evaporation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.5. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe which processes are included in the energy balance scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"transpiration\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19. Carbon Cycle\nLand surface carbon cycle\n19.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of carbon cycle in land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the carbon cycle tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of carbon cycle in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "19.4. Anthropogenic Carbon\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nDescribe the treament of the anthropogenic carbon pool", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grand slam protocol\" \n# \"residence time\" \n# \"decay time\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19.5. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the carbon scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20. Carbon Cycle --&gt; Vegetation\nTODO\n20.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "20.2. Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20.3. 
Forest Stand Dynamics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the treatment of forest stand dynamics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis\nTODO\n21.1. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration\nTODO\n22.1. Maintainance Respiration\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for maintenance respiration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.2. Growth Respiration\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for growth respiration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23. Carbon Cycle --&gt; Vegetation --&gt; Allocation\nTODO\n23.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the allocation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23.2. Allocation Bins\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify distinct carbon bins used in allocation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"leaves + stems + roots\" \n# \"leaves + stems + roots (leafy + woody)\" \n# \"leaves + fine roots + coarse roots + stems\" \n# \"whole plant (no distinction)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.3. Allocation Fractions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how the fractions of allocation are calculated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"function of vegetation type\" \n# \"function of plant allometry\" \n# \"explicitly calculated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24. Carbon Cycle --&gt; Vegetation --&gt; Phenology\nTODO\n24.1. 
Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the phenology scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "25. Carbon Cycle --&gt; Vegetation --&gt; Mortality\nTODO\n25.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the mortality scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26. Carbon Cycle --&gt; Litter\nTODO\n26.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "26.2. Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26.4. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the general method used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27. Carbon Cycle --&gt; Soil\nTODO\n27.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "27.2. Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27.4. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the general method used", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.carbon_cycle.soil.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28. Carbon Cycle --&gt; Permafrost Carbon\nTODO\n28.1. Is Permafrost Included\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs permafrost included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "28.2. Emitted Greenhouse Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the GHGs emitted", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28.4. Impact On Soil Properties\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the impact of permafrost on soil properties", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29. Nitrogen Cycle\nLand surface nitrogen cycle\n29.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the nitrogen cycle in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the nitrogen cycle tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of nitrogen cycle in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "29.4. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the nitrogen scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30. River Routing\nLand surface river routing\n30.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of river routing in the land surface", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.river_routing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the river routing tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of river routing scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.4. Grid Inherited From Land Surface\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the grid inherited from land surface?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "30.5. Grid Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of grid, if not inherited from land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.6. Number Of Reservoirs\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of reservoirs", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.number_of_reservoirs') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.7. Water Re Evaporation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTODO", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.water_re_evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"flood plains\" \n# \"irrigation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.8. Coupled To Atmosphere\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIs river routing coupled to the atmosphere model component?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "30.9. Coupled To Land\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the coupling between land and rivers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_land') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.10. Quantities Exchanged With Atmosphere\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf coupled to atmosphere, which quantities are exchanged between river routing and the atmosphere model components?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.11. Basin Flow Direction Map\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat type of basin flow direction map is being used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"adapted for other periods\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.12. Flooding\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the representation of flooding, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.flooding') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.13. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the river routing", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "31. River Routing --&gt; Oceanic Discharge\nTODO\n31.1. Discharge Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify how rivers are discharged to the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"direct (large rivers)\" \n# \"diffuse\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.2. Quantities Transported\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nQuantities that are exchanged from river-routing to the ocean model component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32. Lakes\nLand surface lakes\n32.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of lakes in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.2. Coupling With Rivers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre lakes coupled to the river routing model component?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.coupling_with_rivers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "32.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of lake scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.lakes.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "32.4. Quantities Exchanged With Rivers\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf coupling with rivers, which quantities are exchanged between the lakes and rivers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.5. Vertical Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the vertical grid of lakes", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.vertical_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.6. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the lake scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "33. Lakes --&gt; Method\nTODO\n33.1. Ice Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs lake ice included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.ice_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "33.2. Albedo\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of lake albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33.3. Dynamics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich dynamics of lakes are treated? horizontal, vertical, etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No lake dynamics\" \n# \"vertical\" \n# \"horizontal\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33.4. Dynamic Lake Extent\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs a dynamic lake extent scheme included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "33.5. Endorheic Basins\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBasins not flowing to ocean included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.endorheic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "34. Lakes --&gt; Wetlands\nTODO\n34.1. 
Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the treatment of wetlands, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.wetlands.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/test-institute-2/cmip6/models/sandbox-3/land.ipynb
gpl-3.0
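The row that follows is another ES-DOC land notebook (TEST-INSTITUTE-2 / SANDBOX-3); its opening cells create the document handle and record author metadata before the property cells begin. A rough sketch of how those setup cells might be completed, using only the calls shown in the cells below -- the author and contributor details here are hypothetical placeholders, not real people:

    from pyesdoc.ipython.model_topic import NotebookOutput

    # Document handle, exactly as in the notebook's setup cell below.
    DOC = NotebookOutput('cmip6', 'test-institute-2', 'sandbox-3', 'land')

    # Hypothetical author/contributor details (placeholders for illustration only).
    DOC.set_author("Jane Doe", "jane.doe@example.org")
    DOC.set_contributor("John Smith", "john.smith@example.org")

    # 0 = do not publish, 1 = publish (as documented in the publication cell).
    DOC.set_publication_status(0)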
[ "ES-DOC CMIP6 Model Properties - Land\nMIP Era: CMIP6\nInstitute: TEST-INSTITUTE-2\nSource ID: SANDBOX-3\nTopic: Land\nSub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. \nProperties: 154 (96 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:45\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'test-institute-2', 'sandbox-3', 'land')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Conservation Properties\n3. Key Properties --&gt; Timestepping Framework\n4. Key Properties --&gt; Software Properties\n5. Grid\n6. Grid --&gt; Horizontal\n7. Grid --&gt; Vertical\n8. Soil\n9. Soil --&gt; Soil Map\n10. Soil --&gt; Snow Free Albedo\n11. Soil --&gt; Hydrology\n12. Soil --&gt; Hydrology --&gt; Freezing\n13. Soil --&gt; Hydrology --&gt; Drainage\n14. Soil --&gt; Heat Treatment\n15. Snow\n16. Snow --&gt; Snow Albedo\n17. Vegetation\n18. Energy Balance\n19. Carbon Cycle\n20. Carbon Cycle --&gt; Vegetation\n21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis\n22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration\n23. Carbon Cycle --&gt; Vegetation --&gt; Allocation\n24. Carbon Cycle --&gt; Vegetation --&gt; Phenology\n25. Carbon Cycle --&gt; Vegetation --&gt; Mortality\n26. Carbon Cycle --&gt; Litter\n27. Carbon Cycle --&gt; Soil\n28. Carbon Cycle --&gt; Permafrost Carbon\n29. Nitrogen Cycle\n30. River Routing\n31. River Routing --&gt; Oceanic Discharge\n32. Lakes\n33. Lakes --&gt; Method\n34. Lakes --&gt; Wetlands \n1. Key Properties\nLand surface key properties\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of land surface model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of land surface model code (e.g. MOSES2.2)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.4. 
Land Atmosphere Flux Exchanges\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nFluxes exchanged with the atmosphere.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"water\" \n# \"energy\" \n# \"carbon\" \n# \"nitrogen\" \n# \"phospherous\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.5. Atmospheric Coupling Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.6. Land Cover\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTypes of land cover defined in the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bare soil\" \n# \"urban\" \n# \"lake\" \n# \"land ice\" \n# \"lake ice\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.7. Land Cover Change\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how land cover change is managed (e.g. the use of net or gross transitions)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover_change') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.8. Tiling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Conservation Properties\nTODO\n2.1. Energy\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how energy is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.energy') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Water\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how water is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.water') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Carbon\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Timestepping Framework\nTODO\n3.1. Timestep Dependent On Atmosphere\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs a time step dependent on the frequency of atmosphere coupling?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverall timestep of land surface model (i.e. time between calls)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.3. Timestepping Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of time stepping method and associated time step(s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Software Properties\nSoftware properties of land surface code\n4.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5. Grid\nLand surface grid\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the grid in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Grid --&gt; Horizontal\nThe horizontal grid in the land surface\n6.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general structure of the horizontal grid (not including any tiling)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. 
Matches Atmosphere Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the horizontal grid match the atmosphere?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "7. Grid --&gt; Vertical\nThe vertical grid in the soil\n7.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general structure of the vertical grid in the soil (not including any tiling)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Total Depth\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe total depth of the soil (in metres)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.total_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "8. Soil\nLand surface soil\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of soil in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Heat Water Coupling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the coupling between heat and water in the soil", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_water_coupling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.3. Number Of Soil layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of soil layers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.number_of_soil layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "8.4. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the soil scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Soil --&gt; Soil Map\nKey properties of the land surface soil map\n9.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of soil map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Structure\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil structure map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.3. Texture\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil texture map", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.soil.soil_map.texture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.4. Organic Matter\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil organic matter map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.organic_matter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.5. Albedo\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil albedo map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.6. Water Table\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil water table map, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.water_table') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.7. Continuously Varying Soil Depth\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDo the soil properties vary continuously with depth?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "9.8. Soil Depth\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil depth map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Soil --&gt; Snow Free Albedo\nTODO\n10.1. Prognostic\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs snow free albedo prognostic?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "10.2. Functions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf prognostic, describe the dependencies on snow free albedo calculations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"soil humidity\" \n# \"vegetation state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.3. Direct Diffuse\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, describe the distinction between direct and diffuse albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"distinction between direct and diffuse albedo\" \n# \"no distinction between direct and diffuse albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.4. 
Number Of Wavelength Bands\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, enter the number of wavelength bands used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11. Soil --&gt; Hydrology\nKey properties of the land surface soil hydrology\n11.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of the soil hydrological model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of river soil hydrology in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil hydrology tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.4. Vertical Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the typical vertical discretisation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.5. Number Of Ground Water Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of soil layers that may contain water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.6. Lateral Connectivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe the lateral connectivity between tiles", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"perfect connectivity\" \n# \"Darcian flow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.7. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe hydrological dynamics scheme in the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bucket\" \n# \"Force-restore\" \n# \"Choisnel\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Soil --&gt; Hydrology --&gt; Freezing\nTODO\n12.1. Number Of Ground Ice Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow many soil layers may contain ground ice", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12.2. Ice Storage Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method of ice storage", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.3. Permafrost\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of permafrost, if any, within the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Soil --&gt; Hydrology --&gt; Drainage\nTODO\n13.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of how drainage is included in the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13.2. Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nDifferent types of runoff represented by the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gravity drainage\" \n# \"Horton mechanism\" \n# \"topmodel-based\" \n# \"Dunne mechanism\" \n# \"Lateral subsurface flow\" \n# \"Baseflow from groundwater\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Soil --&gt; Heat Treatment\nTODO\n14.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of how heat treatment properties are defined", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of soil heat scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.3. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil heat treatment tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.4. Vertical Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the typical vertical discretisation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.5. 
Heat Storage\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the method of heat storage", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Force-restore\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.6. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe processes included in the treatment of soil heat", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"soil moisture freeze-thaw\" \n# \"coupling with snow temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15. Snow\nLand surface snow\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of snow in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the snow tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Number Of Snow Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of snow levels used in the land surface scheme/model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.number_of_snow_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15.4. Density\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow density", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.5. Water Equivalent\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of the snow water equivalent", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.water_equivalent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.6. Heat Content\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of the heat content of snow", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.heat_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.7. Temperature\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow temperature", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.snow.temperature') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.8. Liquid Water Content\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow liquid water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.liquid_water_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.9. Snow Cover Fractions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify cover fractions used in the surface snow scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_cover_fractions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ground snow fraction\" \n# \"vegetation snow fraction\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.10. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSnow related processes in the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"snow interception\" \n# \"snow melting\" \n# \"snow freezing\" \n# \"blowing snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.11. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the snow scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16. Snow --&gt; Snow Albedo\nTODO\n16.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of snow-covered land albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"prescribed\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.2. Functions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\n*If prognostic, *", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"snow age\" \n# \"snow density\" \n# \"snow grain type\" \n# \"aerosol deposition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17. Vegetation\nLand surface vegetation\n17.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of vegetation in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.2. 
Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of vegetation scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "17.3. Dynamic Vegetation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there dynamic evolution of vegetation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.dynamic_vegetation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17.4. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the vegetation tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.5. Vegetation Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nVegetation classification used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation types\" \n# \"biome types\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.6. Vegetation Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of vegetation types in the classification, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"broadleaf tree\" \n# \"needleleaf tree\" \n# \"C3 grass\" \n# \"C4 grass\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.7. Biome Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of biome types in the classification, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biome_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"evergreen needleleaf forest\" \n# \"evergreen broadleaf forest\" \n# \"deciduous needleleaf forest\" \n# \"deciduous broadleaf forest\" \n# \"mixed forest\" \n# \"woodland\" \n# \"wooded grassland\" \n# \"closed shrubland\" \n# \"opne shrubland\" \n# \"grassland\" \n# \"cropland\" \n# \"wetlands\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.8. Vegetation Time Variation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow the vegetation fractions in each tile are varying with time", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_time_variation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed (not varying)\" \n# \"prescribed (varying from files)\" \n# \"dynamical (varying from simulation)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.9. 
Vegetation Map\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf vegetation fractions are not dynamically updated , describe the vegetation map used (common name and reference, if possible)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.10. Interception\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs vegetation interception of rainwater represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.interception') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17.11. Phenology\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation phenology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic (vegetation map)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.12. Phenology Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation phenology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.13. Leaf Area Index\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation leaf area index", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.14. Leaf Area Index Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of leaf area index", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.15. Biomass\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\n*Treatment of vegetation biomass *", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.16. Biomass Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation biomass", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.17. Biogeography\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation biogeography", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.vegetation.biogeography') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.18. Biogeography Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation biogeography", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.19. Stomatal Resistance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify what the vegetation stomatal resistance depends on", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"light\" \n# \"temperature\" \n# \"water availability\" \n# \"CO2\" \n# \"O3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.20. Stomatal Resistance Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation stomatal resistance", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.21. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the vegetation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Energy Balance\nLand surface energy balance\n18.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of energy balance in land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the energy balance tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18.3. Number Of Surface Temperatures\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "18.4. Evaporation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify the formulation method for land surface evaporation, from soil and vegetation", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.energy_balance.evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"alpha\" \n# \"beta\" \n# \"combined\" \n# \"Monteith potential evaporation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.5. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe which processes are included in the energy balance scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"transpiration\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19. Carbon Cycle\nLand surface carbon cycle\n19.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of carbon cycle in land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the carbon cycle tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of carbon cycle in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "19.4. Anthropogenic Carbon\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nDescribe the treament of the anthropogenic carbon pool", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grand slam protocol\" \n# \"residence time\" \n# \"decay time\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19.5. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the carbon scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20. Carbon Cycle --&gt; Vegetation\nTODO\n20.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "20.2. Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20.3. 
Forest Stand Dynamics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the treatment of forest stand dyanmics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis\nTODO\n21.1. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen depencence, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration\nTODO\n22.1. Maintainance Respiration\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for maintainence respiration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.2. Growth Respiration\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for growth respiration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23. Carbon Cycle --&gt; Vegetation --&gt; Allocation\nTODO\n23.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the allocation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23.2. Allocation Bins\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify distinct carbon bins used in allocation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"leaves + stems + roots\" \n# \"leaves + stems + roots (leafy + woody)\" \n# \"leaves + fine roots + coarse roots + stems\" \n# \"whole plant (no distinction)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.3. Allocation Fractions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how the fractions of allocation are calculated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"function of vegetation type\" \n# \"function of plant allometry\" \n# \"explicitly calculated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24. Carbon Cycle --&gt; Vegetation --&gt; Phenology\nTODO\n24.1. 
Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the phenology scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "25. Carbon Cycle --&gt; Vegetation --&gt; Mortality\nTODO\n25.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the mortality scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26. Carbon Cycle --&gt; Litter\nTODO\n26.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "26.2. Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26.4. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the general method used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27. Carbon Cycle --&gt; Soil\nTODO\n27.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "27.2. Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27.4. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the general method used", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.carbon_cycle.soil.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28. Carbon Cycle --&gt; Permafrost Carbon\nTODO\n28.1. Is Permafrost Included\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs permafrost included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "28.2. Emitted Greenhouse Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the GHGs emitted", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28.4. Impact On Soil Properties\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the impact of permafrost on soil properties", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29. Nitrogen Cycle\nLand surface nitrogen cycle\n29.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the nitrogen cycle in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the notrogen cycle tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of nitrogen cycle in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "29.4. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the nitrogen scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30. River Routing\nLand surface river routing\n30.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of river routing in the land surface", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.river_routing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the river routing, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of river routing scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.4. Grid Inherited From Land Surface\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the grid inherited from land surface?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "30.5. Grid Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of grid, if not inherited from land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.6. Number Of Reservoirs\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of reservoirs", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.number_of_reservoirs') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.7. Water Re Evaporation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTODO", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.water_re_evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"flood plains\" \n# \"irrigation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.8. Coupled To Atmosphere\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIs river routing coupled to the atmosphere model component?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "30.9. Coupled To Land\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the coupling between land and rivers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_land') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.10. Quantities Exchanged With Atmosphere\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf couple to atmosphere, which quantities are exchanged between river routing and the atmosphere model components?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.11. Basin Flow Direction Map\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat type of basin flow direction map is being used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"adapted for other periods\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.12. Flooding\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the representation of flooding, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.flooding') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.13. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the river routing", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "31. River Routing --&gt; Oceanic Discharge\nTODO\n31.1. Discharge Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify how rivers are discharged to the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"direct (large rivers)\" \n# \"diffuse\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.2. Quantities Transported\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nQuantities that are exchanged from river-routing to the ocean model component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32. Lakes\nLand surface lakes\n32.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of lakes in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.2. Coupling With Rivers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre lakes coupled to the river routing model component?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.coupling_with_rivers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "32.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of lake scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.lakes.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "32.4. Quantities Exchanged With Rivers\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf coupling with rivers, which quantities are exchanged between the lakes and rivers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.5. Vertical Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the vertical grid of lakes", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.vertical_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.6. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the lake scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "33. Lakes --&gt; Method\nTODO\n33.1. Ice Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs lake ice included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.ice_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "33.2. Albedo\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of lake albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33.3. Dynamics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich dynamics of lakes are treated? horizontal, vertical, etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No lake dynamics\" \n# \"vertical\" \n# \"horizontal\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33.4. Dynamic Lake Extent\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs a dynamic lake extent scheme included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "33.5. Endorheic Basins\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBasins not flowing to ocean included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.endorheic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "34. Lakes --&gt; Wetlands\nTODO\n34.1. 
Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the treatment of wetlands, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.wetlands.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ericmjl/Network-Analysis-Made-Simple
archive/6-bipartite-graphs-student.ipynb
mit
[ "import networkx as nx\nfrom custom import load_data as cf\nfrom networkx.algorithms import bipartite\nfrom nxviz import CircosPlot\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n%load_ext autoreload\n%autoreload 2\n%matplotlib inline\n%config InlineBackend.figure_format = 'retina'", "Introduction\nBipartite graphs are graphs that have two (bi-) partitions (-partite) of nodes. Nodes within each partition are not allowed to be connected to one another; rather, they can only be connected to nodes in the other partition.\nBipartite graphs can be useful for modelling relations between two sets of entities. We will explore the construction and analysis of bipartite graphs here.\n\nLet's load a crime data bipartite graph and quickly explore it.\n\nThis bipartite network contains persons who appeared in at least one crime case as either a suspect, a victim, a witness or both a suspect and victim at the same time. A left node represents a person and a right node represents a crime. An edge between two nodes shows that the left node was involved in the crime represented by the right node.", "G = cf.load_crime_network()\nlist(G.edges(data=True))[0:5]\n\nlist(G.nodes(data=True))[0:10]", "Projections\nBipartite graphs can be projected down to one of the projections. For example, we can generate a person-person graph from the person-crime graph, by declaring that two nodes that share a crime node are in fact joined by an edge.\n\nExercise\nFind the bipartite projection function in the NetworkX bipartite module docs, and use it to obtain the unipartite projection of the bipartite graph. (5 min.)", "person_nodes = \npG = \nlist(pG.nodes(data=True))[0:5]", "Exercise\nTry visualizing the person-person crime network by using a Circos plot. Ensure that the nodes are grouped by gender and then by number of connections. (5 min.)\nAgain, recapping the Circos Plot API:\npython\nc = CircosPlot(graph_object, node_color='metadata_key1', node_grouping='metadata_key2', node_order='metadat_key3')\nc.draw()\nplt.show() # or plt.savefig('...')", "for n, d in pG.nodes(data=True):\n ____________________\nc = CircosPlot(______, node_color=_________, node_grouping=_________, node_order=__________)\n_________\nplt.savefig('images/crime-person.png', dpi=300)", "Exercise\nUse a similar logic to extract crime links. (2 min.)", "crime_nodes = _________\ncG = _____________ # cG stands for \"crime graph\"", "Exercise\nCan you plot how the crimes are connected, using a Circos plot? Try ordering it by number of connections. (5 min.)", "for n in cG.nodes():\n ___________\n\nc = CircosPlot(___________)\n___________\nplt.savefig('images/crime-crime.png', dpi=300)", "Exercise\nNetworkX also implements centrality measures for bipartite graphs, which allows you to obtain their metrics without first converting to a particular projection. This is useful for exploratory data analysis. \nTry the following challenges, referring to the API documentation to help you:\n\nWhich crimes have the most number of people involved?\nWhich people are involved in the most number of crimes?\n\nExercise total: 5 min.", "# Degree Centrality\nbpdc = _______________________\nsorted(___________, key=lambda x: ___, reverse=True)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
google-research/meta-dataset
Intro_to_Metadataset.ipynb
apache-2.0
[ "Copyright 2019 Google LLC.\nLicensed under the Apache License, Version 2.0 (the \"License\");", "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Using the Meta-Dataset Data Pipeline\nThis notebook shows how to use meta_dataset’s input pipeline to sample data for the Meta-Dataset benchmark. There are two main ways in which data is sampled:\n1. episodic: Returns N-way classification episodes, which contain a support (training) set and a query (test) set. The number of classes (N) may vary from episode to episode.\n2. batch: Returns batches of images and their corresponding label, sampled from all available classes.\nWe first import meta_dataset and other required packages, and define utility functions for visualization. We’ll make use of meta_dataset.data.learning_spec and meta_dataset.data.pipeline; their purpose will be made clear later on.", "#@title Imports and Utility Functions\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\nfrom collections import Counter\nimport gin\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\nfrom meta_dataset.data import config\nfrom meta_dataset.data import dataset_spec as dataset_spec_lib\nfrom meta_dataset.data import learning_spec\nfrom meta_dataset.data import pipeline\n\n\ndef plot_episode(support_images, support_class_ids, query_images,\n query_class_ids, size_multiplier=1, max_imgs_per_col=10,\n max_imgs_per_row=10):\n for name, images, class_ids in zip(('Support', 'Query'),\n (support_images, query_images),\n (support_class_ids, query_class_ids)):\n n_samples_per_class = Counter(class_ids)\n n_samples_per_class = {k: min(v, max_imgs_per_col)\n for k, v in n_samples_per_class.items()}\n id_plot_index_map = {k: i for i, k\n in enumerate(n_samples_per_class.keys())}\n num_classes = min(max_imgs_per_row, len(n_samples_per_class.keys()))\n max_n_sample = max(n_samples_per_class.values())\n figwidth = max_n_sample\n figheight = num_classes\n if name == 'Support':\n print('#Classes: %d' % len(n_samples_per_class.keys()))\n figsize = (figheight * size_multiplier, figwidth * size_multiplier)\n fig, axarr = plt.subplots(\n figwidth, figheight, figsize=figsize)\n fig.suptitle('%s Set' % name, size='20')\n fig.tight_layout(pad=3, w_pad=0.1, h_pad=0.1)\n reverse_id_map = {v: k for k, v in id_plot_index_map.items()}\n for i, ax in enumerate(axarr.flat):\n ax.patch.set_alpha(0)\n # Print the class ids, this is needed since, we want to set the x axis\n # even there is no picture.\n ax.set(xlabel=reverse_id_map[i % figheight], xticks=[], yticks=[])\n ax.label_outer()\n for image, class_id in zip(images, class_ids):\n # First decrement by one to find last spot for the class id.\n n_samples_per_class[class_id] -= 1\n # If class column is filled or not represented: pass.\n if (n_samples_per_class[class_id] < 0 or\n id_plot_index_map[class_id] >= max_imgs_per_row):\n continue\n # If width or height is 1, then axarr is a vector.\n if axarr.ndim == 1:\n ax = 
axarr[n_samples_per_class[class_id]\n if figheight == 1 else id_plot_index_map[class_id]]\n else:\n ax = axarr[n_samples_per_class[class_id], id_plot_index_map[class_id]]\n ax.imshow(image / 2 + 0.5)\n plt.show()\n\n\ndef plot_batch(images, labels, size_multiplier=1):\n num_examples = len(labels)\n figwidth = np.ceil(np.sqrt(num_examples)).astype('int32')\n figheight = num_examples // figwidth\n figsize = (figwidth * size_multiplier, (figheight + 1.5) * size_multiplier)\n _, axarr = plt.subplots(figwidth, figheight, dpi=300, figsize=figsize)\n\n for i, ax in enumerate(axarr.transpose().ravel()):\n # Images are between -1 and 1.\n ax.imshow(images[i] / 2 + 0.5)\n ax.set(xlabel=labels[i], xticks=[], yticks=[])\n plt.show()", "Primers\n\nDownload your data and process it as explained in link. Set BASE_PATH pointing the processed tf-records ($RECORDS in the conversion instructions).\nmeta_dataset supports many different setting for sampling data. We use gin-config to control default parameters of our functions. You can go to default gin file we are pointing and see the default values.\nYou can use meta_dataset in eager or graph mode.\nLet's write a generator that makes the right calls to return data from dataset. dataset.make_one_shot_iterator() returns an iterator where each element is an episode.\nSPLIT is used to define which part of the meta-split is going to be used. Different splits have different classes and the details on how they are created can be found in the paper.", "# 1\nBASE_PATH = '/path/to/records'\nGIN_FILE_PATH = 'meta_dataset/learn/gin/setups/data_config.gin'\n# 2\ngin.parse_config_file(GIN_FILE_PATH)\n# 3\n# Comment out to disable eager execution.\ntf.enable_eager_execution()\n# 4\ndef iterate_dataset(dataset, n):\n if not tf.executing_eagerly():\n iterator = dataset.make_one_shot_iterator()\n next_element = iterator.get_next()\n with tf.Session() as sess:\n for idx in range(n):\n yield idx, sess.run(next_element)\n else:\n for idx, episode in enumerate(dataset):\n if idx == n:\n break\n yield idx, episode\n# 5\nSPLIT = learning_spec.Split.TRAIN", "Reading datasets\nIn order to sample data, we need to read the dataset_spec files for each dataset. Following snippet reads those files into a list.", "ALL_DATASETS = ['aircraft', 'cu_birds', 'dtd', 'fungi', 'ilsvrc_2012',\n 'omniglot', 'quickdraw', 'vgg_flower']\n\nall_dataset_specs = []\nfor dataset_name in ALL_DATASETS:\n dataset_records_path = os.path.join(BASE_PATH, dataset_name)\n dataset_spec = dataset_spec_lib.load_dataset_spec(dataset_records_path)\n all_dataset_specs.append(dataset_spec)", "(1) Episodic Mode\nmeta_dataset uses tf.data.Dataset API and it takes one call to pipeline.make_multisource_episode_pipeline(). We loaded or defined most of the variables used during this call above. The remaining parameters are explained below:\n\nuse_bilevel_ontology_list: This is a list of booleans indicating whether corresponding dataset in ALL_DATASETS should use bilevel ontology. Omniglot is set up with a hierarchy with two level: the alphabet (Latin, Inuktitut...), and the character (with 20 examples per character).\nThe flag means that each episode will contain classes from a single alphabet. \nuse_dag_ontology_list: This is a list of booleans indicating whether corresponding dataset in ALL_DATASETS should use dag_ontology. Same idea for ImageNet, except it uses the hierarchical sampling procedure described in the article.\nimage_size: All images from various datasets are down or upsampled to the same size. 
This is the flag controls the edge size of the square.\nshuffle_buffer_size: Controls the amount of shuffling among examples from any given class.", "use_bilevel_ontology_list = [False]*len(ALL_DATASETS)\nuse_dag_ontology_list = [False]*len(ALL_DATASETS)\n# Enable ontology aware sampling for Omniglot and ImageNet. \nuse_bilevel_ontology_list[5] = True\nuse_dag_ontology_list[4] = True\nvariable_ways_shots = config.EpisodeDescriptionConfig(\n num_query=None, num_support=None, num_ways=None)\n\ndataset_episodic = pipeline.make_multisource_episode_pipeline(\n dataset_spec_list=all_dataset_specs,\n use_dag_ontology_list=use_dag_ontology_list,\n use_bilevel_ontology_list=use_bilevel_ontology_list,\n episode_descr_config=variable_ways_shots,\n split=SPLIT,\n image_size=84,\n shuffle_buffer_size=300)", "Using Dataset\n\nThe episodic dataset consist in a tuple of the form (Episode, data source ID). The data source ID is an integer Tensor containing a value in the range [0, len(all_dataset_specs) - 1]\nsignifying which of the datasets of the multisource pipeline the given episode\ncame from. Episodes consist of support and query sets and we want to learn to classify images at the query set correctly given the support images. For both support and query set we have images, labels and class_ids. Labels are transformed class_ids offset to zero, so that global class_ids are set to [0, N] where N is the number of classes in an episode.\nAs one can see the number of images in query set and support set is different. Images are scaled, copied into 84*84*3 tensors. Labels are presented in two forms:\n*_labels are relative to the classes selected for the current episode only. They are used as targets for this episode.\n*_class_ids are the original class ids relative to the whole dataset. They are used for visualization and diagnostics.\nIt easy to convert tensors of the episode into numpy arrays and use them outside of the Tensorflow framework.\nClasses might have different number of samples in the support set, whereas each class has 10 samples in the query set.", "# 1\nidx, (episode, source_id) = next(iterate_dataset(dataset_episodic, 1))\nprint('Got an episode from dataset:', all_dataset_specs[source_id].name)\n\n# 2\nfor t, name in zip(episode,\n ['support_images', 'support_labels', 'support_class_ids',\n 'query_images', 'query_labels', 'query_class_ids']):\n print(name, t.shape)\n\n# 3\nepisode = [a.numpy() for a in episode]\n\n# 4\nsupport_class_ids, query_class_ids = episode[2], episode[5]\nprint(Counter(support_class_ids))\nprint(Counter(query_class_ids))", "Visualizing Episodes\nLet's visualize the episodes. \n\nSupport and query set for each episode plotted sequentially. Set N_EPISODES to control number of episodes visualized.\nEach episode is sampled from a single dataset and include N different classes. Each class might have different number of samples in support set, whereas number of images in query set is fixed. We limit number of classes and images per class to 10 in order to create legible plots. Actual episodes might have more classes and samples. 
\nEach column represents a distinct class and dataset specific class ids are plotted on the x_axis.", "# 1\nN_EPISODES=2\n# 2, 3\nfor idx, (episode, source_id) in iterate_dataset(dataset_episodic, N_EPISODES):\n print('Episode id: %d from source %s' % (idx, all_dataset_specs[source_id].name))\n episode = [a.numpy() for a in episode]\n plot_episode(support_images=episode[0], support_class_ids=episode[2],\n query_images=episode[3], query_class_ids=episode[5])", "(2) Batch Mode\nSecond mode that meta_dataset library provides is the batch mode, where one can sample batches from the list of datasets in a non-episodic manner and use it to train baseline models. There are couple things to note here:\n\nEach batch is sampled from a different dataset.\nADD_DATASET_OFFSET controls whether the class_id's returned by the iterator overlaps among different datasets or not. A dataset specific offset is added in order to make returned ids unique.\nmake_multisource_batch_pipeline() creates a tf.data.Dataset object that returns datasets of the form (Batch, data source ID) where similarly to the\nepisodic case, the data source ID is an integer Tensor that identifies which\ndataset the given batch originates from.\nshuffle_buffer_size controls the amount of shuffling done among examples from a given dataset (unlike for the episodic pipeline).", "BATCH_SIZE = 16\nADD_DATASET_OFFSET = True\n\ndataset_batch = pipeline.make_multisource_batch_pipeline(\n dataset_spec_list=all_dataset_specs, batch_size=BATCH_SIZE, split=SPLIT,\n image_size=84, add_dataset_offset=ADD_DATASET_OFFSET,\n shuffle_buffer_size=1000)\n\nfor idx, ((images, labels), source_id) in iterate_dataset(dataset_batch, 1):\n print(images.shape, labels.shape)\n\nN_BATCH = 2\nfor idx, (batch, source_id) in iterate_dataset(dataset_batch, N_BATCH):\n print('Batch-%d from source %s' % (idx, all_dataset_specs[source_id].name))\n plot_batch(*map(lambda a: a.numpy(), batch), size_multiplier=0.5)", "(3) Fixing Ways and Shots\n\nmeta_dataset library provides option to set number of classes/samples per episode. There are 3 main flags you can set. \nNUM_WAYS: Fixes the # classes per episode. We would still get variable number of samples per class in the support set.\nNUM_SUPPORT: Fixes # samples per class in the support set.\nNUM_SUPPORT: Fixes # samples per class in the query set.\n\n\nIf we want to use fixed num_ways, we have to disable ontology based sampling for omniglot and imagenet. We advise using single dataset for using this feature, since using multiple datasets is not supported/tested. In this notebook, we are using Quick, Draw! 
Dataset.\nWe sample episodes and visualize them as we did earlier.", "#1\nNUM_WAYS = 8\nNUM_SUPPORT = 3\nNUM_QUERY = 5\nfixed_ways_shots = config.EpisodeDescriptionConfig(\n num_ways=NUM_WAYS, num_support=NUM_SUPPORT, num_query=NUM_QUERY)\n\n#2\nuse_bilevel_ontology_list = [False]*len(ALL_DATASETS)\nuse_dag_ontology_list = [False]*len(ALL_DATASETS)\nquickdraw_spec = [all_dataset_specs[6]]\n#3\ndataset_fixed = pipeline.make_multisource_episode_pipeline(\n dataset_spec_list=quickdraw_spec, use_dag_ontology_list=[False],\n use_bilevel_ontology_list=use_bilevel_ontology_list, split=SPLIT,\n image_size=84, episode_descr_config=fixed_ways_shots)\n\nN_EPISODES = 2\nfor idx, (episode, source_id) in iterate_dataset(dataset_fixed, N_EPISODES):\n print('Episode id: %d from source %s' % (idx, quickdraw_spec[source_id].name))\n episode = [a.numpy() for a in episode]\n plot_episode(support_images=episode[0], support_class_ids=episode[2],\n query_images=episode[3], query_class_ids=episode[5])", "(4) Using Meta-dataset with PyTorch\nAs mentioned above it is super easy to consume meta_dataset as NumPy arrays. This also enables easy integration into other popular deep learning frameworks like PyTorch. TensorFlow code processes the data and passes it to PyTorch, ready to be consumed. Since the data loader and processing steps do not have any operation on the GPU, TF should not attempt to grab the GPU, and it should be available for PyTorch.\n1. Let's use an episodic dataset created earlier, dataset_episodic, and build on top of it. We will transpose tensor to CHW, which is the common order used by convolutional layers of PyTorch. \n2. We will use zero-indexed labels, therefore grabbing e[1] and e[4]. At the end we return a generator that consumes the tf.Dataset. \n3. Using .cuda() on PyTorch tensors should distribute them to appropriate devices.", "import torch\n# 1\nto_torch_labels = lambda a: torch.from_numpy(a.numpy()).long()\nto_torch_imgs = lambda a: torch.from_numpy(np.transpose(a.numpy(), (0, 3, 1, 2)))\n# 2\ndef data_loader(n_batches):\n for i, (e, _) in enumerate(dataset_episodic):\n if i == n_batches:\n break\n yield (to_torch_imgs(e[0]), to_torch_labels(e[1]),\n to_torch_imgs(e[3]), to_torch_labels(e[4]))\n\nfor i, batch in enumerate(data_loader(n_batches=2)):\n #3\n data_support, labels_support, data_query, labels_query = [x.cuda() for x in batch]\n print(data_support.shape, labels_support.shape, data_query.shape, labels_query.shape) " ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
WillRhB/PythonLesssons
Basemap-final.ipynb
mit
[ "Widgets and Interactions", "!conda install -y netcdf4\n\nfrom netCDF4 import Dataset, num2date, date2num\nfrom numpy import * \nimport matplotlib.pyplot as plt \n%matplotlib inline\n\nfrom ipywidgets import interact, interactive, fixed\nimport ipywidgets as widgets \n\nx = linspace(0, 1, 100) # generates a hundred values between 0 and 1\nf = 2\na = 3\n\nplt.plot(x, sin(2*pi*x*f))\n\ndef pltsin(f):\n plt.plot(x, sin(2*pi*x*f))\n\npltsin(0.5)", "Add to the function to allow amplitude to be varied and aadd in an additional slider to vary both f and a\n\nmay want to limit ylim", "interact(pltsin, f=(1, 10, 0.2), x = (1, 10, 0.2))\n\ndef pltsina(f, a):\n plt.plot(x, a*sin(2*pi*x*f))\n plt.ylim(-10.5, 10.5)\n\ninteract(pltsina, f=(1, 10, 0.2), a = (1, 10, 0.2))\n", "Climate data", "f=Dataset ('ncep-data/air.sig995.2013.nc') # get individual data set out of the right folder \n\nair = f.variables['air'] # get variable\n\nplt.imshow(air[0,:,:]) # display first timestep\n\n# Create function to browse through the days \n\ndef sh(time):\n plt.imshow(air[time,:,:])\n\n# Now make it interactive\n\ninteract(sh, time=(0, 355, 1))\n\n# Browse variable \n\ndef sh(time =0, var='air', year = '2013'):\n f=Dataset('ncep-data/'+var+'.sig995.'+year+'.nc')\n vv=f.variables[var]\n plt.imshow(vv[time,:,:])\n\n#Give a list of variables \n\nvariabs =['air', 'uwnd', 'vwnd', 'rhum']\nyear = ['2013', '2014', '2015']\n\n# Now interact with it \n\ninteract(sh, time=(0, 355, 1), year = year, var=variabs)\n\nhelp(sh)\n\nfrom mpl_toolkits.basemap import Basemap\n\n# create north polar sterographic projection \n\nm=Basemap(projection='npstere', boundinglat=60, lon_0=0, resolution ='l')\nm.fillcontinents(color='gray', lake_color='gray')\nm.drawparallels(arange(-80.,81.,20.))\nm.drawmeridians(arange(-180.,181.,20.))\nm.drawmapboundary(fill_color='white')\n\n# Set up some variables \nlon = f.variables['lon'][:]\nlat = f.variables['lat'][:]\nlon, lat = meshgrid(lon, lat)\nx, y = m(lon, lat)\n\ndef sh(time =0, var='air', year = '2013'):\n f=Dataset('ncep-data/'+var+'.sig995.'+year+'.nc')\n vv=f.variables[var]\n tt=f.variables['time']\n dd=num2date(tt[time], tt.units)\n m.fillcontinents(color='gray', lake_color='gray')\n m.drawparallels(arange(-80.,81.,20.))\n m.drawmeridians(arange(-180.,181.,20.))\n m.drawmapboundary(fill_color='white')\n cs = m.contourf(x, y, vv[time,:,:]-273.15)\n\ninteract(sh, year=year, time=(0,355,1), var=variabs)\n\nmy_map = Basemap (projection='merc', lat_0=0, lon_0=30,\n resolution='h', area_thresh=1000.0,\n llcrnrlon=29, llcrnrlat=-1,\n urcrnrlon=31, urcrnrlat=1)\n# area threshold states how rivers etc look - scale, resolution sets resolution, llcrnlon etc sets box, \n# lat and lon decide where you look \nmy_map.drawcoastlines()\n\nmy_map.drawcountries()\nmy_map.fillcontinents(color='coral')\n\nmy_map.drawmapboundary()\n\nmy_map.drawmeridians(arange(0,360,30))\nmy_map.drawparallels(arange(-90, 90, 30))\n\nlon=30\nlat=0\n\nx,y=my_map(lon, lat)\nmy_map.plot(x, y, 'bo', markersize=7.2)\nplt.show() # here the function that decides actually plots \n\n# This just lets the output of the following code samples \n# display inline on this page, at an appropirate size\nfrom pylab import rcParams\n\n# Create a simple basemap \n\nmy_map = Basemap (projection='ortho', lat_0=50, lon_0=0,\n resolution='l', area_thresh=1000.0)\n\nmy_map.drawcoastlines()\n\nmy_map.drawcountries()\nmy_map.fillcontinents(color='red', lake_color='gray')\n\n\nplt.show()", "Plotting some live (ish) earthquake data...\nDownload the data 
first: http://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/1.0_week.csv\nThis will download a file locally; move it into your working directory. Alternatively, use the historic dataset provided in this repo.", "# Check the first few lats and longs\n\nimport csv\n\n# Open the earthquake data file.\nfilename = '1.0_week.csv'\n\n# Create empty lists for the latitudes and longitudes.\nlats, lons, mags = [], [], []\n\n# Read through the entire file, skip the first line,\n# and pull out just the lats and lons.\nwith open(filename) as f:\n    # Create a csv reader object.\n    reader = csv.reader(f)\n    \n    # Ignore the header row.\n    next(reader)\n    \n    # Store the latitudes and longitudes in the appropriate lists.\n    for row in reader:\n        lats.append(float(row[1]))\n        lons.append(float(row[2]))\n        mags.append(float(row[4]))\n    \n# Display the first 5 lats and lons.\nprint('lats', lats[0:5])\nprint('lons', lons[0:5])\nprint('mags', mags[0:5])\n\n### And now create a plot of these on a map projection\n\nimport csv\n\n# Open the earthquake data file.\nfilename = '1.0_week.csv'\n\n# Create empty lists for the latitudes and longitudes.\nlats, lons, mags = [], [], []\n\n# Read through the entire file, skip the first line,\n# and pull out just the lats and lons.\nwith open(filename) as f:\n    # Create a csv reader object.\n    reader = csv.reader(f)\n    \n    # Ignore the header row.\n    next(reader)\n    \n    # Store the latitudes and longitudes in the appropriate lists.\n    for row in reader:\n        lats.append(float(row[1]))\n        lons.append(float(row[2]))\n        mags.append(float(row[4]))\n    \n# --- Build Map ---\nfrom mpl_toolkits.basemap import Basemap\nimport matplotlib.pyplot as plt\nimport numpy as np\n \neq_map = Basemap(projection='robin', resolution = 'l', area_thresh = 1000.0,\n    lat_0=52, lon_0=0)\neq_map.drawcoastlines()\neq_map.drawcountries()\neq_map.fillcontinents(color = 'coral')\neq_map.drawmapboundary()\neq_map.drawmeridians(np.arange(0, 360, 30))\neq_map.drawparallels(np.arange(-90, 90, 30))\n \n# Scale each marker by magnitude and colour it by magnitude band.\nmin_marker_size = 1\nfor lon, lat, mag in zip(lons, lats, mags):\n    x, y = eq_map(lon, lat)\n    msize = mag * min_marker_size\n    if mag >= 5.0:\n        eqcolor = 'r'\n    elif mag >= 1.0:\n        eqcolor = 'g'\n    else:\n        eqcolor = 'y'\n    eq_map.plot(x, y, eqcolor + 'o', markersize=msize)\n \nplt.show()\n\n", "This is great, but one cool enhancement would be to make the size of the point represent the magnitude of the earthquake.\nHere's one way to do it:\nRead the magnitudes into a list along with their respective lat and long\nLoop through the list, plotting one point at a time\nAs the magnitudes start at 1.0, you can just use the magnitude directly as the scale factor\nTo get the marker size, multiply the magnitude by the smallest dot you want on the map.\nAdd an extra enhancement of colour: make small earthquakes one colour and larger ones another (the r/g/y bands in the code above are one way to do this).\nSee if you can get similar data, perhaps for whale sightings, and plot those on a map.\nYou might even have some of your own data to plot.", "x,y" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
chivalrousS/word2vec
examples/doc2vec.ipynb
apache-2.0
[ "doc2vec\nThis is an experimental code developed by Tomas Mikolov found in the word2vec google group: https://groups.google.com/d/msg/word2vec-toolkit/Q49FIrNOQRo/J6KG8mUj45sJ\nThis is not yet available on Pypi you need the latest master branch from git.\nThe input format for doc2vec is still one big text document but every line should be one document prepended with an unique id, for example:\n_*0 This is sentence 1\n_*1 This is sentence 2\nRequirements\n\nnltk\nDownload some data: http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz\nUntar that data: tar -xvf aclImdb_v1.tar.gz\n\nPreprocess\nMerge data into one big document with an id per line and do some basic preprocessing: word tokenizer.", "from __future__ import unicode_literals\n\nimport os\nimport nltk\n\ndirectories = ['train/pos', 'train/neg', 'test/pos', 'test/neg', 'train/unsup']\n\ninput_file = open('/Users/drodriguez/Downloads/alldata.txt', 'w')\n\nid_ = 0\nfor directory in directories:\n rootdir = os.path.join('/Users/drodriguez/Downloads/aclImdb', directory)\n for subdir, dirs, files in os.walk(rootdir):\n for file_ in files:\n with open(os.path.join(subdir, file_), 'r') as f:\n doc_id = '_*%i' % id_\n id_ = id_ + 1\n\n text = f.read()\n text = text.decode('utf-8')\n tokens = nltk.word_tokenize(text)\n doc = ' '.join(tokens).lower()\n doc = doc.encode('ascii', 'ignore')\n input_file.write('%s %s\\n' % (doc_id, doc))\n\ninput_file.close()", "doc2vec", "%load_ext autoreload\n%autoreload 2\n\nimport word2vec\n\nword2vec.doc2vec('/Users/drodriguez/Downloads/alldata.txt', '/Users/drodriguez/Downloads/vectors.bin', cbow=0, size=100, window=10, negative=5, hs=0, sample='1e-4', threads=12, iter_=20, min_count=1, verbose=True)", "Predictions\nIs possible to load the vectors using the same wordvectors class as a regular word2vec binary file.", "%load_ext autoreload\n%autoreload 2\n\nimport word2vec\n\nmodel = word2vec.load('/Users/drodriguez/Downloads/vectors.bin')\n\nmodel.vectors.shape", "The documents vector are going to be identified by the id we used in the preprocesing section, for example document 1 is going to have vector:", "model['_*1']", "We can ask for similarity words or documents on document 1", "indexes, metrics = model.cosine('_*1')\n\nmodel.generate_response(indexes, metrics).tolist()", "Now its just a case of matching the id to the data created on the preprocessing step" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
konstantinstadler/pymrio
doc/source/notebooks/working_with_exiobase.ipynb
gpl-3.0
[ "Working with the EXIOBASE EE MRIO database\nGetting EXIOBASE\nEXIOBASE 1 (developed in the fp6 project EXIOPOL), EXIOBASE 2 (outcome of the fp7 project CREEA) and EXIOBASE 3 (outcome of the fp7 project DESIRE) are available on the EXIOBASE webpage.\nYou need to register before you can download the full dataset.\nFurther information on the different EXIOBASE versions can be found in corresponding method papers.\n\nEXIOBASE 1: Tukker et al. 2013. Exiopol – Development and Illustrative Analyses of a Detailed Global MR EE SUT/IOT. Economic Systems Research 25(1), 50-70 \nEXIOBASE 2: Wood et al. 2015. Global Sustainability Accounting—Developing EXIOBASE for Multi-Regional Footprint Analysis. Sustainability 7(1), 138-163\nEXIOBASE 3: Stadler et al. 2018. EXIOBASE 3: Developing a Time Series of Detailed Environmentally Extended Multi‐Regional Input‐Output Tables. Journal of Industrial Ecology 22(3), 502-515\n\nEXIOBASE 1\nTo download EXIOBASE 1 for the use with pymrio, navigate to the EXIOBASE webpage - section(tab) \"Data Download\" - \"EXIOBASE 1 - full dataset\" and download either \n\n\npxp_ita_44_regions_coeff_txt for the product by product (pxp) MRIO system or\n\n\nixi_fpa_44_regions_coeff_txt for the industry by industry (ixi) MRIO system or\n\n\npxp_ita_44_regions_coeff_src_txt for the product by product (pxp) MRIO system with emission data per source or\n\n\nixi_fpa_44_regions_coeff_src_txt for the industry by industry (ixi) wMRIO system with emission data per source.\n\n\nThe links above directly lead to the required file(s), but remember that you need to be logged in to access them.\nThe Pymrio parser works with the compressed (zip) files as well as the unpacked files. If you want to unpack the files, make sure that you store them in different folders since they unpack in the current directory.\nEXIOBASE 2\nEXIOBASE 3 is available at the EXIOBASE webpage at the section (tab) tab \"Data Download\" - \"EXIOBASE 2 - full dataset\".\nYou can download either \n\n\nMrIOT PxP ita coefficient version2 2 2 for the product by product (pxp) MRIO system or\n\n\nMrIOT IxI fpa coefficient version2 2 2 for the industry by industry (ixi) MRIO system.\n\n\nThe links above directly lead to the required file(s), but remember that you need to be logged in to access them.\nThe pymrio parser works with the compressed (zip) files as well as the unpacked files. You can unpack the files together in one directory (unpacking creates a separate folder for each EXIOBASE 2 version). The unpacking of the PxP version also creates a folder \"__MACOSX\" - you can delete this folder.\nEXIOBASE 3\nEXIOBASE 3 is available at the EXIOBASE webpage at the section (tab) tab \"Data Download\" - \"EXIOBASE 3 - monetary\".\nThe EXIOBASE 3 parser works with both, the compressed zip archives and the extracted database.\nParsing", "import pymrio", "For each publically available version of EXIOBASE pymrio provides a specific parser. 
\nAll EXIOBASE parsers work with the zip archive (as downloaded from the EXIOBASE webpage) or with the extracted data.\nTo parse EXIOBASE 1 use:", "exio1 = pymrio.parse_exiobase1(path='/tmp/mrios/exio1/zip/121016_EXIOBASE_pxp_ita_44_regions_coeff_txt.zip')", "The parameter 'path' needs to point to either the folder with the extracted EXIOBASE 1 files or the downloaded zip archive.\nSimilarly, EXIOBASE 2 can be parsed by:", "exio2 = pymrio.parse_exiobase2(path='/tmp/mrios/exio2/zip/mrIOT_PxP_ita_coefficient_version2.2.2.zip',\n    charact=True, popvector='exio2')", "The additional parameter 'charact' specifies if the characterization matrix provided with EXIOBASE 2 should be used. This can be specified with True or False; in addition, a custom one can be provided. In the latter case, pass the full path to the custom characterisation file to 'charact'.\nThe parameter 'popvector' allows you to pass information about the population per EXIOBASE 2 country. This can either be a custom vector or, if 'exio2' is passed, the one provided with pymrio.\nEXIOBASE 3 can be parsed by:", "exio3 = pymrio.parse_exiobase3(path='/tmp/mrios/exio3/zip/exiobase3.4_iot_2009_pxp.zip')", "Currently, no characterization or population vectors are provided for EXIOBASE 3.\nFor the rest of the tutorial we use exio2, deleting exio1 and exio3 to free some memory:", "del exio1\ndel exio3", "Exploring EXIOBASE\nAfter parsing an EXIOBASE version, the handling of the database is the same as for any IO system. \nHere we use the parsed EXIOBASE 2 to explore some characteristics of the EXIOBASE system.\nAfter reading the raw files, metadata about EXIOBASE can be accessed within the meta field:", "exio2.meta", "Custom points can be added to the history in the meta record. For example:", "exio2.meta.note(\"First test run of EXIOBASE 2\")\nexio2.meta", "To check for sectors, regions and extensions:", "exio2.get_sectors()\n\nexio2.get_regions()\n\nlist(exio2.get_extensions())", "Calculating the system and extension results\nThe following command checks for missing parts in the system and calculates them. In the case of the parsed EXIOBASE this includes A, L, the multipliers M, the footprint accounts, etc.", "exio2.calc_all()", "Exploring the results", "import matplotlib.pyplot as plt\n\nplt.figure(figsize=(15,15))\nplt.imshow(exio2.A, vmax=1E-3)\nplt.xlabel('Countries - sectors')\nplt.ylabel('Countries - sectors')\nplt.show()", "The available impact data can be checked with:", "list(exio2.impact.get_rows())", "And to get, for example, the footprint of a specific impact, do:", "print(exio2.impact.unit.loc['global warming (GWP100)'])\nexio2.impact.D_cba_reg.loc['global warming (GWP100)']", "Visualizing the data", "with plt.style.context('ggplot'):\n    exio2.impact.plot_account(['global warming (GWP100)'], figsize=(15,10))\n    plt.show()", "See the other notebooks for further information on aggregation and file io." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
junpenglao/Bayesian-Cognitive-Modeling-in-Pymc3
CaseStudies/TheBARTModelofRiskTaking.ipynb
gpl-3.0
[ "import pymc3 as pm\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n%qtconsole --colors=linux\nplt.style.use('ggplot')\n\nfrom matplotlib import gridspec\nfrom theano import tensor as tt\nfrom scipy import stats", "Chapter 16 - The BART model of risk taking\n16.1 The BART model\nBalloon Analogue Risk Task (BART: Lejuez et al., 2002): Every trial in this task starts by showing a balloon representing a small monetary value. The subject can then either transfer the money to a virtual bank account, or choose to pump, which adds a small amount of air to the balloon, and increases its value. There is some probability, however, that pumping the balloon will cause it to burst, causing all the money to be lost. A trial finishes when either the subject has transferred the money, or the balloon has burst.\n$$ \\gamma^{+} \\sim \\text{Uniform}(0,10) $$\n$$ \\beta \\sim \\text{Uniform}(0,10) $$\n$$ \\omega = -\\gamma^{+} \\,/\\,\\text{log}(1-p) $$\n$$ \\theta_{jk} = \\frac{1} {1+e^{\\beta(k-\\omega)}} $$\n$$ d_{jk} \\sim \\text{Bernoulli}(\\theta_{jk}) $$", "p = .15 # (Belief of) bursting probability\nntrials = 90 # Number of trials for the BART\n\nData = pd.read_csv('data/GeorgeSober.txt', sep='\\t')\n# Data.head()\ncash = np.asarray(Data['cash']!=0, dtype=int)\nnpumps = np.asarray(Data['pumps'], dtype=int)\n\noptions = cash + npumps\n\nd = np.full([ntrials,30], np.nan)\nk = np.full([ntrials,30], np.nan)\n# response vector\nfor j, ipumps in enumerate(npumps):\n inds = np.arange(options[j],dtype=int)\n k[j,inds] = inds+1\n if ipumps > 0:\n d[j,0:ipumps] = 0\n if cash[j] == 1:\n d[j,ipumps] = 1\n \nindexmask = np.isfinite(d)\nd = d[indexmask]\nk = k[indexmask]\n\nwith pm.Model():\n gammap = pm.Uniform('gammap', lower=0, upper=10, testval=1.2)\n beta = pm.Uniform('beta', lower=0, upper=10, testval=.5)\n omega = pm.Deterministic('omega', -gammap/np.log(1-p))\n \n thetajk = 1 - pm.math.invlogit(- beta * (k - omega))\n \n djk = pm.Bernoulli('djk', p=thetajk, observed=d)\n \n trace = pm.sample(3e3, njobs=2)\n \npm.traceplot(trace, varnames=['gammap', 'beta']);\n\nfrom scipy.stats.kde import gaussian_kde\nburnin=2000\ngammaplus = trace['gammap'][burnin:]\nbeta = trace['beta'][burnin:]\n\nfig = plt.figure(figsize=(15, 5))\ngs = gridspec.GridSpec(1, 3)\n\nax0 = plt.subplot(gs[0])\nax0.hist(npumps, bins=range(1, 9), rwidth=.8, align='left')\nplt.xlabel('Number of Pumps', fontsize=12)\nplt.ylabel('Frequency', fontsize=12)\n\nax1 = plt.subplot(gs[1])\nmy_pdf1 = gaussian_kde(gammaplus)\nx1=np.linspace(.5, 1, 200)\nax1.plot(x1, my_pdf1(x1), 'k', lw=2.5, alpha=0.6) # distribution function\nplt.xlim((.5, 1))\nplt.xlabel(r'$\\gamma^+$', fontsize=15)\nplt.ylabel('Posterior Density', fontsize=12)\n\nax2 = plt.subplot(gs[2])\nmy_pdf2 = gaussian_kde(beta)\nx2=np.linspace(0.3, 1.3, 200)\nax2.plot(x2, my_pdf2(x2), 'k', lw=2.5, alpha=0.6,) # distribution function\nplt.xlim((0.3, 1.3))\nplt.xlabel(r'$\\beta$', fontsize=15)\nplt.ylabel('Posterior Density', fontsize=12);", "16.2 A hierarchical extension of the BART model\n$$ \\mu_{\\gamma^{+}} \\sim \\text{Uniform}(0,10) $$\n$$ \\sigma_{\\gamma^{+}} \\sim \\text{Uniform}(0,10) $$\n$$ \\mu_{\\beta} \\sim \\text{Uniform}(0,10) $$\n$$ \\sigma_{\\beta} \\sim \\text{Uniform}(0,10) $$\n$$ \\gamma^{+}i \\sim \\text{Gaussian}(\\mu{\\gamma^{+}}, 1/\\sigma_{\\gamma^{+}}^2) $$\n$$ \\beta_i \\sim \\text{Gaussian}(\\mu_{\\beta}, 1/\\sigma_{\\beta}^2) $$\n$$ \\omega_i = 
-\\gamma^{+}i \\,/\\,\\text{log}(1-p) $$\n$$ \\theta{ijk} = \\frac{1} {1+e^{\\beta_i(k-\\omega_i)}} $$\n$$ d_{ijk} \\sim \\text{Bernoulli}(\\theta_{ijk}) $$", "p = .15 # (Belief of) bursting probability\nntrials = 90 # Number of trials for the BART\nNcond = 3\n\ndall = np.full([Ncond,ntrials,30], np.nan)\noptions = np.zeros((Ncond,ntrials))\nkall = np.full([Ncond,ntrials,30], np.nan)\nnpumps_ = np.zeros((Ncond,ntrials))\n\nfor icondi in range(Ncond):\n if icondi == 0:\n Data = pd.read_csv('data/GeorgeSober.txt',sep='\\t')\n elif icondi == 1:\n Data = pd.read_csv('data/GeorgeTipsy.txt',sep='\\t')\n elif icondi == 2:\n Data = pd.read_csv('data/GeorgeDrunk.txt',sep='\\t')\n # Data.head()\n cash = np.asarray(Data['cash']!=0, dtype=int)\n npumps = np.asarray(Data['pumps'], dtype=int)\n npumps_[icondi,:] = npumps\n options[icondi,:] = cash + npumps\n # response vector\n for j, ipumps in enumerate(npumps):\n inds = np.arange(options[icondi,j],dtype=int)\n kall[icondi,j,inds] = inds+1\n if ipumps > 0:\n dall[icondi,j,0:ipumps] = 0\n if cash[j] == 1:\n dall[icondi,j,ipumps] = 1\n \nindexmask = np.isfinite(dall)\ndij = dall[indexmask]\nkij = kall[indexmask]\ncondall = np.tile(np.arange(Ncond,dtype=int),(30,ntrials,1))\ncondall = np.swapaxes(condall,0,2)\ncij = condall[indexmask]\n\nwith pm.Model() as model2:\n mu_g = pm.Uniform('mu_g', lower=0, upper=10)\n sigma_g = pm.Uniform('sigma_g', lower=0, upper=10)\n mu_b = pm.Uniform('mu_b', lower=0, upper=10)\n sigma_b = pm.Uniform('sigma_b', lower=0, upper=10)\n \n gammap = pm.Normal('gammap', mu=mu_g, sd=sigma_g, shape=Ncond)\n beta = pm.Normal('beta', mu=mu_b, sd=sigma_b, shape=Ncond)\n \n omega = -gammap[cij]/np.log(1-p)\n thetajk = 1 - pm.math.invlogit(- beta[cij] * (kij - omega))\n \n djk = pm.Bernoulli(\"djk\", p=thetajk, observed=dij)\n \n approx = pm.fit(n=100000, method='advi',\n obj_optimizer=pm.adagrad_window\n ) # type: pm.MeanField\n start = approx.sample(draws=2, include_transformed=True)\n trace2 = pm.sample(3e3, njobs=2, init='adapt_diag', start=list(start))\n \npm.traceplot(trace2, varnames=['gammap', 'beta']);\n\nburnin=1000\ngammaplus = trace2['gammap'][burnin:]\nbeta = trace2['beta'][burnin:]\nylabels = ['Sober', 'Tipsy', 'Drunk']\n\nfig = plt.figure(figsize=(15, 12))\ngs = gridspec.GridSpec(3, 3)\nfor ic in range(Ncond):\n ax0 = plt.subplot(gs[0+ic*3])\n ax0.hist(npumps_[ic], bins=range(1, 10), rwidth=.8, align='left')\n plt.xlabel('Number of Pumps', fontsize=12)\n plt.ylabel(ylabels[ic], fontsize=12)\n\n ax1 = plt.subplot(gs[1+ic*3])\n my_pdf1 = gaussian_kde(gammaplus[:, ic])\n x1=np.linspace(.5, 1.8, 200)\n ax1.plot(x1, my_pdf1(x1), 'k', lw=2.5, alpha=0.6) # distribution function\n plt.xlim((.5, 1.8))\n plt.xlabel(r'$\\gamma^+$', fontsize=15)\n plt.ylabel('Posterior Density', fontsize=12)\n\n ax2 = plt.subplot(gs[2+ic*3])\n my_pdf2 = gaussian_kde(beta[:, ic])\n x2=np.linspace(0.1, 1.5, 200)\n ax2.plot(x2, my_pdf2(x2), 'k', lw=2.5, alpha=0.6) # distribution function\n plt.xlim((0.1, 1.5))\n plt.xlabel(r'$\\beta$', fontsize=15)\n plt.ylabel('Posterior Density', fontsize=12);" ]
[ "code", "markdown", "code", "markdown", "code" ]
nproctor/phys202-2015-work
assignments/assignment06/InteractEx05.ipynb
mit
[ "Interact Exercise 5\nImports\nPut the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.", "%matplotlib inline\nfrom matplotlib import pyplot as plt\nimport numpy as np\n\nfrom IPython.html.widgets import interact, interactive, fixed\nfrom IPython.html import widgets\nfrom IPython.display import display\n\nfrom IPython.display import (\n display_pretty, display_html, display_jpeg,\n display_png, display_json, display_latex, display_svg\n)\nfrom IPython.display import SVG", "Interact with SVG display\nSVG is a simple way of drawing vector graphics in the browser. Here is a simple example of how SVG can be used to draw a circle in the Notebook:", "s = \"\"\"\n<svg width=\"100\" height=\"100\">\n <circle cx=\"50\" cy=\"50\" r=\"20\" fill=\"aquamarine\" />\n</svg>\n\"\"\"\n\nS = SVG(s)\ndisplay(S)", "Write a function named draw_circle that draws a circle using SVG. Your function should take the parameters of the circle as function arguments and have defaults as shown. You will have to write the raw SVG code as a Python string and then use the IPython.display.SVG object and IPython.display.display function.", "def draw_circle(width=100, height=100, cx=25, cy=25, r=5, fill='red'):\n \"\"\"Draw an SVG circle.\n \n Parameters\n ----------\n width : int\n The width of the svg drawing area in px.\n height : int\n The height of the svg drawing area in px.\n cx : int\n The x position of the center of the circle in px.\n cy : int\n The y position of the center of the circle in px.\n r : int\n The radius of the circle in px.\n fill : str\n The fill color of the circle.\n \"\"\"\n \n listed = ['<svg width=','\"',str(width),'\"',' height=','\"', str(height),'\"', '> <circle cx=','\"', str(cx),'\"',' cy=','\"',\n str(cy),'\"',' r=','\"', str(r),'\"', ' fill=','\"',fill,'\"',' /> </svg>']\n s = \"\".join(listed)\n S = SVG(s)\n display(S)\n print(s)\n \n\ndraw_circle(cx=30, cy=30, r=30, fill='plum')\n\n\nassert True # leave this to grade the draw_circle function", "Use interactive to build a user interface for exploing the draw_circle function:\n\nwidth: a fixed value of 300px\nheight: a fixed value of 300px\ncx/cy: a slider in the range [0,300]\nr: a slider in the range [0,50]\nfill: a text area in which you can type a color's name\n\nSave the return value of interactive to a variable named w.", "w = interactive(draw_circle, width=fixed(300), height=fixed(300), cx=(0,300,1), cy=(0,300,1), r=(0,50,1), fill='red')\n\nc = w.children\nassert c[0].min==0 and c[0].max==300\nassert c[1].min==0 and c[1].max==300\nassert c[2].min==0 and c[2].max==50\nassert c[3].value=='red'", "Use the display function to show the widgets created by interactive:", "display(w)\n\nassert True # leave this to grade the display of the widget", "Play with the sliders to change the circles parameters interactively." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
carefree0910/MachineLearning
Notebooks/numba/zh-cn/CNN.ipynb
mit
[ "import numba as nb\nimport numpy as np\n\ndef conv_kernel(x, w, rs, n, n_channels, height, width, n_filters, filter_height, filter_width, out_h, out_w):\n for i in range(n):\n for j in range(out_h):\n for p in range(out_w):\n window = x[i, ..., j:j+filter_height, p:p+filter_width]\n for q in range(n_filters):\n rs[i, q, j, p] += np.sum(w[q] * window)\n\[email protected](nopython=True)\ndef jit_conv_kernel(x, w, rs, n, n_channels, height, width, n_filters, filter_height, filter_width, out_h, out_w):\n for i in range(n):\n for j in range(out_h):\n for p in range(out_w):\n window = x[i, ..., j:j+filter_height, p:p+filter_width]\n for q in range(n_filters):\n rs[i, q, j, p] += np.sum(w[q] * window)\n\ndef conv(x, w, kernel, args):\n n, n_filters = args[0], args[4]\n out_h, out_w = args[-2:]\n rs = np.zeros([n, n_filters, out_h, out_w], dtype=np.float32)\n kernel(x, w, rs, *args)\n return rs\n\ndef cs231n_conv(x, w, args):\n n, n_channels, height, width, n_filters, filter_height, filter_width, out_h, out_w = args\n shape = (n_channels, filter_height, filter_width, n, out_h, out_w)\n strides = (height * width, width, 1, n_channels * height * width, width, 1)\n strides = x.itemsize * np.asarray(strides)\n x_cols = np.lib.stride_tricks.as_strided(x, shape=shape, strides=strides).reshape(\n n_channels * filter_height * filter_width, n * out_h * out_w)\n return w.reshape(n_filters, -1).dot(x_cols).reshape(n_filters, n, out_h, out_w).transpose(1, 0, 2, 3)\n\n# 64 个 3 x 28 x 28 的图像输入(模拟 mnist)\nx = np.random.randn(64, 3, 28, 28).astype(np.float32)\n# 16 个 5 x 5 的 kernel\nw = np.random.randn(16, x.shape[1], 5, 5).astype(np.float32)\n\nn, n_channels, height, width = x.shape\nn_filters, _, filter_height, filter_width = w.shape\nout_h = height - filter_height + 1\nout_w = width - filter_width + 1\nargs = (n, n_channels, height, width, n_filters, filter_height, filter_width, out_h, out_w)\n\nprint(np.linalg.norm((cs231n_conv(x, w, args) - conv(x, w, conv_kernel, args)).ravel()))\nprint(np.linalg.norm((cs231n_conv(x, w, args) - conv(x, w, jit_conv_kernel, args)).ravel()))\nprint(np.linalg.norm((conv(x, w, conv_kernel, args) - conv(x, w, jit_conv_kernel, args)).ravel()))\n%timeit conv(x, w, conv_kernel, args)\n%timeit conv(x, w, jit_conv_kernel, args)\n%timeit cs231n_conv(x, w, args)", "注意:这里如果使用np.allclose的话会过不了assert;事实上,仅仅是将数组的dtype从float64变成float32、精度就会下降很多,毕竟卷积涉及到的运算太多", "@nb.jit(nopython=True)\ndef jit_conv_kernel2(x, w, rs, n, n_channels, height, width, n_filters, filter_height, filter_width, out_h, out_w):\n for i in range(n):\n for j in range(out_h):\n for p in range(out_w):\n for q in range(n_filters):\n for r in range(n_channels):\n for s in range(filter_height):\n for t in range(filter_width):\n rs[i, q, j, p] += x[i, r, j+s, p+t] * w[q, r, s, t]\n \nassert np.allclose(conv(x, w, jit_conv_kernel, args), conv(x, w, jit_conv_kernel, args))\n%timeit conv(x, w, jit_conv_kernel, args)\n%timeit conv(x, w, jit_conv_kernel2, args)\n%timeit cs231n_conv(x, w, args)", "可以看到,使用jit和使用纯numpy进行编程的很大一点不同就是,不要畏惧用for;事实上一般来说,代码“长得越像 C”、速度就会越快", "def max_pool_kernel(x, rs, *args):\n n, n_channels, pool_height, pool_width, out_h, out_w = args\n for i in range(n):\n for j in range(n_channels):\n for p in range(out_h):\n for q in range(out_w):\n window = x[i, j, p:p+pool_height, q:q+pool_width]\n rs[i, j, p, q] += np.max(window)\n\[email protected](nopython=True)\ndef jit_max_pool_kernel(x, rs, *args):\n n, n_channels, pool_height, pool_width, out_h, out_w = args\n for i in range(n):\n for j in range(n_channels):\n 
for p in range(out_h):\n for q in range(out_w):\n window = x[i, j, p:p+pool_height, q:q+pool_width]\n rs[i, j, p, q] += np.max(window)\n \[email protected](nopython=True)\ndef jit_max_pool_kernel2(x, rs, *args):\n n, n_channels, pool_height, pool_width, out_h, out_w = args\n for i in range(n):\n for j in range(n_channels):\n for p in range(out_h):\n for q in range(out_w):\n _max = x[i, j, p, q]\n for r in range(pool_height):\n for s in range(pool_width):\n _tmp = x[i, j, p+r, q+s]\n if _tmp > _max:\n _max = _tmp\n rs[i, j, p, q] += _max\n\ndef max_pool(x, kernel, args):\n n, n_channels = args[:2]\n out_h, out_w = args[-2:]\n rs = np.zeros([n, n_filters, out_h, out_w], dtype=np.float32)\n kernel(x, rs, *args)\n return rs\n\npool_height, pool_width = 2, 2\nn, n_channels, height, width = x.shape\nout_h = height - pool_height + 1\nout_w = width - pool_width + 1\nargs = (n, n_channels, pool_height, pool_width, out_h, out_w)\n\nassert np.allclose(max_pool(x, max_pool_kernel, args), max_pool(x, jit_max_pool_kernel, args))\nassert np.allclose(max_pool(x, jit_max_pool_kernel, args), max_pool(x, jit_max_pool_kernel2, args))\n%timeit max_pool(x, max_pool_kernel, args)\n%timeit max_pool(x, jit_max_pool_kernel, args)\n%timeit max_pool(x, jit_max_pool_kernel2, args)" ]
[ "code", "markdown", "code", "markdown", "code" ]
amueller/advanced_training
05.3 Support Vector Machines.ipynb
bsd-2-clause
[ "%matplotlib notebook\nimport numpy as np\nimport matplotlib.pyplot as plt", "Support Vector Machines", "from sklearn.datasets import load_digits\nfrom sklearn.cross_validation import train_test_split\n\ndigits = load_digits()\nX_train, X_test, y_train, y_test = train_test_split(digits.data / 16., digits.target % 2, random_state=2)\n\nfrom sklearn.svm import LinearSVC, SVC\nlinear_svc = LinearSVC(loss=\"hinge\").fit(X_train, y_train)\nsvc = SVC(kernel=\"linear\").fit(X_train, y_train)\n\nnp.mean(linear_svc.predict(X_test) == svc.predict(X_test))", "Kernel SVMs\nPredictions in a kernel-SVM are made using the formular\n$$\n\\hat{y} = \\alpha_0 + \\alpha_1 y_1 k(\\mathbf{x^{(1)}}, \\mathbf{x}) + ... + \\alpha_n y_n k(\\mathbf{x^{(n)}}, \\mathbf{x})> 0\n$$\n$$\n0 \\leq \\alpha_i \\leq C\n$$\nRadial basis function (Gaussian) kernel:\n$$k(\\mathbf{x}, \\mathbf{x'}) = \\exp(-\\gamma ||\\mathbf{x} - \\mathbf{x'}||^2)$$", "from sklearn.metrics.pairwise import rbf_kernel\nline = np.linspace(-3, 3, 100)[:, np.newaxis]\nkernel_value = rbf_kernel([[0]], line, gamma=1)\nplt.plot(line, kernel_value.T)\n\nfrom plots import plot_svm_interactive\nplot_svm_interactive()\n\nsvc = SVC().fit(X_train, y_train)\nsvc.score(X_test, y_test)\n\nCs = [0.001, 0.01, 0.1, 1, 10, 100]\ngammas = [0.001, 0.01, 0.1, 1, 10, 100]\n\nfrom sklearn.grid_search import GridSearchCV\n\nparam_grid = {'C': Cs, 'gamma' : gammas}\ngrid_search = GridSearchCV(SVC(), param_grid, cv=5)\ngrid_search.fit(X_train, y_train)\n\ngrid_search.score(X_test, y_test)\n\n# We extract just the scores\nscores = [x[1] for x in grid_search.grid_scores_]\nscores = np.array(scores).reshape(6, 6)\n\nplt.matshow(scores)\nplt.xlabel('gamma')\nplt.ylabel('C')\nplt.colorbar()\nplt.xticks(np.arange(6), param_grid['gamma'])\nplt.yticks(np.arange(6), param_grid['C']);", "Excercise\n\nScale the data using StandardScaler before applying the SVC. How does the performance of the default parameters change?\nGrid-Search the parameters for the scaled data. How do they differ from the previous ones?" ]
[ "code", "markdown", "code", "markdown", "code", "markdown" ]
davewsmith/notebooks
temperature/FoamCoreExperiment.ipynb
mit
[ "The foam core experiment\nContinuing the project to distribute WiFi temperature probes around the house.\nIn the initial experiment, sensors were laid out haphazardly. For this round, I built a mounting harness so that the sensors would get more consistent airflow. I was also curious to see if putting a piece of foam core board between the SHT30 sensor board and the ESP8266 board would help isolate the sensor from heat radiated by the ESP8266. I was hoping the temperature readings would drop a few degrees, since a piece of foam core would be a cheap and easy fix.\nA few minutes before 19:00, I changed the setup from\n\nto use a harness built out of scrap foam core board.\n\nThe code below is explained in the InitialTemperatureValues notebook.", "%matplotlib inline\nimport matplotlib\nmatplotlib.rcParams['figure.figsize'] = (12, 5)\nimport pandas as pd\n\ndf = pd.read_csv('temps-foamcore.csv', header=None, names=['time', 'mac', 'f', 'h'], parse_dates=[0])\nper_sensor_f = df.pivot(index='time', columns='mac', values='f')\ndownsampled_f = per_sensor_f.resample('2T').mean()\ndownsampled_f.plot();", "Confirmation that the sensors are sensitive to airflow.\nThe outlier sensor (:F8) is still there. The spike at 18:20 is probably from me holding it while wondering about heat disappation. One of the WiFi drop-out issues got fixed (and another discovered).\nApplying the same guestimated correction to the outlier sensor from the first experiment...", "downsampled_f['5C:CF:7F:33:F7:F8'] += 5.0\ndownsampled_f.plot();", "During the experiment, the room thermostat claimed 84F. It sure didn't feel like 91F. So it looks like foam core board either isn't a great insulator, or (more likely) heat from the ESP8266 is working its way through the connections to the sensor board and getting to the SHT30 chip. The foam core divider did seem warmer on the ESP8266 side, but the overall probe still felt slightly warm to the touch. A few minutes with a FLIR camera would be great, but this project doesn't justify buying one, and I don't know of one nearby to borrow.\nNext Steps\nI'm puzzled about the degree to which heat from the ESP8266 is affecting the temperature sensors, so I might try moving the boards a few inches apart. That will require jumper wires, which would be a more fragile arrangement to deploy; applying a per-sensor adjustment in software would be simpler.\nWhether or not that works, I'll run this for a day to gather enough data for per-sensor adjustments. The goal is to get a handle on temperature changes within the house. That doesn't require crazy high precision. It looks like this'll be Good Enough." ]
[ "markdown", "code", "markdown", "code", "markdown" ]
GoogleCloudPlatform/vertex-ai-samples
notebooks/official/migration/UJ10 Vertex SDK Custom Scikit-Learn with pre-built training container.ipynb
apache-2.0
[ "# Copyright 2021 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Vertex AI: Vertex AI Migration: Custom Scikit-Learn model with pre-built training container\n<table align=\"left\">\n <td>\n <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/ai-platform-samples/blob/master/vertex-ai-samples/tree/master/notebooks/official/migration/UJ10%20Vertex%20SDK%20Custom%20Scikit-Learn%20with%20pre-built%20training%20container.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Colab logo\"> Run in Colab\n </a>\n </td>\n <td>\n <a href=\"https://github.com/GoogleCloudPlatform/ai-platform-samples/blob/master/vertex-ai-samples/tree/master/notebooks/official/migration/UJ10%20Vertex%20SDK%20Custom%20Scikit-Learn%20with%20pre-built%20training%20container.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\">\n View on GitHub\n </a>\n </td>\n</table>\n<br/><br/><br/>\nDataset\nThe dataset used for this tutorial is the UCI Machine Learning US Census Data (1990) dataset.The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket.\nThe dataset predicts whether a persons income will be above $50K USD.\nCosts\nThis tutorial uses billable components of Google Cloud:\n\nVertex AI\nCloud Storage\n\nLearn about Vertex AI\npricing and Cloud Storage\npricing, and use the Pricing\nCalculator\nto generate a cost estimate based on your projected usage.\nSet up your local development environment\nIf you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step.\nOtherwise, make sure your environment meets this notebook's requirements. You need the following:\n\nThe Cloud Storage SDK\nGit\nPython 3\nvirtualenv\nJupyter notebook running in a virtual environment with Python 3\n\nThe Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:\n\n\nInstall and initialize the SDK.\n\n\nInstall Python 3.\n\n\nInstall virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment.\n\n\nTo install Jupyter, run pip3 install jupyter on the command-line in a terminal shell.\n\n\nTo launch Jupyter, run jupyter notebook on the command-line in a terminal shell.\n\n\nOpen this notebook in the Jupyter Notebook Dashboard.\n\n\nInstallation\nInstall the latest version of Vertex SDK for Python.", "import os\n\n# Google Cloud Notebook\nif os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n USER_FLAG = \"--user\"\nelse:\n USER_FLAG = \"\"\n\n! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG", "Install the latest GA version of google-cloud-storage library as well.", "! pip3 install -U google-cloud-storage $USER_FLAG\n\nif os.getenv(\"IS_TESTING\"):\n ! 
pip3 install --upgrade tensorflow $USER_FLAG", "Restart the kernel\nOnce you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.", "import os\n\nif not os.getenv(\"IS_TESTING\"):\n # Automatically restart kernel after installs\n import IPython\n\n app = IPython.Application.instance()\n app.kernel.do_shutdown(True)", "Before you begin\nGPU runtime\nThis tutorial does not require a GPU runtime.\nSet up your Google Cloud project\nThe following steps are required, regardless of your notebook environment.\n\n\nSelect or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.\n\n\nMake sure that billing is enabled for your project.\n\n\nEnable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.\n\n\nIf you are running this notebook locally, you will need to install the Cloud SDK.\n\n\nEnter your project ID in the cell below. Then run the cell to make sure the\nCloud SDK uses the right project for all the commands in this notebook.\n\n\nNote: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $.", "PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}\n\nif PROJECT_ID == \"\" or PROJECT_ID is None or PROJECT_ID == \"[your-project-id]\":\n # Get your GCP project id from gcloud\n shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null\n PROJECT_ID = shell_output[0]\n print(\"Project ID:\", PROJECT_ID)\n\n! gcloud config set project $PROJECT_ID", "Region\nYou can also change the REGION variable, which is used for operations\nthroughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.\n\nAmericas: us-central1\nEurope: europe-west4\nAsia Pacific: asia-east1\n\nYou may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.\nLearn more about Vertex AI regions", "REGION = \"us-central1\" # @param {type: \"string\"}", "Timestamp\nIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.", "from datetime import datetime\n\nTIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")", "Authenticate your Google Cloud account\nIf you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step.\nIf you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.\nOtherwise, follow these steps:\nIn the Cloud Console, go to the Create service account key page.\nClick Create service account.\nIn the Service account name field, enter a name, and click Create.\nIn the Grant this service account access to project section, click the Role drop-down list. Type \"Vertex\" into the filter box, and select Vertex Administrator. Type \"Storage Object Admin\" into the filter box, and select Storage Object Admin.\nClick Create. A JSON file that contains your key downloads to your local environment.\nEnter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.", "# If you are running this notebook in Colab, run this cell and follow the\n# instructions to authenticate your GCP account. 
This provides access to your\n# Cloud Storage bucket and lets you submit training jobs and prediction\n# requests.\n\nimport os\nimport sys\n\n# If on Google Cloud Notebook, then don't execute this code\nif not os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n if \"google.colab\" in sys.modules:\n from google.colab import auth as google_auth\n\n google_auth.authenticate_user()\n\n # If you are running this notebook locally, replace the string below with the\n # path to your service account key and run this cell to authenticate your GCP\n # account.\n elif not os.getenv(\"IS_TESTING\"):\n %env GOOGLE_APPLICATION_CREDENTIALS ''", "Create a Cloud Storage bucket\nThe following steps are required, regardless of your notebook environment.\nWhen you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.\nSet the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.", "BUCKET_NAME = \"gs://[your-bucket-name]\" # @param {type:\"string\"}\n\nif BUCKET_NAME == \"\" or BUCKET_NAME is None or BUCKET_NAME == \"gs://[your-bucket-name]\":\n BUCKET_NAME = \"gs://\" + PROJECT_ID + \"aip-\" + TIMESTAMP", "Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.", "! gsutil mb -l $REGION $BUCKET_NAME", "Finally, validate access to your Cloud Storage bucket by examining its contents:", "! gsutil ls -al $BUCKET_NAME", "Set up variables\nNext, set up some variables used throughout the tutorial.\nImport libraries and define constants", "import google.cloud.aiplatform as aip", "Initialize Vertex SDK for Python\nInitialize the Vertex SDK for Python for your project and corresponding bucket.", "aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)", "Set pre-built containers\nSet the pre-built Docker container image for training and prediction.\nFor the latest list, see Pre-built containers for training.\nFor the latest list, see Pre-built containers for prediction.", "TRAIN_VERSION = \"scikit-learn-cpu.0-23\"\nDEPLOY_VERSION = \"sklearn-cpu.0-23\"\n\nTRAIN_IMAGE = \"gcr.io/cloud-aiplatform/training/{}:latest\".format(TRAIN_VERSION)\nDEPLOY_IMAGE = \"gcr.io/cloud-aiplatform/prediction/{}:latest\".format(DEPLOY_VERSION)", "Set machine type\nNext, set the machine type to use for training and prediction.\n\nSet the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for for training and prediction.\nmachine type\nn1-standard: 3.75GB of memory per vCPU.\nn1-highmem: 6.5GB of memory per vCPU\nn1-highcpu: 0.9 GB of memory per vCPU\n\n\nvCPUs: number of [2, 4, 8, 16, 32, 64, 96 ]\n\nNote: The following is not supported for training:\n\nstandard: 2 vCPUs\nhighcpu: 2, 4 and 8 vCPUs\n\nNote: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.", "if os.getenv(\"IS_TESTING_TRAIN_MACHINE\"):\n MACHINE_TYPE = os.getenv(\"IS_TESTING_TRAIN_MACHINE\")\nelse:\n MACHINE_TYPE = \"n1-standard\"\n\nVCPU = \"4\"\nTRAIN_COMPUTE = MACHINE_TYPE + \"-\" + VCPU\nprint(\"Train machine type\", TRAIN_COMPUTE)\n\nif os.getenv(\"IS_TESTING_DEPLOY_MACHINE\"):\n MACHINE_TYPE = os.getenv(\"IS_TESTING_DEPLOY_MACHINE\")\nelse:\n MACHINE_TYPE = \"n1-standard\"\n\nVCPU = \"4\"\nDEPLOY_COMPUTE = MACHINE_TYPE + \"-\" + VCPU\nprint(\"Deploy machine 
type\", DEPLOY_COMPUTE)", "Examine the training package\nPackage layout\nBefore you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.\n\nPKG-INFO\nREADME.md\nsetup.cfg\nsetup.py\ntrainer\n__init__.py\ntask.py\n\nThe files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image.\nThe file trainer/task.py is the Python script for executing the custom training job. Note, when we referred to it in the worker pool specification, we replace the directory slash with a dot (trainer.task) and dropped the file suffix (.py).\nPackage Assembly\nIn the following cells, you will assemble the training package.", "# Make folder for Python training script\n! rm -rf custom\n! mkdir custom\n\n# Add package information\n! touch custom/README.md\n\nsetup_cfg = \"[egg_info]\\n\\ntag_build =\\n\\ntag_date = 0\"\n! echo \"$setup_cfg\" > custom/setup.cfg\n\nsetup_py = \"import setuptools\\n\\nsetuptools.setup(\\n\\n install_requires=[\\n\\n 'tensorflow_datasets==1.3.0',\\n\\n ],\\n\\n packages=setuptools.find_packages())\"\n! echo \"$setup_py\" > custom/setup.py\n\npkg_info = \"Metadata-Version: 1.0\\n\\nName: US Census Data (1990) tabular binary classification\\n\\nVersion: 0.0.0\\n\\nSummary: Demostration training script\\n\\nHome-page: www.google.com\\n\\nAuthor: Google\\n\\nAuthor-email: [email protected]\\n\\nLicense: Public\\n\\nDescription: Demo\\n\\nPlatform: Vertex\"\n! echo \"$pkg_info\" > custom/PKG-INFO\n\n# Make the training subfolder\n! mkdir custom/trainer\n! touch custom/trainer/__init__.py\n\n%%writefile custom/trainer/task.py\n# Single Instance Training for Census Income\n\nfrom sklearn.ensemble import RandomForestClassifier\nimport joblib\nfrom sklearn.feature_selection import SelectKBest\nfrom sklearn.pipeline import FeatureUnion\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.preprocessing import LabelBinarizer\nimport datetime\nimport pandas as pd\n\nfrom google.cloud import storage\n\nimport numpy as np\nimport argparse\nimport os\nimport sys\n\nparser = argparse.ArgumentParser()\nparser.add_argument('--model-dir', dest='model_dir',\n default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.')\nargs = parser.parse_args()\n\nprint('Python Version = {}'.format(sys.version))\n\n# Public bucket holding the census data\nbucket = storage.Client().bucket('cloud-samples-data')\n\n# Path to the data inside the public bucket\nblob = bucket.blob('ai-platform/sklearn/census_data/adult.data')\n# Download the data\nblob.download_to_filename('adult.data')\n\n# Define the format of your input data including unused columns (These are the columns from the census data files)\nCOLUMNS = (\n 'age',\n 'workclass',\n 'fnlwgt',\n 'education',\n 'education-num',\n 'marital-status',\n 'occupation',\n 'relationship',\n 'race',\n 'sex',\n 'capital-gain',\n 'capital-loss',\n 'hours-per-week',\n 'native-country',\n 'income-level'\n)\n\n\n\n# Categorical columns are columns that need to be turned into a numerical value to be used by scikit-learn\nCATEGORICAL_COLUMNS = (\n 'workclass',\n 'education',\n 'marital-status',\n 'occupation',\n 'relationship',\n 'race',\n 'sex',\n 'native-country'\n)\n\n# Load the training census dataset\nwith open('./adult.data', 'r') as train_data:\n raw_training_data = pd.read_csv(train_data, header=None, names=COLUMNS)\n\n# Remove the column we are trying to predict ('income-level') from our 
features list\n# Convert the Dataframe to a lists of lists\ntrain_features = raw_training_data.drop('income-level', axis=1).values.tolist()\n# Create our training labels list, convert the Dataframe to a lists of lists\ntrain_labels = (raw_training_data['income-level'] == ' >50K').values.tolist()\n\n# Since the census data set has categorical features, we need to convert\n# them to numerical values. We'll use a list of pipelines to convert each\n# categorical column and then use FeatureUnion to combine them before calling\n# the RandomForestClassifier.\ncategorical_pipelines = []\n\n# Each categorical column needs to be extracted individually and converted to a numerical value.\n# To do this, each categorical column will use a pipeline that extracts one feature column via\n# SelectKBest(k=1) and a LabelBinarizer() to convert the categorical value to a numerical one.\n# A scores array (created below) will select and extract the feature column. The scores array is\n# created by iterating over the COLUMNS and checking if it is a CATEGORICAL_COLUMN.\nfor i, col in enumerate(COLUMNS[:-1]):\n if col in CATEGORICAL_COLUMNS:\n # Create a scores array to get the individual categorical column.\n # Example:\n # data = [39, 'State-gov', 77516, 'Bachelors', 13, 'Never-married', 'Adm-clerical',\n # 'Not-in-family', 'White', 'Male', 2174, 0, 40, 'United-States']\n # scores = [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n #\n # Returns: [['State-gov']]\n # Build the scores array.\n scores = [0] * len(COLUMNS[:-1])\n # This column is the categorical column we want to extract.\n scores[i] = 1\n skb = SelectKBest(k=1)\n skb.scores_ = scores\n # Convert the categorical column to a numerical value\n lbn = LabelBinarizer()\n r = skb.transform(train_features)\n lbn.fit(r)\n # Create the pipeline to extract the categorical feature\n categorical_pipelines.append(\n ('categorical-{}'.format(i), Pipeline([\n ('SKB-{}'.format(i), skb),\n ('LBN-{}'.format(i), lbn)])))\n\n# Create pipeline to extract the numerical features\nskb = SelectKBest(k=6)\n# From COLUMNS use the features that are numerical\nskb.scores_ = [1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0]\ncategorical_pipelines.append(('numerical', skb))\n\n# Combine all the features using FeatureUnion\npreprocess = FeatureUnion(categorical_pipelines)\n\n# Create the classifier\nclassifier = RandomForestClassifier()\n\n# Transform the features and fit them to the classifier\nclassifier.fit(preprocess.transform(train_features), train_labels)\n\n# Create the overall model as a single pipeline\npipeline = Pipeline([\n ('union', preprocess),\n ('classifier', classifier)\n])\n\n# Split path into bucket and subdirectory\nbucket = args.model_dir.split('/')[2]\nsubdirs = args.model_dir.split('/')[3:]\nsubdir = subdirs[0]\nsubdirs.pop(0)\nfor comp in subdirs:\n subdir = os.path.join(subdir, comp)\n\n# Write model to a local file\njoblib.dump(pipeline, 'model.joblib')\n\n# Upload the model to GCS\nbucket = storage.Client().bucket(bucket)\nblob = bucket.blob(subdir + '/model.joblib')\nblob.upload_from_filename('model.joblib')", "Store training script on your Cloud Storage bucket\nNext, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.", "! rm -f custom.tar custom.tar.gz\n! tar cvf custom.tar custom\n! gzip custom.tar\n! 
gsutil cp custom.tar.gz $BUCKET_NAME/trainer_census.tar.gz", "Train a model\ntraining.create-python-pre-built-container\nCreate and run custom training job\nTo train a custom model, you perform two steps: 1) create a custom training job, and 2) run the job.\nCreate custom training job\nA custom training job is created with the CustomTrainingJob class, with the following parameters:\n\ndisplay_name: The human readable name for the custom training job.\ncontainer_uri: The training container image.\nrequirements: Package requirements for the training container image (e.g., pandas).\nscript_path: The relative path to the training script.", "job = aip.CustomTrainingJob(\n display_name=\"census_\" + TIMESTAMP,\n script_path=\"custom/trainer/task.py\",\n container_uri=TRAIN_IMAGE,\n requirements=[\"gcsfs==0.7.1\", \"tensorflow-datasets==4.4\"],\n)\n\nprint(job)", "Example output:\n&lt;google.cloud.aiplatform.training_jobs.CustomTrainingJob object at 0x7feab1346710&gt;\n\nRun the custom training job\nNext, you run the custom job to start the training job by invoking the method run, with the following parameters:\n\nreplica_count: The number of compute instances for training (replica_count = 1 is single node training).\nmachine_type: The machine type for the compute instances.\nbase_output_dir: The Cloud Storage location to write the model artifacts to.\nsync: Whether to block until completion of the job.", "MODEL_DIR = \"{}/{}\".format(BUCKET_NAME, TIMESTAMP)\n\n\njob.run(\n replica_count=1, machine_type=TRAIN_COMPUTE, base_output_dir=MODEL_DIR, sync=True\n)\n\nMODEL_DIR = MODEL_DIR + \"/model\"\nmodel_path_to_deploy = MODEL_DIR", "general.import-model\nUpload the model\nNext, upload your model to a Model resource using Model.upload() method, with the following parameters:\n\ndisplay_name: The human readable name for the Model resource.\nartifact: The Cloud Storage location of the trained model artifacts.\nserving_container_image_uri: The serving container image.\nsync: Whether to execute the upload asynchronously or synchronously.\n\nIf the upload() method is run asynchronously, you can subsequently block until completion with the wait() method.", "model = aip.Model.upload(\n display_name=\"census_\" + TIMESTAMP,\n artifact_uri=MODEL_DIR,\n serving_container_image_uri=DEPLOY_IMAGE,\n sync=False,\n)\n\nmodel.wait()", "Example output:\nINFO:google.cloud.aiplatform.models:Creating Model\nINFO:google.cloud.aiplatform.models:Create Model backing LRO: projects/759209241365/locations/us-central1/models/925164267982815232/operations/3458372263047331840\nINFO:google.cloud.aiplatform.models:Model created. Resource name: projects/759209241365/locations/us-central1/models/925164267982815232\nINFO:google.cloud.aiplatform.models:To use this Model in another session:\nINFO:google.cloud.aiplatform.models:model = aiplatform.Model('projects/759209241365/locations/us-central1/models/925164267982815232')\n\nMake batch predictions\npredictions.batch-prediction\nMake test items\nYou will use synthetic data as a test data items. 
Don't be concerned that we are using synthetic data -- we just want to demonstrate how to make a prediction.", "INSTANCES = [\n [\n 25,\n \"Private\",\n 226802,\n \"11th\",\n 7,\n \"Never-married\",\n \"Machine-op-inspct\",\n \"Own-child\",\n \"Black\",\n \"Male\",\n 0,\n 0,\n 40,\n \"United-States\",\n ],\n [\n 38,\n \"Private\",\n 89814,\n \"HS-grad\",\n 9,\n \"Married-civ-spouse\",\n \"Farming-fishing\",\n \"Husband\",\n \"White\",\n \"Male\",\n 0,\n 0,\n 50,\n \"United-States\",\n ],\n]", "Make the batch input file\nNow make a batch input file, which you will store in your local Cloud Storage bucket. Each instance in the prediction request is a list of the form:\n [ [ content_1], [content_2] ]\n\n\ncontent: The feature values of the test item as a list.", "import json\n\nimport tensorflow as tf\n\ngcs_input_uri = BUCKET_NAME + \"/\" + \"test.jsonl\"\nwith tf.io.gfile.GFile(gcs_input_uri, \"w\") as f:\n for i in INSTANCES:\n f.write(json.dumps(i) + \"\\n\")\n\n! gsutil cat $gcs_input_uri", "Make the batch prediction request\nNow that your Model resource is trained, you can make a batch prediction by invoking the batch_predict() method, with the following parameters:\n\njob_display_name: The human readable name for the batch prediction job.\ngcs_source: A list of one or more batch request input files.\ngcs_destination_prefix: The Cloud Storage location for storing the batch prediction resuls.\ninstances_format: The format for the input instances, either 'csv' or 'jsonl'. Defaults to 'jsonl'.\npredictions_format: The format for the output predictions, either 'csv' or 'jsonl'. Defaults to 'jsonl'.\nmachine_type: The type of machine to use for training.\nsync: If set to True, the call will block while waiting for the asynchronous batch job to complete.", "MIN_NODES = 1\nMAX_NODES = 1\n\nbatch_predict_job = model.batch_predict(\n job_display_name=\"census_\" + TIMESTAMP,\n gcs_source=gcs_input_uri,\n gcs_destination_prefix=BUCKET_NAME,\n instances_format=\"jsonl\",\n predictions_format=\"jsonl\",\n model_parameters=None,\n machine_type=DEPLOY_COMPUTE,\n starting_replica_count=MIN_NODES,\n max_replica_count=MAX_NODES,\n sync=False,\n)\n\nprint(batch_predict_job)", "Example output:\nINFO:google.cloud.aiplatform.jobs:Creating BatchPredictionJob\n&lt;google.cloud.aiplatform.jobs.BatchPredictionJob object at 0x7f806a6112d0&gt; is waiting for upstream dependencies to complete.\nINFO:google.cloud.aiplatform.jobs:BatchPredictionJob created. Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/5110965452507447296\nINFO:google.cloud.aiplatform.jobs:To use this BatchPredictionJob in another session:\nINFO:google.cloud.aiplatform.jobs:bpj = aiplatform.BatchPredictionJob('projects/759209241365/locations/us-central1/batchPredictionJobs/5110965452507447296')\nINFO:google.cloud.aiplatform.jobs:View Batch Prediction Job:\nhttps://console.cloud.google.com/ai/platform/locations/us-central1/batch-predictions/5110965452507447296?project=759209241365\nINFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/5110965452507447296 current state:\nJobState.JOB_STATE_RUNNING\n\nWait for completion of batch prediction job\nNext, wait for the batch job to complete. Alternatively, one can set the parameter sync to True in the batch_predict() method to block until the batch prediction job is completed.", "batch_predict_job.wait()", "Example Output:\nINFO:google.cloud.aiplatform.jobs:BatchPredictionJob created. 
Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328\nINFO:google.cloud.aiplatform.jobs:To use this BatchPredictionJob in another session:\nINFO:google.cloud.aiplatform.jobs:bpj = aiplatform.BatchPredictionJob('projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328')\nINFO:google.cloud.aiplatform.jobs:View Batch Prediction Job:\nhttps://console.cloud.google.com/ai/platform/locations/us-central1/batch-predictions/181835033978339328?project=759209241365\nINFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:\nJobState.JOB_STATE_RUNNING\nINFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:\nJobState.JOB_STATE_RUNNING\nINFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:\nJobState.JOB_STATE_RUNNING\nINFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:\nJobState.JOB_STATE_RUNNING\nINFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:\nJobState.JOB_STATE_RUNNING\nINFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:\nJobState.JOB_STATE_RUNNING\nINFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:\nJobState.JOB_STATE_RUNNING\nINFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:\nJobState.JOB_STATE_RUNNING\nINFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:\nJobState.JOB_STATE_SUCCEEDED\nINFO:google.cloud.aiplatform.jobs:BatchPredictionJob run completed. Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328\n\nGet the predictions\nNext, get the results from the completed batch prediction job.\nThe results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method iter_outputs() to get a list of each Cloud Storage file generated with the results. 
Each file contains one or more prediction requests in a JSON format:\n\ninstance: The prediction request.\nprediction: The prediction response.", "import json\n\nbp_iter_outputs = batch_predict_job.iter_outputs()\n\nprediction_results = list()\nfor blob in bp_iter_outputs:\n if blob.name.split(\"/\")[-1].startswith(\"prediction\"):\n prediction_results.append(blob.name)\n\ntags = list()\nfor prediction_result in prediction_results:\n gfile_name = f\"gs://{bp_iter_outputs.bucket.name}/{prediction_result}\"\n with tf.io.gfile.GFile(name=gfile_name, mode=\"r\") as gfile:\n for line in gfile.readlines():\n line = json.loads(line)\n print(line)\n break", "Example Output:\n{'instance': [25, 'Private', 226802, '11th', 7, 'Never-married', 'Machine-op-inspct', 'Own-child', 'Black', 'Male', 0, 0, 40, 'United-States'], 'prediction': False}\n\nMake online predictions\npredictions.deploy-model-api\nDeploy the model\nNext, deploy your model for online prediction. To deploy the model, you invoke the deploy method, with the following parameters:\n\ndeployed_model_display_name: A human readable name for the deployed model.\ntraffic_split: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs.\nIf only one model, then specify as { \"0\": 100 }, where \"0\" refers to this model being uploaded and 100 means 100% of the traffic.\nIf there are existing models on the endpoint, for which the traffic will be split, then use model_id to specify as { \"0\": percent, model_id: percent, ... }, where model_id is the model id of an existing model to the deployed endpoint. The percents must add up to 100.\nmachine_type: The type of machine to use for training.\nstarting_replica_count: The number of compute instances to initially provision.\nmax_replica_count: The maximum number of compute instances to scale to. In this tutorial, only one instance is provisioned.", "DEPLOYED_NAME = \"census-\" + TIMESTAMP\n\nTRAFFIC_SPLIT = {\"0\": 100}\n\nMIN_NODES = 1\nMAX_NODES = 1\n\nendpoint = model.deploy(\n deployed_model_display_name=DEPLOYED_NAME,\n traffic_split=TRAFFIC_SPLIT,\n machine_type=DEPLOY_COMPUTE,\n min_replica_count=MIN_NODES,\n max_replica_count=MAX_NODES,\n)", "Example output:\nINFO:google.cloud.aiplatform.models:Creating Endpoint\nINFO:google.cloud.aiplatform.models:Create Endpoint backing LRO: projects/759209241365/locations/us-central1/endpoints/4867177336350441472/operations/4087251132693348352\nINFO:google.cloud.aiplatform.models:Endpoint created. Resource name: projects/759209241365/locations/us-central1/endpoints/4867177336350441472\nINFO:google.cloud.aiplatform.models:To use this Endpoint in another session:\nINFO:google.cloud.aiplatform.models:endpoint = aiplatform.Endpoint('projects/759209241365/locations/us-central1/endpoints/4867177336350441472')\nINFO:google.cloud.aiplatform.models:Deploying model to Endpoint : projects/759209241365/locations/us-central1/endpoints/4867177336350441472\nINFO:google.cloud.aiplatform.models:Deploy Endpoint model backing LRO: projects/759209241365/locations/us-central1/endpoints/4867177336350441472/operations/1691336130932244480\nINFO:google.cloud.aiplatform.models:Endpoint model deployed. Resource name: projects/759209241365/locations/us-central1/endpoints/4867177336350441472\n\npredictions.online-prediction-automl\nMake test item\nYou will use synthetic data as a test data item. 
Don't be concerned that we are using synthetic data -- we just want to demonstrate how to make a prediction.", "INSTANCE = [\n 25,\n \"Private\",\n 226802,\n \"11th\",\n 7,\n \"Never-married\",\n \"Machine-op-inspct\",\n \"Own-child\",\n \"Black\",\n \"Male\",\n 0,\n 0,\n 40,\n \"United-States\",\n]", "Make the prediction\nNow that your Model resource is deployed to an Endpoint resource, you can do online predictions by sending prediction requests to the Endpoint resource.\nRequest\nThe format of each instance is:\n[feature_list]\n\nSince the predict() method can take multiple items (instances), send your single test item as a list of one test item.\nResponse\nThe response from the predict() call is a Python dictionary with the following entries:\n\nids: The internal assigned unique identifiers for each prediction request.\npredictions: The predicted confidence, between 0 and 1, per class label.\ndeployed_model_id: The Vertex AI identifier for the deployed Model resource which did the predictions.", "instances_list = [INSTANCE]\n\nprediction = endpoint.predict(instances_list)\nprint(prediction)", "Example output:\nPrediction(predictions=[False], deployed_model_id='7220545636163125248', explanations=None)\n\nUndeploy the model\nWhen you are done doing predictions, you undeploy the model from the Endpoint resouce. This deprovisions all compute resources and ends billing for the deployed model.", "endpoint.undeploy_all()", "Cleaning up\nTo clean up all Google Cloud resources used in this project, you can delete the Google Cloud\nproject you used for the tutorial.\nOtherwise, you can delete the individual resources you created in this tutorial:\n\nDataset\nPipeline\nModel\nEndpoint\nAutoML Training Job\nBatch Job\nCustom Job\nHyperparameter Tuning Job\nCloud Storage Bucket", "delete_all = True\n\nif delete_all:\n # Delete the dataset using the Vertex dataset object\n try:\n if \"dataset\" in globals():\n dataset.delete()\n except Exception as e:\n print(e)\n\n # Delete the model using the Vertex model object\n try:\n if \"model\" in globals():\n model.delete()\n except Exception as e:\n print(e)\n\n # Delete the endpoint using the Vertex endpoint object\n try:\n if \"endpoint\" in globals():\n endpoint.delete()\n except Exception as e:\n print(e)\n\n # Delete the AutoML or Pipeline trainig job\n try:\n if \"dag\" in globals():\n dag.delete()\n except Exception as e:\n print(e)\n\n # Delete the custom trainig job\n try:\n if \"job\" in globals():\n job.delete()\n except Exception as e:\n print(e)\n\n # Delete the batch prediction job using the Vertex batch prediction object\n try:\n if \"batch_predict_job\" in globals():\n batch_predict_job.delete()\n except Exception as e:\n print(e)\n\n # Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object\n try:\n if \"hpt_job\" in globals():\n hpt_job.delete()\n except Exception as e:\n print(e)\n\n if \"BUCKET_NAME\" in globals():\n ! gsutil rm -r $BUCKET_NAME" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
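The batch prediction cell in the notebook above only prints the first line of the first results file before breaking out of the loop. As a minimal sketch of how one might collect every result instead, assuming the prediction_results and bp_iter_outputs variables from that notebook are still in scope and that each output line keeps the instance/prediction JSON layout shown in its example output:

```python
import json

import tensorflow as tf

# Collect every (instance, prediction) pair from all prediction-* result files.
all_predictions = []
for prediction_result in prediction_results:
    gfile_name = f"gs://{bp_iter_outputs.bucket.name}/{prediction_result}"
    with tf.io.gfile.GFile(name=gfile_name, mode="r") as gfile:
        for line in gfile.readlines():
            record = json.loads(line)
            all_predictions.append((record["instance"], record["prediction"]))

print(f"Collected {len(all_predictions)} predictions")
```

From here the pairs could, for example, be loaded into a pandas DataFrame for easier inspection.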
phoebe-project/phoebe2-docs
2.2/tutorials/optimizing.ipynb
gpl-3.0
[ "Advanced: Optimizing Performance with PHOEBE\nSetup\nLet's first make sure we have the latest version of PHOEBE 2.2 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).", "!pip install -I \"phoebe>=2.2,<2.3\"\n\nimport phoebe\n\nb = phoebe.default_binary()", "Interactivity Options\nWhen running in an interactive Python session, PHOEBE updates all constraints and runs various checks after each command. Although this is convenient, it does take some time, and it can sometimes be advantageous to disable this to save computation time.\nInteractive Checks\nBy default, interactive checks are enabled when PHOEBE is being run in an interactive session (either an interactive python, IPython, or Jupyter notebook session), but disabled when PHOEBE is run as a script directly from the console. When enabled, PHOEBE will re-run the system checks after every single change to the bundle, raising warnings via the logger as soon as they occur.\nThis default behavior can be changed via phoebe.interactive_checks_on() or phoebe.interactive_checks_off(). The current value can be accessed via phoebe.conf.interactive_checks.", "print(phoebe.conf.interactive_checks)\n\nphoebe.interactive_checks_off()\n\nprint(phoebe.conf.interactive_checks)", "If disabled, you can always manually run the checks via b.run_checks().", "print(b.run_checks())\n\nb.set_value('requiv', component='primary', value=50)\n\nprint(b.run_checks())", "Interactive Constraints\nBy default, interactive constraints are always enabled in PHOEBE, unless explicitly disabled. Whenever a value is changed in the bundle that affects the value of a constrained value, that constraint is immediately executed and all applicable values updated. The ensures that all constrained values are \"up-to-date\".\nIf disabled, constraints are delayed and only executed when needed by PHOEBE (when calling run_compute, for example). This can save significant time, as each value that needs updating only needs to have its constraint executed once, instead of multiple times.\nThis default behavior can be changed via phoebe.interactive_constraints_on() or phoebe.interactive_constraints_off(). The current value can be accessed via phoebe.conf.interactive_constraints.\nLet's first look at the default behavior with interactive constraints on.", "print(phoebe.conf.interactive_constraints)\n\nprint(b.filter('mass', component='primary'))\n\nb.set_value('sma@binary', 10)\n\nprint(b.filter('mass', component='primary'))", "Note that the mass has already updated, according to the constraint, when the value of the semi-major axes was changed. If we disable interactive constraints this will not be the case.", "phoebe.interactive_constraints_off()\n\nprint(phoebe.conf.interactive_constraints)\n\nprint(b.filter('mass', component='primary'))\n\nb.set_value('sma@binary', 15)\n\nprint(b.filter('mass', component='primary'))", "No need to worry though - all constraints will be run automatically before passing to the backend. If you need to access the value of a constrained parameter, you can explicitly ask for all delayed constraints to be executed via b.run_delayed_constraints().", "b.run_delayed_constraints()\n\nprint(b.filter('mass', component='primary'))\n\nphoebe.reset_settings()", "Filtering Options\ncheck_visible\nBy default, everytime you call filter or set_value, PHOEBE checks to see if the current value is visible (meaning it is relevant given the value of other parameters). 
Although not terribly expensive, these checks can add up... so disabling these checks can save time. Note that these are automatically temporarily disabled during run_compute. If disabling these checks, be aware that changing the value of some parameters may have no affect on the resulting computations. You can always manually check the visibility/relevance of a parameter by calling parameter.is_visible.\nThis default behavior can be changed via phoebe.check_visible_on() or phoebe.check_visible_off().\nLet's first look at the default behavior with check_visible on.", "b.add_dataset('lc')\n\nprint(b.get_dataset())", "Now if we disable check_visible, we'll see the same thing as if we passed check_visible=False to any filter call.", "phoebe.check_visible_off()\n\nprint(b.get_dataset())", "Now the same filter is returning additional parameters. For example, ld_coeffs_source parameters were initially hidden because ld_mode is set to 'interp'. We can see the rules that are being followed:", "print(b.get_parameter(qualifier='ld_coeffs_source', component='primary').visible_if)", "and can still manually check to see that it shouldn't be visible (isn't currently relevant given the value of ld_func):", "print(b.get_parameter(qualifier='ld_coeffs_source', component='primary').is_visible)\n\nphoebe.reset_settings()", "check_default\nSimilarly, PHOEBE automatically excludes any parameter which is tagged with a '_default' tag. These parameters exist to provide default values when a new component or dataset are added to the bundle, but can usually be ignored, and so are excluded from any filter calls. Although not at all expensive, this too can be disabled at the settings level or by passing check_default=False to any filter call. \nThis default behavior can be changed via phoebe.check_default_on() or phoebe.check_default_off().", "print(b.get_dataset())\n\nprint(b.get_dataset(check_default=False))\n\nphoebe.check_default_off()\n\nprint(b.get_dataset())\n\nphoebe.reset_settings()", "Passband Options\nPHOEBE automatically fetches necessary tables from tables.phoebe-project.org. By default, only the necessary tables for each passband are fetched (except when calling download_passband manually) and the fits files are fetched uncompressed.\nFor more details, see the API docs on download_passband and update_passband as well as the passband updating tutorial.\nThe default values mentioned in the API docs for content and gzipped can be exposed via phoebe.get_download_passband_defaults and changed via phoebe.set_download_passband_defaults. Note that setting gzipped to True will minimize file storage for the passband files and will result in faster download speeds, but take significantly longer to load by PHOEBE as they have to be uncompressed each time they are loaded. If you have a large number of installed passbands, this could significantly slow importing PHOEBE.", "phoebe.get_download_passband_defaults()", "Environment Variables\nSome settings cannot be changed after importing PHOEBE, so they are available via environment variables. 
These can be set in a variety of ways:\nSetting it inline before calling python will set it for that single session of PHOEBE:\nPHOEBE_ENABLE_PLOTTING=FALSE python [script.py]\nSetting it via the os package in python before importing PHOEBE allows you to apply the setting every time you run a given script:\npy\nimport os\nos.environ['PHOEBE_ENABLE_PLOTTING'] = 'FALSE'\nimport phoebe\nNote that for all boolean settings, the string is converted to uppercase and compared to 'TRUE'.\nPHOEBE_ENABLE_PLOTTING\nPHOEBE_ENABLE_PLOTTING (TRUE by default) allows for disabling plotting within PHOEBE and therefore skipping the import of all plotting libraries (which take up a significant amount of the time it takes to import phoebe).\nPHOEBE_ENABLE_ONLINE_PASSBANDS\nPHOEBE_ENABLE_ONLINE_PASSBANDS (TRUE by default) dictates whether online passbands are queried and available for on-the-fly downloading. If you are sure you have all the local passbands you need, set this to False to save some time.\nPHOEBE_DOWNLOAD_PASSBAND_DEFAULTS_CONTENT\nPHOEBE_DOWNLOAD_PASSBAND_DEFAULTS_CONTENT ('all' by default; use a comma-separated list for multiple tables: 'ck2004,phoenix') allows setting the value for content in phoebe.set_download_passband_defaults. For more details, see the section above.\nPHOEBE_DOWNLOAD_PASSBAND_DEFAULTS_GZIPPED\nPHOEBE_DOWNLOAD_PASSBAND_DEFAULTS_GZIPPED (FALSE by default) allows setting the value for gzipped in phoebe.set_download_passband_defaults. For more details, see the section above." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
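As a rough illustration of how the switches described in the PHOEBE notebook above might be combined in a non-interactive script (defer checks and constraints while many parameters change, then run them once at the end), the sketch below uses only functions named in that notebook; it is an example pattern, not an official recipe.

```python
import phoebe

# Defer the per-call bookkeeping while many parameters are being edited.
phoebe.interactive_checks_off()
phoebe.interactive_constraints_off()

b = phoebe.default_binary()
for sma in (8.0, 10.0, 12.0, 15.0):
    b.set_value('sma@binary', sma)  # constrained parameters are not updated yet

# Execute all delayed constraints once, then run the system checks manually.
b.run_delayed_constraints()
print(b.run_checks())

phoebe.reset_settings()
```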
shikhar413/openmc
examples/jupyter/search.ipynb
mit
[ "Criticality Search\nThis notebook illustrates the usage of the OpenMC Python API's generic eigenvalue search capability. In this Notebook, we will do a critical boron concentration search of a typical PWR pin cell.\nTo use the search functionality, we must create a function which creates our model according to the input parameter we wish to search for (in this case, the boron concentration). \nThis notebook will first create that function, and then, run the search.", "# Initialize third-party libraries and the OpenMC Python API\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nimport openmc\nimport openmc.model\n\n%matplotlib inline", "Create Parametrized Model\nTo perform the search we will use the openmc.search_for_keff function. This function requires a different function be defined which creates an parametrized model to analyze. This model is required to be stored in an openmc.model.Model object. The first parameter of this function will be modified during the search process for our critical eigenvalue.\nOur model will be a pin-cell from the Multi-Group Mode Part II assembly, except this time the entire model building process will be contained within a function, and the Boron concentration will be parametrized.", "# Create the model. `ppm_Boron` will be the parametric variable.\n\ndef build_model(ppm_Boron):\n \n # Create the pin materials\n fuel = openmc.Material(name='1.6% Fuel')\n fuel.set_density('g/cm3', 10.31341)\n fuel.add_element('U', 1., enrichment=1.6)\n fuel.add_element('O', 2.)\n\n zircaloy = openmc.Material(name='Zircaloy')\n zircaloy.set_density('g/cm3', 6.55)\n zircaloy.add_element('Zr', 1.)\n\n water = openmc.Material(name='Borated Water')\n water.set_density('g/cm3', 0.741)\n water.add_element('H', 2.)\n water.add_element('O', 1.)\n\n # Include the amount of boron in the water based on the ppm,\n # neglecting the other constituents of boric acid\n water.add_element('B', ppm_Boron * 1e-6)\n \n # Instantiate a Materials object\n materials = openmc.Materials([fuel, zircaloy, water])\n \n # Create cylinders for the fuel and clad\n fuel_outer_radius = openmc.ZCylinder(r=0.39218)\n clad_outer_radius = openmc.ZCylinder(r=0.45720)\n\n # Create boundary planes to surround the geometry\n min_x = openmc.XPlane(x0=-0.63, boundary_type='reflective')\n max_x = openmc.XPlane(x0=+0.63, boundary_type='reflective')\n min_y = openmc.YPlane(y0=-0.63, boundary_type='reflective')\n max_y = openmc.YPlane(y0=+0.63, boundary_type='reflective')\n\n # Create fuel Cell\n fuel_cell = openmc.Cell(name='1.6% Fuel')\n fuel_cell.fill = fuel\n fuel_cell.region = -fuel_outer_radius\n\n # Create a clad Cell\n clad_cell = openmc.Cell(name='1.6% Clad')\n clad_cell.fill = zircaloy\n clad_cell.region = +fuel_outer_radius & -clad_outer_radius\n\n # Create a moderator Cell\n moderator_cell = openmc.Cell(name='1.6% Moderator')\n moderator_cell.fill = water\n moderator_cell.region = +clad_outer_radius & (+min_x & -max_x & +min_y & -max_y)\n\n # Create root Universe\n root_universe = openmc.Universe(name='root universe')\n root_universe.add_cells([fuel_cell, clad_cell, moderator_cell])\n\n # Create Geometry and set root universe\n geometry = openmc.Geometry(root_universe)\n \n # Instantiate a Settings object\n settings = openmc.Settings()\n \n # Set simulation parameters\n settings.batches = 300\n settings.inactive = 20\n settings.particles = 1000\n \n # Create an initial uniform spatial source distribution over fissionable zones\n bounds = [-0.63, -0.63, -10, 0.63, 0.63, 10.]\n uniform_dist = 
openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)\n    settings.source = openmc.source.Source(space=uniform_dist)\n\n    # We don't need a tallies file, so don't waste the disk input/output time\n    settings.output = {'tallies': False}\n\n    model = openmc.model.Model(geometry, materials, settings)\n\n    return model", "Search for the Critical Boron Concentration\nTo perform the search we simply call the openmc.search_for_keff function and pass in the relevant arguments. For our purposes we will be passing in the model building function (build_model defined above), a bracketed range for the expected critical boron concentration (1,000 to 2,500 ppm), the tolerance, and the method we wish to use.\nInstead of the bracketed range we could have used a single initial guess, but have elected not to in this example. Finally, due to the high noise inherent in using as few histories as are used in this example, our tolerance on the final keff value will be rather large (1.e-2) and the default 'bisection' method will be used for the search.", "# Perform the search\ncrit_ppm, guesses, keffs = openmc.search_for_keff(build_model, bracket=[1000., 2500.],\n                                                  tol=1e-2, print_iterations=True)\n\nprint('Critical Boron Concentration: {:4.0f} ppm'.format(crit_ppm))", "Finally, the openmc.search_for_keff function also provided us with lists of the guesses and corresponding keff values generated during the search process with OpenMC. Let's use that information to make a quick plot of the value of keff versus the boron concentration.", "plt.figure(figsize=(8, 4.5))\nplt.title('Eigenvalue versus Boron Concentration')\n# Create a scatter plot using the mean value of keff\nplt.scatter(guesses, [keffs[i].nominal_value for i in range(len(keffs))])\nplt.xlabel('Boron Concentration [ppm]')\nplt.ylabel('Eigenvalue')\nplt.show()", "We see a nearly linear reactivity coefficient for the boron concentration, exactly as one would expect for a pure 1/v absorber at small concentrations." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
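Given the nearly linear keff-versus-boron trend noted at the end of the notebook above, one cheap sanity check on the search result is to interpolate where keff crosses 1.0 from the (guesses, keffs) pairs it returns. The sketch below assumes, as in the notebook's own plotting cell, that each keff carries an uncertainty and exposes .nominal_value; it is a cross-check, not a replacement for the bisection search.

```python
import numpy as np

# keff decreases as boron is added, so sort by keff before interpolating.
keff_values = np.array([keff.nominal_value for keff in keffs])
ppm_values = np.array(guesses)
order = np.argsort(keff_values)

ppm_at_unity = np.interp(1.0, keff_values[order], ppm_values[order])
print('Interpolated critical concentration: {:4.0f} ppm'.format(ppm_at_unity))
```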
arnoldlu/lisa
ipynb/examples/energy_meter/EnergyMeter_AEP.ipynb
apache-2.0
[ "Energy Meter Examples\nARM Energy Probe\nNOTE: caiman is required to collect data from the probe. Instructions on how to install it can be found here https://github.com/ARM-software/lisa/wiki/Energy-Meters-Requirements#arm-energy-probe-aep.", "import logging\nfrom conf import LisaLogging\nLisaLogging.setup()", "Import required modules", "# Generate plots inline\n%matplotlib inline\n\nimport os\n\n# Support to access the remote target\nimport devlib\nfrom env import TestEnv\n\n# RTApp configurator for generation of PERIODIC tasks\nfrom wlgen import RTA, Ramp", "Target Configuration\nThe target configuration is used to describe and configure your test environment.\nYou can find more details in examples/utils/testenv_example.ipynb.", "# Setup target configuration\nmy_conf = {\n\n # Target platform and board\n \"platform\" : 'linux',\n \"board\" : 'juno',\n \"host\" : '192.168.0.1',\n\n # Folder where all the results will be collected\n \"results_dir\" : \"EnergyMeter_AEP\",\n\n # Define devlib modules to load\n \"modules\" : [\"cpufreq\"], # Required by rt-app calibration\n \"exclude_modules\" : [ 'hwmon' ],\n\n # Energy Meters Configuration for ARM Energy Probe\n \"emeter\" : {\n \"instrument\" : \"aep\",\n \"conf\" : {\n # Value of the shunt resistor in Ohm\n 'resistor_values' : [0.099],\n # Device entry assigned to the probe on the host\n 'device_entry' : '/dev/ttyACM0',\n },\n 'channel_map' : {\n 'BAT' : 'BAT'\n }\n },\n \n # Tools required by the experiments\n \"tools\" : [ 'trace-cmd', 'rt-app' ],\n \n # Comment this line to calibrate RTApp in your own platform\n # \"rtapp-calib\" : {\"0\": 360, \"1\": 142, \"2\": 138, \"3\": 352, \"4\": 352, \"5\": 353},\n}\n\n# Initialize a test environment using:\nte = TestEnv(my_conf, wipe=False, force_new=True)\ntarget = te.target", "Workload Execution and Power Consumptions Samping\nDetailed information on RTApp can be found in examples/wlgen/rtapp_example.ipynb.\nEach EnergyMeter derived class has two main methods: reset and report.\n - The reset method will reset the energy meter and start sampling from channels specified in the target configuration. <br>\n - The report method will stop capture and will retrieve the energy consumption data. This returns an EnergyReport composed of the measured channels energy and the report file. Each of the samples can also be obtained, as you can see below.", "# Create and RTApp RAMP task\nrtapp = RTA(te.target, 'ramp', calibration=te.calibration())\nrtapp.conf(kind='profile',\n params={\n 'ramp' : Ramp(\n start_pct = 60,\n end_pct = 20,\n delta_pct = 5,\n time_s = 0.5).get()\n })\n\n# EnergyMeter Start\nte.emeter.reset()\n\nrtapp.run(out_dir=te.res_dir)\n\n# EnergyMeter Stop and samples collection\nnrg_report = te.emeter.report(te.res_dir)\n\nlogging.info(\"Collected data:\")\n!tree $te.res_dir", "Power Measurements Data", "logging.info(\"Measured channels energy:\")\nlogging.info(\"%s\", nrg_report.channels)\n\nlogging.info(\"Generated energy file:\")\nlogging.info(\" %s\", nrg_report.report_file)\n!cat $nrg_report.report_file\n\nlogging.info(\"Samples collected for the BAT channel (only first 10)\")\nsamples_file = os.path.join(te.res_dir, 'samples.csv')\n!head $samples_file" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
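The energy meter notebook above only shows the head of samples.csv. If a quick look at the collected samples were wanted, a pandas sketch along these lines could work; it assumes a plain comma-separated file with a header row, and the actual column names will depend on the configured channel_map.

```python
import pandas as pd

# samples_file is defined in the notebook above (<res_dir>/samples.csv)
samples = pd.read_csv(samples_file)

print(samples.describe())
samples.select_dtypes('number').plot(subplots=True, figsize=(16, 8))
```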
tensorflow/graphics
tensorflow_graphics/projects/radiance_fields/TFG_tiny_nerf.ipynb
apache-2.0
[ "Copyright 2021 Google LLC.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/graphics/blob/master/tensorflow_graphics/projects/radiance_fields/tiny_nerf.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics/projects/radiance_fields/tiny_nerf.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n</table>\n\nSetup and imports", "%pip install tensorflow_graphics\n\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\nimport tensorflow.keras.layers as layers\n\nimport tensorflow_graphics.projects.radiance_fields.data_loaders as data_loaders\nimport tensorflow_graphics.projects.radiance_fields.utils as utils\nimport tensorflow_graphics.rendering.camera.perspective as perspective\nimport tensorflow_graphics.geometry.representation.ray as ray\nimport tensorflow_graphics.math.feature_representation as feature_rep\nimport tensorflow_graphics.rendering.volumetric.ray_radiance as ray_radiance", "Please download the data from the original repository. In this tutorial we experimented with the synthetic data (lego, ship, boat, etc) that can be found here. 
Then, you can either point to them locally (if you run a custom kernel) or upload them to the google colab.", "DATASET_DIR = '/content/nerf_synthetic/'\n\n#@title Parameters\n\nbatch_size = 10 #@param {type:\"integer\"}\nn_posenc_freq = 6 #@param {type:\"integer\"}\nlearning_rate = 0.0005 #@param {type:\"number\"}\nn_filters = 256 #@param {type:\"integer\"}\n\n\nnum_epochs = 100 #@param {type:\"integer\"}\nn_rays = 512 #@param {type:\"integer\"}\nnear = 2.0 #@param {type:\"number\"}\nfar = 6.0 #@param {type:\"number\"}\nray_steps = 64 #@param {type:\"integer\"}", "Training a NeRF network", "#@title Load the lego dataset { form-width: \"350px\" }\n\ndataset, height, width = data_loaders.load_synthetic_nerf_dataset(\n dataset_dir=DATASET_DIR,\n dataset_name='lego',\n split='train',\n scale=0.125,\n batch_size=batch_size)\n\n#@title Prepare the NeRF model and optimizer { form-width: \"350px\" }\n\ninput_dim = n_posenc_freq * 2 * 3 + 3\n\n\ndef get_model():\n \"\"\"Tiny NeRF network.\"\"\"\n with tf.name_scope(\"Network/\"):\n input_features = layers.Input(shape=[input_dim])\n fc0 = layers.Dense(n_filters, activation=layers.ReLU())(input_features)\n fc1 = layers.Dense(n_filters, activation=layers.ReLU())(fc0)\n fc2 = layers.Dense(n_filters, activation=layers.ReLU())(fc1)\n fc3 = layers.Dense(n_filters, activation=layers.ReLU())(fc2)\n fc4 = layers.Dense(n_filters, activation=layers.ReLU())(fc3)\n fc4 = layers.concatenate([fc4, input_features], -1)\n fc5 = layers.Dense(n_filters, activation=layers.ReLU())(fc4)\n fc6 = layers.Dense(n_filters, activation=layers.ReLU())(fc5)\n fc7 = layers.Dense(n_filters, activation=layers.ReLU())(fc6)\n rgba = layers.Dense(4)(fc7)\n return tf.keras.Model(inputs=[input_features], outputs=[rgba])\n\n\nmodel = get_model()\noptimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)\n\n# @title Set up the training procedure { form-width: \"350px\" }\n\[email protected]\ndef network_inference_and_rendering(ray_points, model):\n \"\"\"Render the 3D ray points into rgb pixels.\n\n Args:\n ray_points: A tensor of shape `[A, B, C, 3]` where A is the batch size,\n B is the number of rays, C is the number of samples per ray.\n model: the NeRF model to run\n\n Returns:\n Two tensors of size `[A, B, 3]`.\n \"\"\"\n features_xyz = feature_rep.positional_encoding(ray_points, n_posenc_freq)\n features_xyz = tf.reshape(features_xyz, [-1, tf.shape(features_xyz)[-1]])\n rgba = model([features_xyz])\n target_shape = tf.concat([tf.shape(ray_points)[:-1], [4]], axis=-1)\n rgba = tf.reshape(rgba, target_shape)\n rgb, alpha = tf.split(rgba, [3, 1], axis=-1)\n rgb = tf.sigmoid(rgb)\n alpha = tf.nn.relu(alpha)\n rgba = tf.concat([rgb, alpha], axis=-1)\n dists = utils.get_distances_between_points(ray_points)\n rgb_render, _, _ = ray_radiance.compute_radiance(rgba, dists)\n return rgb_render\n\n\[email protected]\ndef train_step(ray_origin, ray_direction, gt_rgb):\n \"\"\"Training function for coarse and fine networks.\n\n Args:\n ray_origin: A tensor of shape `[A, B, 3]` where A is the batch size,\n B is the number of rays.\n ray_direction: A tensor of shape `[A, B, 3]` where A is the batch size,\n B is the number of rays.\n gt_rgb: A tensor of shape `[A, B, 3]` where A is the batch size,\n B is the number of rays.\n\n Returns:\n A scalar.\n \"\"\"\n with tf.GradientTape() as tape:\n ray_points, _ = ray.sample_1d(\n ray_origin,\n ray_direction,\n near=near,\n far=far,\n n_samples=ray_steps,\n strategy='stratified')\n\n rgb = network_inference_and_rendering(ray_points, model)\n 
total_loss = utils.l2_loss(rgb, gt_rgb)\n gradients = tape.gradient(total_loss, model.trainable_variables)\n optimizer.apply_gradients(zip(gradients, model.trainable_variables))\n return total_loss\n\nfor epoch in range(0, num_epochs):\n epoch_loss = 0.0\n for image, focal, principal_point, transform_matrix in dataset:\n # Prepare the rays\n random_rays, random_pixels_xy = perspective.random_rays(focal,\n principal_point,\n height,\n width,\n n_rays)\n # TF-Graphics camera rays to NeRF world rays\n random_rays = utils.change_coordinate_system(random_rays,\n (0., 0., 0.),\n (1., -1., -1.))\n rays_org, rays_dir = utils.camera_rays_from_transformation_matrix(\n random_rays,\n transform_matrix)\n random_pixels_yx = tf.reverse(random_pixels_xy, axis=[-1])\n pixels = tf.gather_nd(image, random_pixels_yx, batch_dims=1)\n pixels_rgb, _ = tf.split(pixels, [3, 1], axis=-1)\n dist_loss = train_step(rays_org, rays_dir, pixels_rgb)\n epoch_loss += dist_loss\n print('Epoch {0} loss: {1:.3f}'.format(epoch, epoch_loss))", "Testing", "# @title Load the test data\n\ntest_dataset, height, width = data_loaders.load_synthetic_nerf_dataset(\n dataset_dir=DATASET_DIR,\n dataset_name='lego',\n split='val',\n scale=0.125,\n batch_size=1,\n shuffle=False)\n\nfor testimg, focal, principal_point, transform_matrix in test_dataset.take(1):\n testimg = testimg[0, :, :, :3]\n\n img_rays, _ = perspective.random_patches(\n focal,\n principal_point,\n height,\n width,\n patch_height=height,\n patch_width=width,\n scale=1.0)\n\n # Break the test image into lines, so we don't run out of memory\n batch_rays = tf.split(img_rays, height, axis=1)\n output = []\n for random_rays in batch_rays:\n random_rays = utils.change_coordinate_system(random_rays,\n (0., 0., 0.),\n (1., -1., -1.))\n rays_org, rays_dir = utils.camera_rays_from_transformation_matrix(\n random_rays,\n transform_matrix)\n ray_points, _ = ray.sample_1d(\n rays_org,\n rays_dir,\n near=near,\n far=far,\n n_samples=ray_steps,\n strategy='stratified')\n rgb = network_inference_and_rendering(ray_points, model)\n output.append(rgb)\n final_image = tf.concat(output, axis=0)\n\n fig, ax = plt.subplots(1, 2)\n ax[0].imshow(final_image)\n ax[1].imshow(testimg)\n plt.show()\n loss = tf.reduce_mean(tf.square(final_image - testimg))\n psnr = -10. * tf.math.log(loss) / tf.math.log(10.)\n print(psnr.numpy())" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
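The tiny NeRF notebook above sizes its network input as n_posenc_freq * 2 * 3 + 3, i.e. sin and cos features at n_posenc_freq frequencies for each of the three coordinates plus the raw coordinates themselves. The toy NumPy encoding below is only meant to verify that dimensionality; the exact frequency and scaling convention used by tensorflow_graphics' positional_encoding may differ.

```python
import numpy as np


def toy_positional_encoding(xyz, n_freq):
    """Illustrative encoding: raw coordinates plus sin/cos at n_freq octaves each."""
    features = [xyz]
    for i in range(n_freq):
        features.append(np.sin(2.0 ** i * xyz))
        features.append(np.cos(2.0 ** i * xyz))
    return np.concatenate(features, axis=-1)


n_posenc_freq = 6
point = np.zeros((1, 3))
encoded = toy_positional_encoding(point, n_posenc_freq)

# Both print 39, matching the notebook's input_dim = n_posenc_freq * 2 * 3 + 3
print(encoded.shape[-1], n_posenc_freq * 2 * 3 + 3)
```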
bashtage/statsmodels
examples/notebooks/mstl_decomposition.ipynb
bsd-3-clause
[ "Multiple Seasonal-Trend decomposition using LOESS (MSTL)\nThis notebook illustrates the use of MSTL [1] to decompose a time series into a: trend component, multiple seasonal components, and a residual component. MSTL uses STL (Seasonal-Trend decomposition using LOESS) to iteratively extract seasonal components from a time series. The key inputs into MSTL are:\n\nperiods - The period of each seasonal component (e.g., for hourly data with daily and weekly seasonality we would have: periods=(24, 24*7).\nwindows - The lengths of each seasonal smoother with respect to each period. If these are large then the seasonal component will show less variability over time. Must be odd. If None a set of default values determined by experiments in the original paper [1] are used.\nlmbda - The lambda parameter for a Box-Cox transformation prior to decomposition. If None then no transformation is done. If \"auto\" then an appropriate value for lambda is automatically selected from the data.\niterate - Number of iterations to use to refine the seasonal component.\nstl_kwargs - All the other parameters which can be passed to STL (e.g., robust, seasonal_deg, etc.). See STL docs.\n\n[1] K. Bandura, R.J. Hyndman, and C. Bergmeir (2021)\n MSTL: A Seasonal-Trend Decomposition Algorithm for Time Series with Multiple\n Seasonal Patterns. arXiv preprint arXiv:2107.13462.\nNote there are some key differences in this implementation to 1. Missing data must be handled outside of the MSTL class. The algorithm proposed in the paper handles a case when there is no seasonality. This implementation assumes that there is at least one seasonal component.\nFirst we import the required packages, prepare the graphics environment, and prepare the data.", "import matplotlib.pyplot as plt\nimport datetime\nimport pandas as pd\nimport numpy as np\nimport seaborn as sns\nfrom pandas.plotting import register_matplotlib_converters\n\nfrom statsmodels.tsa.seasonal import MSTL\nfrom statsmodels.tsa.seasonal import DecomposeResult\n\nregister_matplotlib_converters()\nsns.set_style(\"darkgrid\")\n\nplt.rc(\"figure\", figsize=(16, 12))\nplt.rc(\"font\", size=13)", "MSTL applied to a toy dataset\nCreate a toy dataset with multiple seasonalities\nWe create a time series with hourly frequency that has a daily and weekly seasonality which follow a sine wave. We demonstrate a more real world example later in the notebook.", "t = np.arange(1, 1000)\ndaily_seasonality = 5 * np.sin(2 * np.pi * t / 24)\nweekly_seasonality = 10 * np.sin(2 * np.pi * t / (24 * 7))\ntrend = 0.0001 * t**2\ny = trend + daily_seasonality + weekly_seasonality + np.random.randn(len(t))\nts = pd.date_range(start=\"2020-01-01\", freq=\"H\", periods=len(t))\ndf = pd.DataFrame(data=y, index=ts, columns=[\"y\"])\n\ndf.head()", "Let's plot the time series", "df[\"y\"].plot(figsize=[10, 5])", "Decompose the toy dataset with MSTL\nLet's use MSTL to decompose the time series into a trend component, daily and weekly seasonal component, and residual component.", "mstl = MSTL(df[\"y\"], periods=[24, 24 * 7])\nres = mstl.fit()", "If the input is a pandas dataframe then the output for the seasonal component is a dataframe. 
The period for each component is reflect in the column names.", "res.seasonal.head()\n\nax = res.plot()", "We see that the hourly and weekly seasonal components have been extracted.\nAny of the STL parameters other than period and seasonal (as they are set by periods and windows in MSTL) can also be set by passing arg:value pairs as a dictionary to stl_kwargs (we will show that in an example now).\nHere we show that we can still set the trend smoother of STL via trend and order of the polynomial for the seasonal fit via seasonal_deg. We will also explicitly set the windows, seasonal_deg, and iterate parameter explicitly. We will get a worse fit but this is just an example of how to pass these parameters to the MSTL class.", "mstl = MSTL(\n df,\n periods=[24, 24 * 7], # The periods and windows must be the same length and will correspond to one another.\n windows=[101, 101], # Setting this large along with `seasonal_deg=0` will force the seasonality to be periodic.\n iterate=3,\n stl_kwargs={\n \"trend\":1001, # Setting this large will force the trend to be smoother.\n \"seasonal_deg\":0, # Means the seasonal smoother is fit with a moving average.\n }\n)\nres = mstl.fit()\nax = res.plot()", "MSTL applied to electricity demand dataset\nPrepare the data\nWe will use the Victoria electricity demand dataset found here: \nhttps://github.com/tidyverts/tsibbledata/tree/master/data-raw/vic_elec. This dataset is used in the original MSTL paper [1]. It is the total electricity demand at a half hourly granularity for the state of Victora in Australia from 2002 to the start of 2015. A more detailed description of the dataset can be found here.", "url = \"https://raw.githubusercontent.com/tidyverts/tsibbledata/master/data-raw/vic_elec/VIC2015/demand.csv\"\ndf = pd.read_csv(url)\n\ndf.head()", "The date are integers representing the number of days from an origin date. The origin date for this dataset is determined from here and here and is \"1899-12-30\". The Period integers refer to 30 minute intervals in a 24 hour day, hence there are 48 for each day.\nLet's extract the date and date-time.", "df[\"Date\"] = df[\"Date\"].apply(lambda x: pd.Timestamp(\"1899-12-30\") + pd.Timedelta(x, unit=\"days\"))\ndf[\"ds\"] = df[\"Date\"] + pd.to_timedelta((df[\"Period\"]-1)*30, unit=\"m\")", "We will be interested in OperationalLessIndustrial which is the electricity demand excluding the demand from certain high energy industrial users. We will resample the data to hourly and filter the data to the same time period as original MSTL paper [1] which is the first 149 days of the year 2012.", "timeseries = df[[\"ds\", \"OperationalLessIndustrial\"]]\ntimeseries.columns = [\"ds\", \"y\"] # Rename to OperationalLessIndustrial to y for simplicity.\n\n# Filter for first 149 days of 2012.\nstart_date = pd.to_datetime(\"2012-01-01\")\nend_date = start_date + pd.Timedelta(\"149D\")\nmask = (timeseries[\"ds\"] >= start_date) & (timeseries[\"ds\"] < end_date)\ntimeseries = timeseries[mask]\n\n# Resample to hourly\ntimeseries = timeseries.set_index(\"ds\").resample(\"H\").sum()\ntimeseries.head()", "Decompose electricity demand using MSTL\nLet's apply MSTL to this dataset.\nNote: stl_kwargs are set to give results close to [1] which used R and therefore has a slightly different default settings for the underlying STL parameters. 
It would be rare to manually set inner_iter and outer_iter explicitly in practice.", "mstl = MSTL(timeseries[\"y\"], periods=[24, 24 * 7], iterate=3, stl_kwargs={\"seasonal_deg\": 0,\n \"inner_iter\": 2,\n \"outer_iter\": 0})\nres = mstl.fit() # Use .fit() to perform and return the decomposition\nax = res.plot()\nplt.tight_layout()", "The multiple seasonal components are stored as a pandas dataframe in the seasonal attribute:", "res.seasonal.head()", "Let's inspect the seasonal components in a bit more detail and look at the first few days and weeks to examine the daily and weekly seasonality.", "fig, ax = plt.subplots(nrows=2, figsize=[10,10])\nres.seasonal[\"seasonal_24\"].iloc[:24*3].plot(ax=ax[0])\nax[0].set_ylabel(\"seasonal_24\")\nax[0].set_title(\"Daily seasonality\")\n\nres.seasonal[\"seasonal_168\"].iloc[:24*7*3].plot(ax=ax[1])\nax[1].set_ylabel(\"seasonal_168\")\nax[1].set_title(\"Weekly seasonality\")\n\nplt.tight_layout()", "We can see that the daily seasonality of electricity demand is well captured. This is the first few days in January so during the summer months in Australia there is a peak in the afternoon most likely due to air conditioning use. \nFor the weekly seasonality we can see that there is less usage during the weekends.\nOne of the advantages of MSTL is that is allows us to capture seasonality which changes over time. So let's look at the seasonality during cooler months in May.", "fig, ax = plt.subplots(nrows=2, figsize=[10,10])\nmask = res.seasonal.index.month==5\nres.seasonal[mask][\"seasonal_24\"].iloc[:24*3].plot(ax=ax[0])\nax[0].set_ylabel(\"seasonal_24\")\nax[0].set_title(\"Daily seasonality\")\n\nres.seasonal[mask][\"seasonal_168\"].iloc[:24*7*3].plot(ax=ax[1])\nax[1].set_ylabel(\"seasonal_168\")\nax[1].set_title(\"Weekly seasonality\")\n\nplt.tight_layout()", "Now we can see an additional peak in the evening! This could be related to heating and lighting now required in the evenings. So this makes sense. We see that main weekly pattern of lower demand over the weekends continue.\nThe other components can also be extracted from the trend and resid attribute:", "display(res.trend.head()) # trend component\ndisplay(res.resid.head()) # residual component", "And that's it! Using MSTL we can perform time series decompostion on a multi-seasonal time series!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
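A useful property to check after the MSTL notebook above is that the decomposition is additive when no Box-Cox transform is applied (lmbda=None): the trend, the sum of the seasonal columns, and the residual should reproduce the observed series up to floating-point error. A short check using the res and timeseries objects from that notebook:

```python
import numpy as np

# Sum the seasonal_24 and seasonal_168 columns and add the trend and residual back.
reconstructed = res.trend + res.seasonal.sum(axis=1) + res.resid

# Should print True: the additive components recover the observed hourly demand.
print(np.allclose(reconstructed, timeseries["y"]))
```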
AtmaMani/pyChakras
udemy_ml_bootcamp/Python-for-Data-Visualization/Matplotlib/Matplotlib Exercises - Solutions.ipynb
mit
[ "<a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>\n\nMatplotlib Exercises - Solutions\nWelcome to the exercises for reviewing matplotlib! Take your time with these, Matplotlib can be tricky to understand at first. These are relatively simple plots, but they can be hard if this is your first time with matplotlib, feel free to reference the solutions as you go along.\nAlso don't worry if you find the matplotlib syntax frustrating, we actually won't be using it that often throughout the course, we will switch to using seaborn and pandas built-in visualization capabilities. But, those are built-off of matplotlib, which is why it is still important to get exposure to it!\n * NOTE: ALL THE COMMANDS FOR PLOTTING A FIGURE SHOULD ALL GO IN THE SAME CELL. SEPARATING THEM OUT INTO MULTIPLE CELLS MAY CAUSE NOTHING TO SHOW UP. * \nExercises\nFollow the instructions to recreate the plots using this data:\nData", "import numpy as np\nx = np.arange(0,100)\ny = x*2\nz = x**2", "Import matplotlib.pyplot as plt and set %matplotlib inline if you are using the jupyter notebook. What command do you use if you aren't using the jupyter notebook?", "import matplotlib.pyplot as plt\n%matplotlib inline\n# plt.show() for non-notebook users", "Exercise 1\n Follow along with these steps: \n* Create a figure object called fig using plt.figure() \n* Use add_axes to add an axis to the figure canvas at [0,0,1,1]. Call this new axis ax. \n* Plot (x,y) on that axes and set the labels and titles to match the plot below:", "fig = plt.figure()\nax = fig.add_axes([0,0,1,1])\nax.plot(x,y)\nax.set_xlabel('x')\nax.set_ylabel('y')\nax.set_title('title')", "Exercise 2\n Create a figure object and put two axes on it, ax1 and ax2. Located at [0,0,1,1] and [0.2,0.5,.2,.2] respectively.", "fig = plt.figure()\n\nax1 = fig.add_axes([0,0,1,1])\nax2 = fig.add_axes([0.2,0.5,.2,.2])", "Now plot (x,y) on both axes. And call your figure object to show it.", "ax1.plot(x,y)\nax1.set_xlabel('x')\nax1.set_ylabel('y')\n\n\nax2.plot(x,y)\nax2.set_xlabel('x')\nax2.set_ylabel('y')\n\nfig # Show figure object", "Exercise 3\n Create the plot below by adding two axes to a figure object at [0,0,1,1] and [0.2,0.5,.4,.4]", "fig = plt.figure()\n\nax = fig.add_axes([0,0,1,1])\nax2 = fig.add_axes([0.2,0.5,.4,.4])", "Now use x,y, and z arrays to recreate the plot below. Notice the xlimits and y limits on the inserted plot:", "ax.plot(x,z)\nax.set_xlabel('X')\nax.set_ylabel('Z')\n\n\nax2.plot(x,y)\nax2.set_xlabel('X')\nax2.set_ylabel('Y')\nax2.set_title('zoom')\nax2.set_xlim(20,22)\nax2.set_ylim(30,50)\n\nfig", "Exercise 4\n Use plt.subplots(nrows=1, ncols=2) to create the plot below.", "# Empty canvas of 1 by 2 subplots\nfig, axes = plt.subplots(nrows=1, ncols=2)", "Now plot (x,y) and (x,z) on the axes. Play around with the linewidth and style", "axes[0].plot(x,y,color=\"blue\", lw=3, ls='--')\naxes[1].plot(x,z,color=\"red\", lw=3, ls='-')\nfig", "See if you can resize the plot by adding the figsize() argument in plt.subplots() are copying and pasting your previous code.", "fig, axes = plt.subplots(nrows=1, ncols=2,figsize=(12,2))\n\naxes[0].plot(x,y,color=\"blue\", lw=5)\naxes[0].set_xlabel('x')\naxes[0].set_ylabel('y')\n\naxes[1].plot(x,z,color=\"red\", lw=3, ls='--')\naxes[1].set_xlabel('x')\naxes[1].set_ylabel('z')", "Great Job!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
turbomanage/training-data-analyst
courses/machine_learning/deepdive2/introduction_to_tensorflow/labs/3_keras_sequential_api.ipynb
apache-2.0
[ "Introducing the Keras Sequential API\nLearning Objectives\n 1. Learn how to use feature columns in a Keras model\n 1. Build a DNN model using the Keras Sequential API\n 1. Learn how to train a model with Keras\n 1. Learn how to save/load, and deploy a Keras model on GCP\n 1. Learn how to deploy and make predictions with at Keras model\nIntroduction\nThe Keras sequential API allows you to create Tensorflow models layer-by-layer. This is useful for building most kinds of machine learning models but it does not allow you to create models that share layers, re-use layers or have multiple inputs or outputs. \nIn this lab, we'll see how to build a simple deep neural network model using the keras sequential api and feature columns. Once we have trained our model, we will deploy it using AI Platform and see how to call our model for online prediciton.", "# Ensure the right version of Tensorflow is installed.\n!pip freeze | grep tensorflow==2.0 || pip install tensorflow==2.0", "Start by importing the necessary libraries for this lab.", "import datetime\nimport os\nimport shutil\n\nimport numpy as np\nimport pandas as pd\nimport tensorflow as tf\n\nfrom matplotlib import pyplot as plt\nfrom tensorflow import keras\n\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense, DenseFeatures\nfrom tensorflow.keras.callbacks import TensorBoard\n\nprint(tf.__version__)\n%matplotlib inline", "Load raw data\nWe will use the taxifare dataset, using the CSV files that we created in the first notebook of this sequence. Those files have been saved into ../data.", "!ls -l ../data/*.csv\n\n!head ../data/taxi*.csv", "Use tf.data to read the CSV files\nWe wrote these functions for reading data from the csv files above in the previous notebook.", "CSV_COLUMNS = [\n 'fare_amount',\n 'pickup_datetime',\n 'pickup_longitude',\n 'pickup_latitude',\n 'dropoff_longitude',\n 'dropoff_latitude',\n 'passenger_count',\n 'key'\n]\nLABEL_COLUMN = 'fare_amount'\nDEFAULTS = [[0.0], ['na'], [0.0], [0.0], [0.0], [0.0], [0.0], ['na']]\nUNWANTED_COLS = ['pickup_datetime', 'key']\n\n\ndef features_and_labels(row_data):\n label = row_data.pop(LABEL_COLUMN)\n features = row_data\n \n for unwanted_col in UNWANTED_COLS:\n features.pop(unwanted_col)\n\n return features, label\n\n\ndef create_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):\n dataset = tf.data.experimental.make_csv_dataset(\n pattern, batch_size, CSV_COLUMNS, DEFAULTS)\n\n dataset = dataset.map(features_and_labels)\n\n if mode == tf.estimator.ModeKeys.TRAIN:\n dataset = dataset.shuffle(buffer_size=1000).repeat()\n\n # take advantage of multi-threading; 1=AUTOTUNE\n dataset = dataset.prefetch(1)\n return dataset", "Build a simple keras DNN model\nWe will use feature columns to connect our raw data to our keras DNN model. Feature columns make it easy to perform common types of feature engineering on your raw data. For example, you can one-hot encode categorical data, create feature crosses, embeddings and more. We'll cover these in more detail later in the course, but if you want to a sneak peak browse the official TensorFlow feature columns guide.\nIn our case we won't do any feature engineering. However, we still need to create a list of feature columns to specify the numeric values which will be passed on to our model. 
To do this, we use tf.feature_column.numeric_column()\nWe use a python dictionary comprehension to create the feature columns for our model, which is just an elegant alternative to a for loop.\nLab Task #1: Create a feature column dictionary that we will use when building our deep neural network below. The keys should be the element of the INPUT_COLS list, while the values should be numeric feature columns.", "INPUT_COLS = [\n 'pickup_longitude',\n 'pickup_latitude',\n 'dropoff_longitude',\n 'dropoff_latitude',\n 'passenger_count',\n]\n\n# Create input layer of feature columns\n# TODO 1\nfeature_columns = # TODO: Your code goes here.", "Next, we create the DNN model. The Sequential model is a linear stack of layers and when building a model using the Sequential API, you configure each layer of the model in turn. Once all the layers have been added, you compile the model. \nLab Task #2a: Create a deep neural network using Keras's Sequential API. In the cell below, use the tf.keras.layers library to create all the layers for your deep neural network.", "# Build a keras DNN model using Sequential API\n# TODO 2a\nmodel = # TODO: Your code goes here.", "Next, to prepare the model for training, you must configure the learning process. This is done using the compile method. The compile method takes three arguments:\n\nAn optimizer. This could be the string identifier of an existing optimizer (such as rmsprop or adagrad), or an instance of the Optimizer class.\nA loss function. This is the objective that the model will try to minimize. It can be the string identifier of an existing loss function from the Losses class (such as categorical_crossentropy or mse), or it can be a custom objective function.\nA list of metrics. For any machine learning problem you will want a set of metrics to evaluate your model. A metric could be the string identifier of an existing metric or a custom metric function.\n\nWe will add an additional custom metric called rmse to our list of metrics which will return the root mean square error. \nLab Task #2b: Compile the model you created above. Create a custom loss function called rmse which computes the root mean squared error between y_true and y_pred. Pass this function to the model as an evaluation metric.", "# TODO 2b\n# Create a custom evalution metric\ndef rmse(y_true, y_pred):\n return # TODO: Your code goes here\n\n\n# Compile the keras model\n# TODO: Your code goes here.", "Train the model\nTo train your model, Keras provides three functions that can be used:\n 1. .fit() for training a model for a fixed number of epochs (iterations on a dataset).\n 2. .fit_generator() for training a model on data yielded batch-by-batch by a generator\n 3. .train_on_batch() runs a single gradient update on a single batch of data. \nThe .fit() function works well for small datasets which can fit entirely in memory. However, for large datasets (or if you need to manipulate the training data on the fly via data augmentation, etc) you will need to use .fit_generator() instead. The .train_on_batch() method is for more fine-grained control over training and accepts only a single batch of data.\nThe taxifare dataset we sampled is small enough to fit in memory, so can we could use .fit to train our model. Our create_dataset function above generates batches of training examples, so we could also use .fit_generator. In fact, when calling .fit the method inspects the data, and if it's a generator (as our dataset is) it will invoke automatically .fit_generator for training. 
\nWe start by setting up some parameters for our training job and create the data generators for the training and validation data.\nWe refer you the the blog post ML Design Pattern #3: Virtual Epochs for further details on why express the training in terms of NUM_TRAIN_EXAMPLES and NUM_EVALS and why, in this training code, the number of epochs is really equal to the number of evaluations we perform.", "TRAIN_BATCH_SIZE = 1000\nNUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset will repeat, wrap around\nNUM_EVALS = 50 # how many times to evaluate\nNUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample\n\ntrainds = create_dataset(\n pattern='../data/taxi-train*',\n batch_size=TRAIN_BATCH_SIZE,\n mode=tf.estimator.ModeKeys.TRAIN)\n\nevalds = create_dataset(\n pattern='../data/taxi-valid*',\n batch_size=1000,\n mode=tf.estimator.ModeKeys.EVAL).take(NUM_EVAL_EXAMPLES//1000)", "There are various arguments you can set when calling the .fit method. Here x specifies the input data which in our case is a tf.data dataset returning a tuple of (inputs, targets). The steps_per_epoch parameter is used to mark the end of training for a single epoch. Here we are training for NUM_EVALS epochs. Lastly, for the callback argument we specify a Tensorboard callback so we can inspect Tensorboard after training. \nLab Task #3: In the cell below, you will train your model. First, define the steps_per_epoch then train your model using .fit(), saving the model training output to a variable called history.", "# TODO 3\n%time \nsteps_per_epoch = # TODO: Your code goes here. \n\nLOGDIR = \"./taxi_trained\"\nhistory = # TODO: Your code goes here. ", "High-level model evaluation\nOnce we've run data through the model, we can call .summary() on the model to get a high-level summary of our network. We can also plot the training and evaluation curves for the metrics we computed above.", "model.summary()", "Running .fit (or .fit_generator) returns a History object which collects all the events recorded during training. Similar to Tensorboard, we can plot the training and validation curves for the model loss and rmse by accessing these elements of the History object.", "RMSE_COLS = ['rmse', 'val_rmse']\n\npd.DataFrame(history.history)[RMSE_COLS].plot()\n\nLOSS_COLS = ['loss', 'val_loss']\n\npd.DataFrame(history.history)[LOSS_COLS].plot()", "Making predictions with our model\nTo make predictions with our trained model, we can call the predict method, passing to it a dictionary of values. The steps parameter determines the total number of steps before declaring the prediction round finished. Here since we have just one example, we set steps=1 (setting steps=None would also work). Note, however, that if x is a tf.data dataset or a dataset iterator, and steps is set to None, predict will run until the input dataset is exhausted.", "model.predict(x={\"pickup_longitude\": tf.convert_to_tensor([-73.982683]),\n \"pickup_latitude\": tf.convert_to_tensor([40.742104]),\n \"dropoff_longitude\": tf.convert_to_tensor([-73.983766]),\n \"dropoff_latitude\": tf.convert_to_tensor([40.755174]),\n \"passenger_count\": tf.convert_to_tensor([3.0])},\n steps=1)", "Export and deploy our model\nOf course, making individual predictions is not realistic, because we can't expect client code to have a model object in memory. For others to use our trained model, we'll have to export our model to a file, and expect client code to instantiate the model from that exported file. \nWe'll export the model to a TensorFlow SavedModel format. 
Once we have a model in this format, we have lots of ways to \"serve\" the model, from a web application, from JavaScript, from mobile applications, etc.\nLab Task #4: Use tf.saved_model.save to export the trained model to a Tensorflow SavedModel format. Reference the documentation for tf.saved_model.save as you fill in the code for the cell below.\nNext, print the signature of your saved model using the SavedModel Command Line Interface command saved_model_cli. You can read more about the command line interface and the show and run commands it supports in the documentation here.", "# TODO 4a\nOUTPUT_DIR = \"./export/savedmodel\"\nshutil.rmtree(OUTPUT_DIR, ignore_errors=True)\nEXPORT_PATH = os.path.join(OUTPUT_DIR,\n datetime.datetime.now().strftime(\"%Y%m%d%H%M%S\"))\n\ntf.saved_model.save( # TODO: Your code goes here. \n\n# TODO 4b\n!saved_model_cli show \\\n --tag_set # TODO: Your code goes here.\n --signature_def # TODO: Your code goes here.\n --dir # TODO: Your code goes here.\n\n!find {EXPORT_PATH}\nos.environ['EXPORT_PATH'] = EXPORT_PATH", "Deploy our model to AI Platform\nFinally, we will deploy our trained model to AI Platform and see how we can make online predicitons. \nLab Task #5a: Complete the code in the cell below to deploy your trained model to AI Platform using the gcloud ai-platform versions create command. Have a look at the documentation for how to create model version with gcloud.", "%%bash\n\n# TODO 5a\n\nPROJECT= #TODO: Change this to your PROJECT\nBUCKET=${PROJECT}\nREGION=us-east1\nMODEL_NAME=taxifare\nVERSION_NAME=dnn\n\nif [[ $(gcloud ai-platform models list --format='value(name)' | grep $MODEL_NAME) ]]; then\n echo \"$MODEL_NAME already exists\"\nelse\n echo \"Creating $MODEL_NAME\"\n gcloud ai-platform models create --regions=$REGION $MODEL_NAME\nfi\n\nif [[ $(gcloud ai-platform versions list --model $MODEL_NAME --format='value(name)' | grep $VERSION_NAME) ]]; then\n echo \"Deleting already existing $MODEL_NAME:$VERSION_NAME ... \"\n echo yes | gcloud ai-platform versions delete --model=$MODEL_NAME $VERSION_NAME\n echo \"Please run this cell again if you don't see a Creating message ... \"\n sleep 2\nfi\n\necho \"Creating $MODEL_NAME:$VERSION_NAME\"\ngcloud ai-platform versions create \\\n --model= #TODO: Your code goes here.\n --framework= #TODO: Your code goes here.\n --python-version= #TODO: Your code goes here.\n --runtime-version= #TODO: Your code goes here.\n --origin= #TODO: Your code goes here.\n --staging-bucket= #TODO: Your code goes here.\n\n%%writefile input.json\n{\"pickup_longitude\": -73.982683, \"pickup_latitude\": 40.742104,\"dropoff_longitude\": -73.983766,\"dropoff_latitude\": 40.755174,\"passenger_count\": 3.0} ", "Lab Task #5b: Complete the code in the cell below to call prediction on your deployed model for the example you just created in the input.json file above.", "# TODO 5b\n!gcloud ai-platform predict \\\n --model #TODO: Your code goes here.\n --json-instances #TODO: Your code goes here.\n --version #TODO: Your code goes here.", "Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the License for the specific language governing permissions and limitations under the License" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
juditacs/labor
notebooks/Pandas_introduction.ipynb
lgpl-3.0
[ "Pandas introduction\nThis notebook is a supplementary material for the Business intelligence laboratory (VIAUMB00) course at BME AUT. Course page in Hungarian and in English.\nThe goal of this notebook is to provide a short introduction to the Pandas data manipulation library.\nThe main problem sheet is available through Github Classroom.\nYou can find further information about Pandas:\n\nA 10min introduction\nBrandon Rhodes' tutorial from PyCon 2015\nKevin Markham's tutorial from PyCon 2018\n\nWhat is pandas?\nPandas is a data analysis and manipulation library popular among data scientists and machine learning practitioners.\nIts name is derived from panel data. Pandas is built to handle tabular data with mixed data types.\nPandas is integrated with Python's main numerical computation library, NumPy and with the main visualization library, matplotlib.", "import pandas as pd # importing it as pd is a convention you'll see everywhere\n# this renders figures inside the notebook\n%matplotlib inline\nimport matplotlib\nimport numpy as np\n\n# this makes the figures a bit nicer\nimport seaborn as sns\nsns.set_context('notebook')\n\ngrades = pd.DataFrame(\n {\n 'subject': ['Calculus 1', 'Digital design 1', \n 'Physics 1i', 'System modeling', 'Basics of Programming 1', 'System theory',\n 'Introduction to the Theory of Computing 1', 'Introduction to the Theory of Computing 2'],\n 'grade': [3, 4, 3, 2, 5, 1, 4, 1],\n 'teacher': ['A. Smith', 'J. Doe', 'J. Smith', 'M. Jackson', 'T. Swift', 'A. Grande', 'J. Bieber', 'C. Cox'],\n 'semester': [1, 1, 1, 2, 1, 3, 1, 2],\n 'completion_date': [\n '2017-12-18',\n '2018-01-05',\n '2018-01-21',\n '2018-06-11',\n '2018-06-01',\n '2018-12-17',\n '2017-01-01',\n '2018-01-01',\n ]\n }\n)\ngrades['completion_date'] = pd.to_datetime(grades['completion_date'])\ngrades", ".shape is a tuple of the number of rows and the number of columns:", "grades.shape", ".head() returns the first 5 rows of a DataFrame, .tail() returns the last ones. These are very useful for manual data inspection. You should always check what the contents of your dataframes.", "grades.head()", "Printing other only the last two elements:", "grades.tail(2)", "Each of these operations return a new dataframe. We can confirm this via their object identity:", "id(grades), id(grades.tail(2))", "But these objects are not copied unless we explicitly ask for a copy:", "grades.tail(2).copy()", "Selecting rows, columns and cells\nThe first boldfaced column of the table is the index column. It's possible to use multiple columns as index (Multiindex).\nSelecting columns", "grades['teacher']", "The name of the column is also exposed as an attribute as long as it adheres to the naming limitations of attributes (no spaces, starts with a letter, doesn't crash with a method name):", "grades.teacher", "The type of a column is pd.Series, which is the type for a vector:", "type(grades.teacher)\n\ngrades.teacher.shape", "We can select multiple columns with a list of column names instead of a column name. Note the double square brackets.", "grades[['grade', 'teacher']]", "The return type of the operator [] depends on the type of the index. If it's a string, it returns a Series if it's a list, it returns a DataFrame:", "print(type(grades[['grade']]))\ngrades[['grade']]", "Selecting rows\nRows can be selected\n\nby their index or\nby their integer location.\n\nTo demonstrate this, we will use the subject name as the index. 
Note that it's now in bold:", "grades = grades.set_index('subject')\ngrades", "Selecting by index\nNote that you need to use [] not ():", "grades.loc['Physics 1i']", "The type of one row is a Series since it's a vector:", "type(grades.loc['Physics 1i'])", "Selecting by integer location", "grades.iloc[2]", "We can use ranges here as well. Note that the range is upper-bound exclusive, in other words, .iloc[i:j] will not include element j:", "grades.iloc[1:3]", "Selecting columns with iloc", "grades.iloc[:, [0, 2]]\n\ngrades.iloc[:, 1:-1]\n\ngrades.iloc[1:5, 1:2]", "Selecting a cell\nThere are multiple ways of selecting a single cell, this is perhaps the easiest one:", "grades.loc['Physics 1i', 'grade']", "Vectorized operations\nArithmetic operators are overloaded for DataFrames and Series allowing vectorized operations", "grades['grade'] + 1\n\ngrades[['grade', 'semester']] + 1", "Comparisions are overloaded as well:", "grades['semester'] == 1", "The index can be manipulated in a similar way but the return value is different:", "grades.index == 'System theory'", "It is generally used to override the index:", "old_index = grades.index.copy()\ngrades.index = grades.index.str.upper()\ngrades", "Changing it back:", "grades.index = old_index\ngrades", "Vectorized string operations\nString columns have a .str namespace with many string operations:", "grades['teacher'].str\n\ngrades['teacher'].str.contains('Smith')", "It also provides access to the character array:", "grades['teacher'].str[:5]", "apply\n.apply allows running arbitrary functions on each element of a Series (or a DataFrame):", "def get_last_name(name):\n return name.split(\" \")[1]\n\n\ngrades['teacher'].apply(get_last_name)", "The same with a lambda function:", "grades['teacher'].apply(lambda name: name.split(\" \")[1])", "apply on rows\napply also works on full dataframes. The parameter is a row (axis=1) or a column in this case.", "def format_grade_and_completion(row):\n grade = row['grade']\n completed = row['completion_date'].strftime(\"%b %Y\")\n return f\"grade: {grade}, completed: {completed}\"\n \ngrades.apply(format_grade_and_completion, axis=1)", "Vectorized date manipulation\nDate columns can be manipulated through the dt namespace:", "grades['completion_date'].dt\n\ngrades['completion_date'].dt.day_name()\n\ngrades['completion_date'].dt.year", "Filtering\nComparisons return a Series of True/False values", "grades.semester == 1", "which can be used for filtering rows:", "grades[grades.semester == 1]", "We can also use multiple conditions. Note the parentheses:", "grades[(grades.semester == 1) & (grades.teacher.str.contains('Smith'))]", "Handling multiple dataframes, merge\nLet's define a second Dataframe with the credit values of some classes:", "credits = pd.DataFrame(\n {\n 'subject': ['Calculus 1', 'Physics 1i', 'Physics 2i'],\n 'credit': [7, 5, 5]\n }\n)\ncredits", "What are the credit values of the classes we have in the grades table?", "d = grades.merge(credits, left_index=True, right_on='subject', how='outer')\nd", "Merge\nMerge has two operands, a left and a right DataFrame.\nParameters:\n1. left_index: merge on the index of the left Dataframe\n2. right_on: merge on one or more columns of the right Dataframe. This column is credits in this example.\n3. how: inner/outer. Exclude/include all rows even if the key of the merge is unmatched.\nWe can merge on two types of columns, index and non-index. left_index=True and right_index=True means that we merge on the index. 
left_on and right_on means that we merge on a column.", "grades.merge(credits, left_index=True, right_on='subject', how='inner')", "We can discard rows with NaN values with dropna. Be careful. It discards all rows with any NaN.\nThis has the same effect as an inner join:", "d = d.dropna()\nd", "Finding min/max rows\nmax and min return the highest and lowest values for each column. The return value is a Series with the column names as indices and the maximum/minimum values as the Series values:", "print(type(grades.max()))\ngrades.max()", "The location of the maximum/minimum is often more interesting. idxmax and idxmin return where the maximum is:", "# grades.idxmax() # we get an error because of the string and the date column\ngrades[['grade', 'semester']].idxmax()", "The return value(s) of idxmax can directly be used with loc:", "grades.loc[grades[['grade', 'semester']].idxmax()]", "idxmax works similarly for Series but the return value is a single scalar, the index of the maximum:", "grades.grade.idxmax()", "groupby\nGroupby allows grouping the rows of the Dataframe along any column(s):", "g = credits.groupby('credit')\n\ng.groups", "Or on multiple columns:", "grades.groupby(['grade', 'semester']).groups", "Or even on conditions:", "grades.groupby(grades['semester'] % 2).groups", "We can perform operations on the groups:", "grades.groupby('semester').mean()\n\ngrades.groupby('semester').std()", "size returns the number of elements in each group:", "grades.groupby('semester').size()", "stack and unstack\nGrouping on multiple columns and then aggregating results in a multiindex:", "grades.groupby(['grade', 'semester']).size().index\n\ngrades.groupby(['grade', 'semester']).size()", "unstack moves up the innermost index level to columns:", "grades.groupby(['grade', 'semester']).size().unstack()", "stack does the opposite:", "credits\n\ncredits.stack()", "Sorting\nWe can sort Dataframes by their index:", "grades.sort_index()", "Or by one or more columns:", "grades.sort_values(['grade', 'semester'])", "In ascending order:", "grades.sort_index(ascending=False)", "Miscellaneous operations\nvalue_counts\nvalue_counts returns the frequency of values in a column:", "grades['semester'].value_counts()", "It can't be used on multiple columns but groupby+size does the same:", "grades.groupby(['semester', 'grade']).size()", "We can also plot the histogram of the values with:\nHistogram", "grades['semester'].hist()", "Visualization\nPandas is integrated with matplotlib, the main plotting module of Python.", "grades.plot(y='grade')", "Or as a bar chart:", "grades.plot(y='grade', kind='bar')", "We can also specify both axes:", "grades.plot(x='semester', y='grade', kind='scatter')", "Combining groupby and visualization.\nPlotting the grade averages by semester:", "grades.groupby('semester').mean().plot(kind='bar')", "Or the number of classes per semester:", "grades.groupby('semester').size().plot(kind='pie', title=\"Classes per semester\", ylabel=\"Classes\")", "GOTCHAs\nNote that some operations work in a suprising way.\n\nEvery pandas operation returns a new Dataframe unless it's explicitly in place.\nJupyter outputs the return value of the last line unless it's None. This is not the same as printing." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
kubeflow/pipelines
samples/contrib/pytorch-samples/Pipeline-Bert.ipynb
apache-2.0
[ "# Copyright (c) Facebook, Inc. and its affiliates.\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Bert Pipeline : PyTorch BERT News Classfication\nThis notebook shows PyTorch BERT end-to-end news classification example using Kubeflow Pipelines.\nAn example notebook that demonstrates how to:\n\nGet different tasks needed for the pipeline\nCreate a Kubeflow pipeline\nInclude Pytorch KFP components to preprocess, train, visualize and deploy the model in the pipeline\nSubmit a job for execution\nQuery(prediction and explain) the final deployed model\nInterpretation of the model using the Captum Insights", "! pip uninstall -y kfp\n! pip install --no-cache-dir kfp\n\nimport kfp\nimport json\nimport os\nfrom kfp.onprem import use_k8s_secret\nfrom kfp import components\nfrom kfp.components import load_component_from_file, load_component_from_url\nfrom kfp import dsl\nfrom kfp import compiler\n\nkfp.__version__", "Enter your gateway and the cookie\nUse this extension on chrome to get token\n\nUpdate values for the ingress gateway and auth session", "INGRESS_GATEWAY='http://istio-ingressgateway.istio-system.svc.cluster.local'\nAUTH=\"<enter your token here>\"\nNAMESPACE=\"kubeflow-user-example-com\"\nCOOKIE=\"authservice_session=\"+AUTH\nEXPERIMENT=\"Default\"", "Set Log bucket and Tensorboard Image", "MINIO_ENDPOINT=\"http://minio-service.kubeflow:9000\"\nLOG_BUCKET=\"mlpipeline\"\nTENSORBOARD_IMAGE=\"public.ecr.aws/pytorch-samples/tboard:latest\"\n\nclient = kfp.Client(host=INGRESS_GATEWAY+\"/pipeline\", cookies=COOKIE)\n\nclient.create_experiment(EXPERIMENT)\nexperiments = client.list_experiments(namespace=NAMESPACE)\nmy_experiment = experiments.experiments[0]\nmy_experiment", "Set Inference parameters", "DEPLOY_NAME=\"bertserve\"\nMODEL_NAME=\"bert\"\n\n! 
python utils/generate_templates.py bert/template_mapping.json\n\nprepare_tensorboard_op = load_component_from_file(\"yaml/tensorboard_component.yaml\")\nprep_op = components.load_component_from_file(\n \"yaml/preprocess_component.yaml\"\n)\ntrain_op = components.load_component_from_file(\n \"yaml/train_component.yaml\"\n)\ndeploy_op = load_component_from_file(\"yaml/deploy_component.yaml\")\nminio_op = components.load_component_from_file(\n \"yaml/minio_component.yaml\"\n)", "Define pipeline", "@dsl.pipeline(name=\"Training pipeline\", description=\"Sample training job test\")\ndef pytorch_bert( # pylint: disable=too-many-arguments\n minio_endpoint=MINIO_ENDPOINT,\n log_bucket=LOG_BUCKET,\n log_dir=f\"tensorboard/logs/{dsl.RUN_ID_PLACEHOLDER}\",\n mar_path=f\"mar/{dsl.RUN_ID_PLACEHOLDER}/model-store\",\n config_prop_path=f\"mar/{dsl.RUN_ID_PLACEHOLDER}/config\",\n model_uri=f\"s3://mlpipeline/mar/{dsl.RUN_ID_PLACEHOLDER}\",\n tf_image=TENSORBOARD_IMAGE,\n deploy=DEPLOY_NAME,\n namespace=NAMESPACE,\n confusion_matrix_log_dir=f\"confusion_matrix/{dsl.RUN_ID_PLACEHOLDER}/\",\n num_samples=1000,\n max_epochs=1\n):\n \"\"\"Thid method defines the pipeline tasks and operations\"\"\"\n prepare_tb_task = prepare_tensorboard_op(\n log_dir_uri=f\"s3://{log_bucket}/{log_dir}\",\n image=tf_image,\n pod_template_spec=json.dumps({\n \"spec\": {\n \"containers\": [{\n \"env\": [\n {\n \"name\": \"AWS_ACCESS_KEY_ID\",\n \"valueFrom\": {\n \"secretKeyRef\": {\n \"name\": \"mlpipeline-minio-artifact\",\n \"key\": \"accesskey\",\n }\n },\n },\n {\n \"name\": \"AWS_SECRET_ACCESS_KEY\",\n \"valueFrom\": {\n \"secretKeyRef\": {\n \"name\": \"mlpipeline-minio-artifact\",\n \"key\": \"secretkey\",\n }\n },\n },\n {\n \"name\": \"AWS_REGION\",\n \"value\": \"minio\"\n },\n {\n \"name\": \"S3_ENDPOINT\",\n \"value\": f\"{minio_endpoint}\",\n },\n {\n \"name\": \"S3_USE_HTTPS\",\n \"value\": \"0\"\n },\n {\n \"name\": \"S3_VERIFY_SSL\",\n \"value\": \"0\"\n },\n ]\n }]\n }\n }),\n ).set_display_name(\"Visualization\")\n\n prep_task = (\n prep_op().after(prepare_tb_task\n ).set_display_name(\"Preprocess & Transform\")\n )\n confusion_matrix_url = f\"minio://{log_bucket}/{confusion_matrix_log_dir}\"\n script_args = f\"model_name=bert.pth,\" \\\n f\"num_samples={num_samples},\" \\\n f\"confusion_matrix_url={confusion_matrix_url}\"\n # For GPU , set gpus count and accelerator type\n ptl_args = f\"max_epochs={max_epochs},profiler=pytorch,gpus=0,accelerator=None\"\n train_task = (\n train_op(\n input_data=prep_task.outputs[\"output_data\"],\n script_args=script_args,\n ptl_arguments=ptl_args\n ).after(prep_task).set_display_name(\"Training\")\n )\n # For GPU uncomment below line and set GPU limit and node selector\n # ).set_gpu_limit(1).add_node_selector_constraint\n # ('cloud.google.com/gke-accelerator','nvidia-tesla-p4')\n\n (\n minio_op(\n bucket_name=\"mlpipeline\",\n folder_name=log_dir,\n input_path=train_task.outputs[\"tensorboard_root\"],\n filename=\"\",\n ).after(train_task).set_display_name(\"Tensorboard Events Pusher\")\n )\n minio_mar_upload = (\n minio_op(\n bucket_name=\"mlpipeline\",\n folder_name=mar_path,\n input_path=train_task.outputs[\"checkpoint_dir\"],\n filename=\"bert_test.mar\",\n ).after(train_task).set_display_name(\"Mar Pusher\")\n )\n (\n minio_op(\n bucket_name=\"mlpipeline\",\n folder_name=config_prop_path,\n input_path=train_task.outputs[\"checkpoint_dir\"],\n filename=\"config.properties\",\n ).after(train_task).set_display_name(\"Conifg Pusher\")\n )\n\n model_uri = str(model_uri)\n # 
pylint: disable=unused-variable\n isvc_yaml = \"\"\"\n apiVersion: \"serving.kubeflow.org/v1beta1\"\n kind: \"InferenceService\"\n metadata:\n name: {}\n namespace: {}\n spec:\n predictor:\n serviceAccountName: sa\n pytorch:\n storageUri: {}\n resources:\n requests: \n cpu: 4\n memory: 8Gi\n limits:\n cpu: 4\n memory: 8Gi\n \"\"\".format(deploy, namespace, model_uri)\n\n # For GPU inference use below yaml with gpu count and accelerator\n gpu_count = \"1\"\n accelerator = \"nvidia-tesla-p4\"\n isvc_gpu_yaml = \"\"\"\n apiVersion: \"serving.kubeflow.org/v1beta1\"\n kind: \"InferenceService\"\n metadata:\n name: {}\n namespace: {}\n spec:\n predictor:\n serviceAccountName: sa\n pytorch:\n storageUri: {}\n resources:\n requests: \n cpu: 4\n memory: 8Gi\n limits:\n cpu: 4\n memory: 8Gi\n nvidia.com/gpu: {}\n nodeSelector:\n cloud.google.com/gke-accelerator: {}\n\"\"\".format(deploy, namespace, model_uri, gpu_count, accelerator)\n # Update inferenceservice_yaml for GPU inference\n deploy_task = (\n deploy_op(action=\"apply\", inferenceservice_yaml=isvc_yaml\n ).after(minio_mar_upload).set_display_name(\"Deployer\")\n )\n\n dsl.get_pipeline_conf().add_op_transformer(\n use_k8s_secret(\n secret_name=\"mlpipeline-minio-artifact\",\n k8s_secret_key_to_env={\n \"secretkey\": \"MINIO_SECRET_KEY\",\n \"accesskey\": \"MINIO_ACCESS_KEY\",\n },\n )\n )\n\n# Compile pipeline\ncompiler.Compiler().compile(pytorch_bert, 'pytorch.tar.gz', type_check=True)\n\n# Execute pipeline\nrun = client.run_pipeline(my_experiment.id, 'pytorch-bert', 'pytorch.tar.gz')", "Wait for inference service below to go to READY True state.", "!kubectl get isvc $DEPLOY", "Get Inferenceservice name", "INFERENCE_SERVICE_LIST = ! kubectl get isvc {DEPLOY_NAME} -n {NAMESPACE} -o json | python3 -c \"import sys, json; print(json.load(sys.stdin)['status']['url'])\"| tr -d '\"' | cut -d \"/\" -f 3\nINFERENCE_SERVICE_NAME = INFERENCE_SERVICE_LIST[0]\nINFERENCE_SERVICE_NAME", "Prediction Request", "!curl -v -H \"Host: $INFERENCE_SERVICE_NAME\" -H \"Cookie: $COOKIE\" \"$INGRESS_GATEWAY/v1/models/$MODEL_NAME:predict\" -d @./bert/sample.txt > bert_prediction_output.json\n\n! cat bert_prediction_output.json", "Explanation Request", "!curl -v -H \"Host: $INFERENCE_SERVICE_NAME\" -H \"Cookie: $COOKIE\" \"$INGRESS_GATEWAY/v1/models/$MODEL_NAME:explain\" -d @./bert/sample.txt > bert_explaination_output.json\n\n! cat bert_explaination_output.json\n\nexplanations_json = json.loads(open(\"./bert_explaination_output.json\", \"r\").read())\nexplanations_json\n\nprediction_json = json.loads(open(\"./bert_prediction_output.json\", \"r\").read())\n\nimport torch\nattributions = explanations_json[\"explanations\"][0]['importances']\ntokens = explanations_json[\"explanations\"][0]['words']\ndelta = explanations_json[\"explanations\"][0]['delta']\n\nattributions = torch.tensor(attributions)\npred_prob = 0.75\npred_class = prediction_json[\"predictions\"][0]\ntrue_class = \"Business\"\nattr_class =\"world\"", "Visualization of Predictions", "from captum.attr import visualization\nvis_data_records =[]\nvis_data_records.append(visualization.VisualizationDataRecord(\n attributions,\n pred_prob,\n pred_class,\n true_class,\n attr_class,\n attributions.sum(), \n tokens,\n delta))\n\nvis = visualization.visualize_text(vis_data_records)", "visualization appreas as below\n\nCleanup Script", "! kubectl delete --all isvc -n $NAMESPACE\n\n! kubectl delete pod --field-selector=status.phase==Succeeded -n $NAMESPACE" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
AndreySheka/dl_ekb
hw12/a2c_kungfu_dmia.ipynb
mit
[ "Playing atari with advantage actor-critic\nThis time we're going to learn something harder then CartPole :)\nGym atari games only allow raw image pixels as observation, hence demanding a more powerful agent network to find meaningful features. We shall use a convolutional neural network for such task.\nMost of the code in this notebook is written for you, however you are strongly encouraged to experiment with it to find better agent configuration and/or learning algorithm.", "import matplotlib.pyplot as plt\nimport numpy as np\n%matplotlib inline\n\n#setup theano/lasagne. Prefer GPU\n%env THEANO_FLAGS=device=gpu,floatX=float32\n\n#If you are running on a server, launch xvfb to record game videos\n#Please make sure you have xvfb installed (apt-get install xvfb, see gym readme on xvfb)\nimport os\nif os.environ.get(\"DISPLAY\") is str and len(os.environ.get(\"DISPLAY\"))!=0:\n !bash xvfb start\n %env DISPLAY=:1\n\n", "Processing game image\nRaw atari images are large, 210x160x3 by default. However, we don't need that level of detail in order to learn them.\nWe can thus save a lot of time by preprocessing game image, including\n* Resizing to a smaller shape\n* Converting to grayscale\n* Cropping irrelevant image parts", "from gym.core import ObservationWrapper\nfrom gym.spaces import Box\nfrom scipy.misc import imresize\n\nclass PreprocessAtari(ObservationWrapper):\n def __init__(self, env):\n \"\"\"A gym wrapper that crops, scales image into the desired shapes and optionally grayscales it.\"\"\"\n ObservationWrapper.__init__(self,env)\n \n self.img_size = (64, 64)\n self.observation_space = Box(0.0, 1.0, self.img_size)\n\n def _observation(self, img):\n \"\"\"what happens to each observation\"\"\"\n \n # Here's what you need to do:\n # * crop image, remove irrelevant parts\n # * resize image to self.img_size \n # (use imresize imported above or any library you want,\n # e.g. opencv, skimage, PIL, keras)\n # * cast image to grayscale\n # * convert image pixels to (0,1) range, float32 type\n \n <Your code here> \n return <...>\n\n\nimport gym\n#game maker consider https://gym.openai.com/envs\ndef make_env():\n env = gym.make(\"KungFuMaster-v0\")\n return PreprocessAtari(env)\n\n\n\n\n#spawn game instance\nenv = make_env()\nobservation_shape = env.observation_space.shape\nn_actions = env.action_space.n\n\nobs = env.reset()\n\nplt.imshow(obs[0],interpolation='none',cmap='gray')", "Basic agent setup\nHere we define a simple agent that maps game images into policy using simple convolutional neural network.", "import theano, lasagne\nimport theano.tensor as T\nfrom lasagne.layers import *\nfrom agentnet.memory import WindowAugmentation\n\n#observation goes here\nobservation_layer = InputLayer((None,)+observation_shape,)\n\n#4-tick window over images\nprev_wnd = InputLayer((None,4)+observation_shape,name='window from last tick')\nnew_wnd = WindowAugmentation(observation_layer,prev_wnd,name='updated window')\n \n#reshape to (frame, h,w). 
If you don't use grayscale, 4 should become 12.\nwnd_reshape = reshape(new_wnd, (-1,4*observation_shape[0])+observation_shape[1:])\n", "Network body\nHere we will need to build a convolutional network that consists of 4 layers:\n* 3 convolutional layers with 32 filters, 5x5 window size, 2x2 stride\n * Choose any nonlinearity except softmax\n * You may want to increase number of filters for the last layer\n* Dense layer on top of all convolutions\n * anywhere between 100 and 512 neurons\nYou may find a template for such a network below", "from lasagne.nonlinearities import rectify,elu,tanh,softmax\n\n#network body\nconv0 = Conv2DLayer(wnd_reshape,<...>)\nconv1 = <another convolutional layer, growing from conv0>\nconv2 = <yet another layer...>\n \ndense = DenseLayer(<what is its input?>,\n nonlinearity=tanh,\n name='dense \"neck\" layer')", "Network head\nYou will now need to build output layers.\nSince we're building an advantage actor-critic algorithm, our network will require two outputs:\n* policy, $pi(a|s)$, defining action probabilities\n* state value, $V(s)$, defining expected reward from the given state\nBoth of those layers will grow from the final dense layer of the network body.", "#actor head\nlogits_layer = DenseLayer(dense,n_actions,nonlinearity=None) \n#^^^ separately define pre-softmax policy logits to regularize them later\npolicy_layer = NonlinearityLayer(logits_layer,softmax)\n\n#critic head\nV_layer = DenseLayer(dense,1,nonlinearity=None)\n\n#sample actions proportionally to policy_layer\nfrom agentnet.resolver import ProbabilisticResolver\naction_layer = ProbabilisticResolver(policy_layer)\n\n", "Finally, agent\nWe declare that this network is an MDP agent with such and such inputs, states and outputs", "from agentnet.agent import Agent\n#all together\nagent = Agent(observation_layers=observation_layer,\n policy_estimators=(logits_layer,V_layer),\n agent_states={new_wnd:prev_wnd},\n action_layers=action_layer)\n\n\n#Since it's a single lasagne network, one can get its weights, output, etc\nweights = lasagne.layers.get_all_params([V_layer,policy_layer],trainable=True)\nweights", "Create and manage a pool of atari sessions to play with\n\nTo make training more stable, we shall have an entire batch of game sessions each happening independently of the others\nWhy several parallel agents help training: http://arxiv.org/pdf/1602.01783v1.pdf\nAlternative approach: store more sessions: https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf", "from agentnet.experiments.openai_gym.pool import EnvPool\n\n#number of parallel agents \nN_AGENTS = 10\n\npool = EnvPool(agent,make_env, N_AGENTS) #may need to adjust\n\n\n%%time\n#interact for 10 ticks\n_,action_log,reward_log,_,_,_ = pool.interact(10)\n\nprint('actions:')\nprint(action_log[0])\nprint(\"rewards\")\nprint(reward_log[0])\n\n# batch sequence length (frames) \nSEQ_LENGTH = 25\n\n#load first sessions (this function calls interact and remembers sessions)\npool.update(SEQ_LENGTH)", "Advantage actor-critic\n\nAn agent has a method that produces symbolic environment interaction sessions\nSuch sessions come as sequences of observations, agent memory, actions, q-values, etc.\n\none has to pre-define maximum session length.\n\n\nSessionPool also stores rewards, alive indicators, etc.\n\nCode mostly copied from here", "#get agent's Qvalues obtained via experience replay\n#we don't unroll scan here and propagate automatic updates\n#to speed up compilation at a cost of runtime speed\nreplay = pool.experience_replay\n\n_,_,_,_,(logits_seq,V_seq) = 
agent.get_sessions(\n replay,\n session_length=SEQ_LENGTH,\n experience_replay=True,\n unroll_scan=False,\n)\n\nauto_updates = agent.get_automatic_updates()\n\n\n\n# compute pi(a|s) and log(pi(a|s)) manually [use logsoftmax]\n# we can't guarantee that theano optimizes logsoftmax automatically since it's still in dev\nlogits_flat = logits_seq.reshape([-1,logits_seq.shape[-1]])\npolicy_seq = T.nnet.softmax(logits_flat).reshape(logits_seq.shape)\nlogpolicy_seq = T.nnet.logsoftmax(logits_flat).reshape(logits_seq.shape)\n \n# get policy gradient\nfrom agentnet.learning import a2c\nelwise_actor_loss,elwise_critic_loss = a2c.get_elementwise_objective(policy=logpolicy_seq,\n treat_policy_as_logpolicy=True,\n state_values=V_seq[:,:,0],\n actions=replay.actions[0],\n rewards=replay.rewards/100.,\n is_alive=replay.is_alive,\n gamma_or_gammas=0.99,\n n_steps=None,\n return_separate=True)\n \n# (you can change them more or less harmlessly, this usually just makes learning faster/slower)\n# also regularize to prioritize exploration\nreg_logits = T.mean(logits_seq**2)\nreg_entropy = T.mean(T.sum(policy_seq*logpolicy_seq,axis=-1))\n\n#add-up loss components with magic numbers \nloss = 0.1*elwise_actor_loss.mean() +\\\n 0.25*elwise_critic_loss.mean() +\\\n 1e-3*reg_entropy +\\\n 1e-3*reg_logits\n\n \n\n\n# Compute weight updates, clip by norm\ngrads = T.grad(loss,weights)\ngrads = lasagne.updates.total_norm_constraint(grads,10)\n\nupdates = lasagne.updates.adam(grads, weights,1e-4)\n\n#compile train function\ntrain_step = theano.function([],loss,updates=auto_updates+updates)", "Demo run", "untrained_reward = np.mean(pool.evaluate(save_path=\"./records\",\n record_video=True))\n\n#show video\nfrom IPython.display import HTML\nimport os\n\nvideo_names = list(filter(lambda s:s.endswith(\".mp4\"),os.listdir(\"./records/\")))\n\nHTML(\"\"\"\n<video width=\"640\" height=\"480\" controls>\n <source src=\"{}\" type=\"video/mp4\">\n</video>\n\"\"\".format(\"./records/\"+video_names[-1])) #this may or may not be _last_ video. 
Try other indices", "Training loop", "#starting epoch\nepoch_counter = 1\n\n#full game rewards\nrewards = {}\nloss,reward_per_tick,reward =0,0,0\n\nfrom tqdm import trange\nfrom IPython.display import clear_output\n\n#the algorithm almost converges by 15k iterations, 50k is for full convergence\nfor i in trange(150000): \n \n #play\n pool.update(SEQ_LENGTH)\n\n #train\n loss = 0.95*loss + 0.05*train_step()\n \n \n if epoch_counter%10==0:\n #average reward per game tick in current experience replay pool\n reward_per_tick = 0.95*reward_per_tick + 0.05*pool.experience_replay.rewards.get_value().mean()\n print(\"iter=%i\\tloss=%.3f\\treward/tick=%.3f\"%(epoch_counter,\n loss,\n reward_per_tick))\n \n ##record current learning progress and show learning curves\n if epoch_counter%100 ==0:\n reward = 0.95*reward + 0.05*np.mean(pool.evaluate(record_video=False))\n rewards[epoch_counter] = reward\n \n clear_output(True)\n plt.plot(*zip(*sorted(rewards.items(),key=lambda (t,r):t)))\n plt.show()\n \n\n \n epoch_counter +=1\n\n \n# Time to drink some coffee!", "Evaluating results\n\nHere we plot learning curves and sample testimonials", "import pandas as pd\nplt.plot(*zip(*sorted(rewards.items(),key=lambda k:k[0])))\n\nfrom agentnet.utils.persistence import save\nsave(action_layer,\"kung_fu.pcl\")\n\n###LOAD FROM HERE\nfrom agentnet.utils.persistence import load\nload(action_layer,\"kung_fu.pcl\")\n\nrw = pool.evaluate(n_games=20,save_path=\"./records\",record_video=True)\nprint(\"mean session score=%f.5\"%np.mean(rw))\n\n#show video\nfrom IPython.display import HTML\nimport os\n\nvideo_names = list(filter(lambda s:s.endswith(\".mp4\"),os.listdir(\"./records/\")))\n\nHTML(\"\"\"\n<video width=\"640\" height=\"480\" controls>\n <source src=\"{}\" type=\"video/mp4\">\n</video>\n\"\"\".format(\"./records/\"+video_names[-1])) #this may or may not be _last_ video. Try other indices", "How to enhance\n\nMore parallel agents\nDifferent constructs for recurrent memory\nTry PGQ-like algorithms\nMaybe tune parameters for regularization" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
xMyrst/BigData
python/ejercicios/ucm_diccionarios_02_ej.ipynb
gpl-3.0
[ "Ejercicios 3\n1 Ejercicio\nEscribe una expresión Python para recuperar el valor del elemento con clave 'Hola' del un diccionario d.\n* Comprueba que si d es {} la ejecución produce un error.\n* ¿ Y si d es {'Hola': ['Hi','Hello'], 'Adios': ['Bye']} ?", "d1 = { }\nd2 = {'Hola': ['Hi','Hello'], 'Adios': ['Bye'] }\n# Sol: \n\nd2['Hola']", "2 Ejercicio\nDados dos diccionarios d1 y d2, escribe una función en Python llamada fusion que realice la fusión de los dos diccionarios pasados como parámetros. Puedes utilizar la función update.\n\n\nPrueba la función con los diccionarios d1 = {1: 'A', 2:'B', 3:'C'}, \nd2 = {4: 'Aa', 5:'Ba', 6:'Ca'}\n\n\nUtiliza la función len para recuperar el número de elementos del nuevo diccionario\n\n\nPrueba la función con los diccionarios d1 = {1: 'A', 2:'B', 3:'C'}, \nd2 = {2: 'Aa', 3:'Ba'}\n\n\nUtiliza la función len para recuperar el número de elementos del nuevo diccionario", "# Sol:\ndef fusion():\n d1 = {1: 'A', 2:'B', 3:'C'}\n d2 = {4: 'Aa', 5:'Ba', 6:'Ca'}\n d1.update(d2)\n return d1\nfusion()", "3 Ejercicio\nDada la lista de las ciudades más pobladas de Italia it:\nit = [ 'Roma', 'Milán', 'Nápoles', 'Turín', 'Palermo' , 'Génova',\n 'Bolonia', 'Florencia', 'Bari', 'Catania']\n\n\nCrea un diccionario donde la clave sea la posición que ocupa cada ciudad en la lista. Para hacerlo sigue estas indicaciones:\n\nCrea una secuencia de enteros mediante la función range. El inicio de la secuencia es el cero y el fin de la secuencia es la longitud de la lista de poblaciones de Italia.\nCrea una lista m de tuplas del tipo (pos, ciudad). Utiliza la función zip.\nUtiliza la función dict para construir el diccionario a partir de la lista m.\n\n\n\nEscribe una expresión Python para recuperar la quinta ciudad italiana más poblada.", "# Sol:\n# Definimos la lista con las ciudades que aparece en el enunciado\nit = [ 'Roma', 'Milán', 'Nápoles', 'Turín', 'Palermo' , 'Génova', 'Bolonia', 'Florencia', 'Bari', 'Catania', 'Verona']\n# Definimos una variable para almacenar una lista que crearemos a partir de un rango [0, longitud de la lista)\n# Si no especificamos inicio, el rango comienza en 0 y termina en 10\n# Si se especifica el inicio en 1, hay que sumarle +1 a la longitud de la lista a modo de offset\npos_ciudad = range(1, len(it)+1)\nresultado = list(zip(pos_ciudad, it))\nresultado\n\ndic = dict(resultado)\ndic", "" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
AllenDowney/ThinkBayes2
soln/chap19.ipynb
mit
[ "MCMC\nThink Bayes, Second Edition\nCopyright 2020 Allen B. Downey\nLicense: Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)", "# If we're running on Colab, install libraries\n\nimport sys\nIN_COLAB = 'google.colab' in sys.modules\n\nif IN_COLAB:\n !pip install empiricaldist\n\n# Get utils.py\n\nfrom os.path import basename, exists\n\ndef download(url):\n filename = basename(url)\n if not exists(filename):\n from urllib.request import urlretrieve\n local, _ = urlretrieve(url, filename)\n print('Downloaded ' + local)\n \ndownload('https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py')\n\nfrom utils import set_pyplot_params\nset_pyplot_params()", "For most of this book we've been using grid methods to approximate posterior distributions.\nFor models with one or two parameters, grid algorithms are fast and the results are precise enough for most practical purposes.\nWith three parameters, they start to be slow, and with more than three they are usually not practical.\nIn the previous chapter we saw that we can solve some problems using conjugate priors.\nBut the problems we can solve this way tend to be the same ones we can solve with grid algorithms.\nFor problems with more than a few parameters, the most powerful tool we have is MCMC, which stands for \"Markov chain Monte Carlo\".\nIn this context, \"Monte Carlo\" refers to to methods that generate random samples from a distribution.\nUnlike grid methods, MCMC methods don't try to compute the posterior distribution; they sample from it instead.\nIt might seem strange that you can generate a sample without ever computing the distribution, but that's the magic of MCMC.\nTo demonstrate, we'll start by solving the World Cup problem.\nYes, again.\nThe World Cup Problem\nIn <<_PoissonProcesses>> we modeled goal scoring in football (soccer) as a Poisson process characterized by a goal-scoring rate, denoted $\\lambda$.\nWe used a gamma distribution to represent the prior distribution of $\\lambda$, then we used the outcome of the game to compute the posterior distribution for both teams.\nTo answer the first question, we used the posterior distributions to compute the \"probability of superiority\" for France.\nTo answer the second question, we computed the posterior predictive distributions for each team, that is, the distribution of goals we expect in a rematch.\nIn this chapter we'll solve this problem again using PyMC3, which is a library that provide implementations of several MCMC methods.\nBut we'll start by reviewing the grid approximation of the prior and the prior predictive distribution.\nGrid Approximation\nAs we did in <<_TheGammaDistribution>> we'll use a gamma distribution with parameter $\\alpha=1.4$ to represent the prior.", "from scipy.stats import gamma\n\nalpha = 1.4\nprior_dist = gamma(alpha)", "I'll use linspace to generate possible values for $\\lambda$, and pmf_from_dist to compute a discrete approximation of the prior.", "import numpy as np\nfrom utils import pmf_from_dist\n\nlams = np.linspace(0, 10, 101)\nprior_pmf = pmf_from_dist(prior_dist, lams)", "We can use the Poisson distribution to compute the likelihood of the data; as an example, we'll use 4 goals.", "from scipy.stats import poisson\n\ndata = 4\nlikelihood = poisson.pmf(data, lams)", "Now we can do the update in the usual way.", "posterior = prior_pmf * likelihood\nposterior.normalize()", "Soon we will solve the same problem with PyMC3, but first it will be useful to introduce something new: the prior predictive 
distribution.\nPrior Predictive Distribution\nWe have seen the posterior predictive distribution in previous chapters; the prior predictive distribution is similar except that (as you might have guessed) it is based on the prior.\nTo estimate the prior predictive distribution, we'll start by drawing a sample from the prior.", "sample_prior = prior_dist.rvs(1000)", "The result is an array of possible values for the goal-scoring rate, $\\lambda$.\nFor each value in sample_prior, I'll generate one value from a Poisson distribution.", "from scipy.stats import poisson\n\nsample_prior_pred = poisson.rvs(sample_prior)", "sample_prior_pred is a sample from the prior predictive distribution.\nTo see what it looks like, we'll compute the PMF of the sample.", "from empiricaldist import Pmf\n\npmf_prior_pred = Pmf.from_seq(sample_prior_pred)", "And here's what it looks like:", "from utils import decorate\n\npmf_prior_pred.bar()\ndecorate(xlabel='Number of goals',\n ylabel='PMF',\n title='Prior Predictive Distribution')", "One reason to compute the prior predictive distribution is to check whether our model of the system seems reasonable.\nIn this case, the distribution of goals seems consistent with what we know about World Cup football.\nBut in this chapter we have another reason: computing the prior predictive distribution is a first step toward using MCMC.\nIntroducing PyMC3\nPyMC3 is a Python library that provides several MCMC methods.\nTo use PyMC3, we have to specify a model of the process that generates the data.\nIn this example, the model has two steps:\n\n\nFirst we draw a goal-scoring rate from the prior distribution,\n\n\nThen we draw a number of goals from a Poisson distribution.\n\n\nHere's how we specify this model in PyMC3:", "import pymc3 as pm\n\nwith pm.Model() as model:\n lam = pm.Gamma('lam', alpha=1.4, beta=1.0)\n goals = pm.Poisson('goals', lam)", "After importing pymc3, we create a Model object named model.\nIf you are not familiar with the with statement in Python, it is a way to associate a block of statements with an object.\nIn this example, the two indented statements are associated with the new Model object. 
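(If the with statement itself is unfamiliar, here is a generic sketch of the same pattern outside PyMC3, added for illustration: the object produced by the with expression is active for the indented block and cleaned up automatically afterwards.)\n\n```python\n# generic context-manager example, unrelated to PyMC3\nwith open('notes.txt', 'w') as f:\n f.write('hello') # f is usable inside the indented block\n# the file is closed automatically when the block ends\n```\n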
As a result, when we create the distribution objects, Gamma and Poisson, they are added to the Model.\nInside the with statement:\n\n\nThe first line creates the prior, which is a gamma distribution with the given parameters.\n\n\nThe second line creates the prior predictive, which is a Poisson distribution with the parameter lam.\n\n\nThe first parameter of Gamma and Poisson is a string variable name.\nPyMC3 provides a function that generates a visual representation of the model.", "pm.model_to_graphviz(model)", "In this visualization, the ovals show that lam is drawn from a gamma distribution and goals is drawn from a Poisson distribution.\nThe arrow shows that the values of lam are used as parameters for the distribution of goals.\nSampling the Prior\nPyMC3 provides a function that generates samples from the prior and prior predictive distributions.\nWe can use a with statement to run this function in the context of the model.", "with model:\n trace = pm.sample_prior_predictive(1000)", "The result is a dictionary-like object that maps from the variables, lam and goals, to the samples.\nWe can extract the sample of lam like this:", "sample_prior_pymc = trace['lam']\nsample_prior_pymc.shape", "The following figure compares the CDF of this sample to the CDF of the sample we generated using the gamma object from SciPy.", "from empiricaldist import Cdf\n\ndef plot_cdf(sample, **options):\n \"\"\"Plot the CDF of a sample.\n \n sample: sequence of quantities\n \"\"\"\n Cdf.from_seq(sample).plot(**options)\n\nplot_cdf(sample_prior, \n label='SciPy sample',\n color='C5')\nplot_cdf(sample_prior_pymc, \n label='PyMC3 sample',\n color='C0')\ndecorate(xlabel=r'Goals per game ($\\lambda$)',\n ylabel='CDF',\n title='Prior distribution')", "The results are similar, which confirms that the specification of the model is correct and the sampler works as advertised.\nFrom the trace we can also extract goals, which is a sample from the prior predictive distribution.", "sample_prior_pred_pymc = trace['goals']\nsample_prior_pred_pymc.shape", "And we can compare it to the sample we generated using the poisson object from SciPy.\nBecause the quantities in the posterior predictive distribution are discrete (number of goals) I'll plot the CDFs as step functions.", "def plot_pred(sample, **options):\n Cdf.from_seq(sample).step(**options)\n\nplot_pred(sample_prior_pred, \n label='SciPy sample', \n color='C5')\nplot_pred(sample_prior_pred_pymc, \n label='PyMC3 sample', \n color='C13')\ndecorate(xlabel='Number of goals',\n ylabel='PMF',\n title='Prior Predictive Distribution')", "Again, the results are similar, so we have some confidence we are using PyMC3 right.\nWhen Do We Get to Inference?\nFinally, we are ready for actual inference. 
We just have to make one small change.\nHere is the model we used to generate the prior predictive distribution:", "with pm.Model() as model:\n lam = pm.Gamma('lam', alpha=1.4, beta=1.0)\n goals = pm.Poisson('goals', lam)", "And here is the model we'll use to compute the posterior distribution.", "with pm.Model() as model2:\n lam = pm.Gamma('lam', alpha=1.4, beta=1.0)\n goals = pm.Poisson('goals', lam, observed=4)", "The difference is that we mark goals as observed and provide the observed data, 4.\nAnd instead of calling sample_prior_predictive, we'll call sample, which is understood to sample from the posterior distribution of lam.", "options = dict(return_inferencedata=False)\n\nwith model2:\n trace2 = pm.sample(500, **options)", "Although the specification of these models is similar, the sampling process is very different.\nI won't go into the details of how PyMC3 works, but here are a few things you should be aware of:\n\n\nDepending on the model, PyMC3 uses one of several MCMC methods; in this example, it uses the No U-Turn Sampler (NUTS), which is one of the most efficient and reliable methods we have.\n\n\nWhen the sampler starts, the first values it generates are usually not a representative sample from the posterior distribution, so these values are discarded. This process is called \"tuning\".\n\n\nInstead of using a single Markov chain, PyMC3 uses multiple chains. Then we can compare results from multiple chains to make sure they are consistent.\n\n\nAlthough we asked for a sample of 500, PyMC3 generated two samples of 1000, discarded half of each, and returned the remaining 1000.\nFrom trace2 we can extract a sample from the posterior distribution, like this:", "sample_post_pymc = trace2['lam']\n\nsample_post_pymc.shape", "And we can compare the CDF of this sample to the posterior we computed by grid approximation:", "posterior.make_cdf().plot(label='posterior grid', \n color='C5')\nplot_cdf(sample_post_pymc, \n label='PyMC3 sample',\n color='C4')\n\ndecorate(xlabel=r'Goals per game ($\\lambda$)',\n ylabel='CDF',\n title='Posterior distribution')", "The results from PyMC3 are consistent with the results from the grid approximation.\nPosterior Predictive Distribution\nFinally, to sample from the posterior predictive distribution, we can use sample_posterior_predictive:", "with model2:\n post_pred = pm.sample_posterior_predictive(trace2)", "The result is a dictionary that contains a sample of goals.", "sample_post_pred_pymc = post_pred['goals']\n\nsample_post_pred_pymc.shape", "I'll also generate a sample from the posterior distribution we computed by grid approximation.", "sample_post = posterior.sample(1000)\nsample_post_pred = poisson(sample_post).rvs()", "And we can compare the two samples.", "plot_pred(sample_post_pred, \n label='grid sample',\n color='C5')\nplot_pred(sample_post_pred_pymc, \n label='PyMC3 sample',\n color='C12')\n\ndecorate(xlabel='Number of goals',\n ylabel='PMF',\n title='Posterior Predictive Distribution')", "Again, the results are consistent.\nSo we've established that we can compute the same results using a grid approximation or PyMC3.\nBut it might not be clear why.\nIn this example, the grid algorithm requires less computation than MCMC, and the result is a pretty good approximation of the posterior distribution, rather than a sample.\nHowever, this is a simple model with just one parameter.\nIn fact, we could have solved it with even less computation, using a conjugate prior.\nThe power of PyMC3 will be clearer with a more complex 
model.\nHappiness\nRecently I read \"Happiness and Life Satisfaction\"\nby Esteban Ortiz-Ospina and Max Roser, which discusses (among many other things) the relationship between income and happiness, both between countries, within countries, and over time.\nIt cites the \"World Happiness Report\", which includes results of a multiple regression analysis that explores the relationship between happiness and six potentially predictive factors:\n\n\nIncome as represented by per capita GDP\n\n\nSocial support\n\n\nHealthy life expectancy at birth\n\n\nFreedom to make life choices\n\n\nGenerosity\n\n\nPerceptions of corruption\n\n\nThe dependent variable is the national average of responses to the \"Cantril ladder question\" used by the Gallup World Poll:\n\nPlease imagine a ladder with steps numbered from zero at the bottom to 10 at the top. The top of the ladder represents the best possible life for you and the bottom of the ladder represents the worst possible life for you. On which step of the ladder would you say you personally feel you stand at this time?\n\nI'll refer to the responses as \"happiness\", but it might be more precise to think of them as a measure of satisfaction with quality of life.\nIn the next few sections we'll replicate the analysis in this report using Bayesian regression.\nThe data from this report can be downloaded from here.", "# Get the data file\n\ndownload('https://happiness-report.s3.amazonaws.com/2020/WHR20_DataForFigure2.1.xls')", "We can use Pandas to read the data into a DataFrame.", "import pandas as pd\n\nfilename = 'WHR20_DataForFigure2.1.xls'\ndf = pd.read_excel(filename)\n\ndf.head(3)\n\ndf.shape", "The DataFrame has one row for each of 153 countries and one column for each of 20 variables.\nThe column called 'Ladder score' contains the measurements of happiness we will try to predict.", "score = df['Ladder score']", "Simple Regression\nTo get started, let's look at the relationship between happiness and income as represented by gross domestic product (GDP) per person.\nThe column named 'Logged GDP per capita' represents the natural logarithm of GDP for each country, divided by population, corrected for purchasing power parity (PPP).", "log_gdp = df['Logged GDP per capita']", "The following figure is a scatter plot of score versus log_gdp, with one marker for each country.", "import matplotlib.pyplot as plt\n\nplt.plot(log_gdp, score, '.')\n\ndecorate(xlabel='Log GDP per capita at PPP',\n ylabel='Happiness ladder score')", "It's clear that there is a relationship between these variables: people in countries with higher GDP generally report higher levels of happiness.\nWe can use linregress from SciPy to compute a simple regression of these variables.", "from scipy.stats import linregress\n\nresult = linregress(log_gdp, score)", "And here are the results.", "pd.DataFrame([result.slope, result.intercept],\n index=['Slope', 'Intercept'],\n columns=[''])", "The estimated slope is about 0.72, which suggests that an increase of one unit in log-GDP, which is a factor of $e \\approx 2.7$ in GDP, is associated with an increase of 0.72 units on the happiness ladder.\nNow let's estimate the same parameters using PyMC3.\nWe'll use the same regression model as in Section <<_RegressionModel>>:\n$$y = a x + b + \\epsilon$$\nwhere $y$ is the dependent variable (ladder score), $x$ is the predictive variable (log GDP) and $\\epsilon$ is a series of values from a normal distribution with standard deviation $\\sigma$.\n$a$ and $b$ are the slope and intercept of the 
regression line.\nThey are unknown parameters, so we will use the data to estimate them.\nThe following is the PyMC3 specification of this model.", "x_data = log_gdp\ny_data = score\n\nwith pm.Model() as model3:\n a = pm.Uniform('a', 0, 4)\n b = pm.Uniform('b', -4, 4)\n sigma = pm.Uniform('sigma', 0, 2)\n\n y_est = a * x_data + b\n y = pm.Normal('y', \n mu=y_est, sd=sigma, \n observed=y_data)", "The prior distributions for the parameters a, b, and sigma are uniform with ranges that are wide enough to cover the posterior distributions.\ny_est is the estimated value of the dependent variable, based on the regression equation.\nAnd y is a normal distribution with mean y_est and standard deviation sigma.\nNotice how the data are included in the model:\n\n\nThe values of the predictive variable, x_data, are used to compute y_est.\n\n\nThe values of the dependent variable, y_data, are provided as the observed values of y.\n\n\nNow we can use this model to generate a sample from the posterior distribution.", "with model3:\n trace3 = pm.sample(500, **options)", "When you run the sampler, you might get warning messages about \"divergences\" and the \"acceptance probability\".\nYou can ignore them for now.\nThe result is an object that contains samples from the joint posterior distribution of a, b, and sigma.", "trace3", "ArviZ provides plot_posterior, which we can use to plot the posterior distributions of the parameters.\nHere are the posterior distributions of slope, a, and intercept, b.", "import arviz as az\n\nwith model3:\n az.plot_posterior(trace3, var_names=['a', 'b']);", "The graphs show the distributions of the samples, estimated by KDE, and 94% credible intervals. In the figure, \"HDI\" stands for \"highest-density interval\".\nThe means of these samples are consistent with the parameters we estimated with linregress.", "print('Sample mean:', trace3['a'].mean())\nprint('Regression slope:', result.slope)\n\nprint('Sample mean:', trace3['b'].mean())\nprint('Regression intercept:', result.intercept)", "Finally, we can check the marginal posterior distribution of sigma", "az.plot_posterior(trace3['sigma']);", "The values in the posterior distribution of sigma seem plausible.\nThe simple regression model has only three parameters, so we could have used a grid algorithm.\nBut the regression model in the happiness report has six predictive variables, so it has eight parameters in total, including the intercept and sigma.\nIt is not practical to compute a grid approximation for a model with eight parameters.\nEven a coarse grid, with 20 points along each dimension, would have more than 25 billion points.\nAnd with 153 countries, we would have to compute almost 4 trillion likelihoods.\nBut PyMC3 can handle a model with eight parameters comfortably, as we'll see in the next section.", "20 ** 8 / 1e9\n\n153 * 20 ** 8 / 1e12", "Multiple Regression\nBefore we implement the multiple regression model, I'll select the columns we need from the DataFrame.", "columns = ['Ladder score',\n 'Logged GDP per capita',\n 'Social support',\n 'Healthy life expectancy',\n 'Freedom to make life choices',\n 'Generosity',\n 'Perceptions of corruption']\n\nsubset = df[columns]\n\nsubset.head(3)", "The predictive variables have different units: log-GDP is in log-dollars, life expectancy is in years, and the other variables are on arbitrary scales.\nTo make these factors comparable, I'll standardize the data so that each variable has mean 0 and standard deviation 1.", "standardized = (subset - subset.mean()) / 
subset.std()", "Now let's build the model.\nI'll extract the dependent variable.", "y_data = standardized['Ladder score']", "And the dependent variables.", "x1 = standardized[columns[1]]\nx2 = standardized[columns[2]]\nx3 = standardized[columns[3]]\nx4 = standardized[columns[4]]\nx5 = standardized[columns[5]]\nx6 = standardized[columns[6]]", "And here's the model. b0 is the intercept; b1 through b6 are the parameters associated with the predictive variables.", "with pm.Model() as model4:\n b0 = pm.Uniform('b0', -4, 4)\n b1 = pm.Uniform('b1', -4, 4)\n b2 = pm.Uniform('b2', -4, 4)\n b3 = pm.Uniform('b3', -4, 4)\n b4 = pm.Uniform('b4', -4, 4)\n b5 = pm.Uniform('b5', -4, 4)\n b6 = pm.Uniform('b6', -4, 4)\n sigma = pm.Uniform('sigma', 0, 2)\n\n y_est = b0 + b1*x1 + b2*x2 + b3*x3 + b4*x4 + b5*x5 + b6*x6\n y = pm.Normal('y', \n mu=y_est, sd=sigma, \n observed=y_data)", "We could express this model more concisely using a vector of predictive variables and a vector of parameters, but I decided to keep it simple.\nNow we can sample from the joint posterior distribution.", "with model4:\n trace4 = pm.sample(500, **options)", "Because we standardized the data, we expect the intercept to be 0, and in fact the posterior mean of b0 is close to 0.", "trace4['b0'].mean()", "We can also check the posterior mean of sigma:", "trace4['sigma'].mean()", "From trace4 we can extract samples from the posterior distributions of the parameters and compute their means.", "param_names = ['b1', 'b3', 'b3', 'b4', 'b5', 'b6']\n\nmeans = [trace4[name].mean() \n for name in param_names]", "We can also compute 94% credible intervals (between the 3rd and 97th percentiles).", "def credible_interval(sample):\n \"\"\"Compute 94% credible interval.\"\"\"\n ci = np.percentile(sample, [3, 97])\n return np.round(ci, 3)\n\ncis = [credible_interval(trace4[name])\n for name in param_names]", "The following table summarizes the results.", "index = columns[1:]\ntable = pd.DataFrame(index=index)\ntable['Posterior mean'] = np.round(means, 3)\ntable['94% CI'] = cis\ntable", "It looks like GDP has the strongest association with happiness (or satisfaction), followed by social support, life expectancy, and freedom.\nAfter controlling for those other factors, the parameters of the other factors are substantially smaller, and since the CI for generosity includes 0, it is plausible that generosity is not substantially related to happiness, at least as they were measured in this study.\nThis example demonstrates the power of MCMC to handle models with more than a few parameters.\nBut it does not really demonstrate the power of Bayesian regression.\nIf the goal of a regression model is to estimate parameters, there is no great advantage to Bayesian regression compared to conventional least squares regression.\nBayesian methods are more useful if we plan to use the posterior distribution of the parameters as part of a decision analysis process.\nSummary\nIn this chapter we used PyMC3 to implement two models we've seen before: a Poisson model of goal-scoring in soccer and a simple regression model.\nThen we implemented a multiple regression model that would not have been possible to compute with a grid approximation.\nMCMC is more powerful than grid methods, but that power comes with some disadvantages:\n\n\nMCMC algorithms are fiddly. The same model might behave well with some priors and less well with others. 
And the sampling process often produces warnings about tuning steps, divergences, \"r-hat statistics\", acceptance rates, and effective samples. It takes some expertise to diagnose and correct these issues.\n\n\nI find it easier to develop models incrementally using grid algorithms, checking intermediate results along the way. With PyMC3, it is not as easy to be confident that you have specified a model correctly.\n\n\nFor these reasons, I recommend a model development process that starts with grid algorithms and resorts to MCMC if necessary.\nAs we saw in the previous chapters, you can solve a lot of real-world problems with grid methods.\nBut when you need MCMC, it is useful to have a grid algorithm to compare to (even if it is based on a simpler model).\nAll of the models in this book can be implemented in PyMC3, but some of them are easier to translate than others.\nIn the exercises, you will have a chance to practice.\nExercises\nExercise: As a warmup, let's use PyMC3 to solve the Euro problem.\nSuppose we spin a coin 250 times and it comes up heads 140 times.\nWhat is the posterior distribution of $x$, the probability of heads?\nFor the prior, use a beta distribution with parameters $\\alpha=1$ and $\\beta=1$.\nSee the PyMC3 documentation for the list of continuous distributions.", "# Solution\n\nn = 250\nk_obs = 140\n\nwith pm.Model() as model5:\n x = pm.Beta('x', alpha=1, beta=1)\n k = pm.Binomial('k', n=n, p=x, observed=k_obs)\n trace5 = pm.sample(500, **options)\n az.plot_posterior(trace5)", "Exercise: Now let's use PyMC3 to replicate the solution to the Grizzly Bear problem in <<_TheGrizzlyBearProblem>>, which is based on the hypergeometric distribution.\nI'll present the problem with slightly different notation, to make it consistent with PyMC3.\nSuppose that during the first session, k=23 bears are tagged. During the second session, n=19 bears are identified, of which x=4 had been tagged.\nEstimate the posterior distribution of N, the number of bears in the environment.\nFor the prior, use a discrete uniform distribution from 50 to 500.\nSee the PyMC3 documentation for the list of discrete distributions.\nNote: HyperGeometric was added to PyMC3 after version 3.8, so you might need to update your installation to do this exercise.", "# Solution\n\nk = 23\nn = 19\nx = 4\n\nwith pm.Model() as model6:\n N = pm.DiscreteUniform('N', 50, 500)\n y = pm.HyperGeometric('y', N=N, k=k, n=n, observed=x)\n trace6 = pm.sample(1000, **options)\n az.plot_posterior(trace6)", "Exercise: In <<_TheWeibullDistribution>> we generated a sample from a Weibull distribution with $\\lambda=3$ and $k=0.8$.\nThen we used the data to compute a grid approximation of the posterior distribution of those parameters.\nNow let's do the same with PyMC3.\nFor the priors, you can use uniform distributions as we did in <<_SurvivalAnalysis>>, or you could use HalfNormal distributions provided by PyMC3.\nNote: The Weibull class in PyMC3 uses different parameters than SciPy. 
The parameter alpha in PyMC3 corresponds to $k$, and beta corresponds to $\\lambda$.\nHere's the data again:", "data = [0.80497283, 2.11577082, 0.43308797, 0.10862644, 5.17334866,\n 3.25745053, 3.05555883, 2.47401062, 0.05340806, 1.08386395]\n\n# Solution\n\nwith pm.Model() as model7:\n lam = pm.Uniform('lam', 0.1, 10.1)\n k = pm.Uniform('k', 0.1, 5.1)\n y = pm.Weibull('y', alpha=k, beta=lam, observed=data)\n trace7 = pm.sample(1000, **options)\n az.plot_posterior(trace7)", "Exercise: In <<_ImprovingReadingAbility>> we used data from a reading test to estimate the parameters of a normal distribution.\nMake a model that defines uniform prior distributions for mu and sigma and uses the data to estimate their posterior distributions.\nHere's the data again.", "download('https://github.com/AllenDowney/ThinkBayes2/raw/master/data/drp_scores.csv')\n\nimport pandas as pd\n\ndf = pd.read_csv('drp_scores.csv', skiprows=21, delimiter='\\t')\ndf.head()", "I'll use groupby to separate the treated group from the control group.", "grouped = df.groupby('Treatment')\nresponses = {}\n\nfor name, group in grouped:\n responses[name] = group['Response']", "Now estimate the parameters for the treated group.", "data = responses['Treated']\n\n# Solution\n\nwith pm.Model() as model8:\n mu = pm.Uniform('mu', 20, 80)\n sigma = pm.Uniform('sigma', 5, 30)\n y = pm.Normal('y', mu, sigma, observed=data)\n trace8 = pm.sample(500, **options)\n\n# Solution\n\nwith model8:\n az.plot_posterior(trace8)", "Exercise: In <<_TheLincolnIndexProblem>> we used a grid algorithm to solve the Lincoln Index problem as presented by John D. Cook:\n\n\"Suppose you have a tester who finds 20 bugs in your program. You want to estimate how many bugs are really in the program. You know there are at least 20 bugs, and if you have supreme confidence in your tester, you may suppose there are around 20 bugs. But maybe your tester isn't very good. Maybe there are hundreds of bugs. How can you have any idea how many bugs there are? There's no way to know with one tester. But if you have two testers, you can get a good idea, even if you don't know how skilled the testers are.\"\n\nSuppose the first tester finds 20 bugs, the second finds 15, and they\nfind 3 in common; use PyMC3 to estimate the number of bugs.\nNote: This exercise is more difficult that some of the previous ones. 
One of the challenges is that the data includes k00, which depends on N:\nk00 = N - num_seen\nSo we have to construct the data as part of the model.\nTo do that, we can use pm.math.stack, which makes an array:\ndata = pm.math.stack((k00, k01, k10, k11))\nFinally, you might find it helpful to use pm.Multinomial.\nI'll use the following notation for the data:\n\n\nk11 is the number of bugs found by both testers,\n\n\nk10 is the number of bugs found by the first tester but not the second,\n\n\nk01 is the number of bugs found by the second tester but not the first, and\n\n\nk00 is the unknown number of undiscovered bugs.\n\n\nHere are the values for all but k00:", "k10 = 20 - 3\nk01 = 15 - 3\nk11 = 3", "In total, 32 bugs have been discovered:", "num_seen = k01 + k10 + k11\nnum_seen\n\n# Solution\n\nwith pm.Model() as model9:\n p0 = pm.Beta('p0', alpha=1, beta=1)\n p1 = pm.Beta('p1', alpha=1, beta=1)\n N = pm.DiscreteUniform('N', num_seen, 350)\n \n q0 = 1-p0\n q1 = 1-p1\n ps = [q0*q1, q0*p1, p0*q1, p0*p1]\n \n k00 = N - num_seen\n data = pm.math.stack((k00, k01, k10, k11))\n y = pm.Multinomial('y', n=N, p=ps, observed=data)\n\n# Solution\n\nwith model9:\n trace9 = pm.sample(1000, **options)\n\n# Solution\n\nwith model9:\n az.plot_posterior(trace9)" ]
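A quick, hedged sanity check for the Lincoln Index solution above: the classical Lincoln-Petersen point estimate is (bugs found by tester 1) times (bugs found by tester 2) divided by (bugs found by both), that is 20 * 15 / 3 = 100. The sketch below assumes the trace9 sample from the solution cells is available in the session; the posterior mean will not match the point estimate exactly, since it also reflects the uniform prior on N, but it should be of the same order.

```python
# Classical Lincoln-Petersen estimate of the total number of bugs
lincoln_petersen = 20 * 15 / 3          # = 100.0

# Posterior mean of N from the PyMC3 sample (assumes trace9 exists)
posterior_mean_N = trace9['N'].mean()

print(lincoln_petersen, posterior_mean_N)
```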
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
nouiz/summerschool2015
convnets/lenet.ipynb
bsd-3-clause
[ "This notebook demonstrates the LeNet model.\nFirst we load some dependencies for our code.", "import numpy\n\nimport theano\nimport theano.tensor as T\n\nfrom logistic_sgd import LogisticRegression\nfrom mlp import HiddenLayer", "Now we can start to define the actual convolution code. We start by defining an object that represents a single layer of convolution that does the actual convolution operation followed by pooling over the output of that convolution. These layers will be stacked in the final model.", "from theano.tensor.signal import downsample\nfrom theano.tensor.nnet import conv\n\nclass LeNetConvPoolLayer(object):\n def __init__(self, rng, input, filter_shape, image_shape, poolsize=(2, 2)):\n assert image_shape[1] == filter_shape[1]\n self.input = input\n\n # there are \"num input feature maps * filter height * filter width\"\n # inputs to each hidden unit\n fan_in = numpy.prod(filter_shape[1:])\n # each unit in the lower layer receives a gradient from:\n # \"num output feature maps * filter height * filter width\" / pooling size\n fan_out = (filter_shape[0] * numpy.prod(filter_shape[2:]) /\n numpy.prod(poolsize))\n # initialize weights with random weights\n W_bound = numpy.sqrt(6. / (fan_in + fan_out))\n self.W = theano.shared(\n numpy.asarray(\n rng.uniform(low=-W_bound, high=W_bound, size=filter_shape),\n dtype=theano.config.floatX\n ),\n borrow=True\n )\n\n # the bias is a 1D tensor -- one bias per output feature map\n b_values = numpy.zeros((filter_shape[0],), dtype=theano.config.floatX)\n self.b = theano.shared(value=b_values, borrow=True)\n\n # convolve input feature maps with filters\n conv_out = conv.conv2d(\n input=input,\n filters=self.W,\n filter_shape=filter_shape,\n image_shape=image_shape\n )\n\n # downsample each feature map individually, using maxpooling\n pooled_out = downsample.max_pool_2d(\n input=conv_out,\n ds=poolsize,\n ignore_border=True\n )\n\n # add the bias term. Since the bias is a vector (1D array), we first\n # reshape it to a tensor of shape (1, n_filters, 1, 1). 
Each bias will\n # thus be broadcasted across mini-batches and feature map\n # width & height\n self.output = T.tanh(pooled_out + self.b.dimshuffle('x', 0, 'x', 'x'))\n\n # store parameters of this layer\n self.params = [self.W, self.b]", "This next method uses the convolution layer above to make a stack of them and adds a hidden layer followed by a logistic regression classification layer on top.", "import time\n\nimport fuel\nfrom fuel.streams import DataStream\nfrom fuel.schemes import SequentialScheme\nfrom fuel.transformers import Cast\n\nfuel.config.floatX = theano.config.floatX = 'float32'\n\n\ndef evaluate_lenet5(train, test, valid,\n learning_rate=0.1, n_epochs=200,\n nkerns=[20, 50], batch_size=500):\n rng = numpy.random.RandomState(23455)\n\n train_stream = DataStream.default_stream(\n train, iteration_scheme=SequentialScheme(train.num_examples,\n batch_size))\n valid_stream = DataStream.default_stream(\n valid, iteration_scheme=SequentialScheme(valid.num_examples,\n batch_size))\n test_stream = DataStream.default_stream(\n test, iteration_scheme=SequentialScheme(test.num_examples,\n batch_size))\n\n x = T.tensor4('x')\n yi = T.imatrix('y')\n y = yi.reshape((yi.shape[0],))\n\n # Construct the first convolutional pooling layer:\n # filtering reduces the image size to (28-5+1 , 28-5+1) = (24, 24)\n # maxpooling reduces this further to (24/2, 24/2) = (12, 12)\n # 4D output tensor is thus of shape (batch_size, nkerns[0], 12, 12)\n layer0 = LeNetConvPoolLayer(\n rng,\n input=x,\n image_shape=(batch_size, 1, 28, 28),\n filter_shape=(nkerns[0], 1, 5, 5),\n poolsize=(2, 2)\n )\n\n # Construct the second convolutional pooling layer\n # filtering reduces the image size to (12-5+1, 12-5+1) = (8, 8)\n # maxpooling reduces this further to (8/2, 8/2) = (4, 4)\n # 4D output tensor is thus of shape (batch_size, nkerns[1], 4, 4)\n layer1 = LeNetConvPoolLayer(\n rng,\n input=layer0.output,\n image_shape=(batch_size, nkerns[0], 12, 12),\n filter_shape=(nkerns[1], nkerns[0], 5, 5),\n poolsize=(2, 2)\n )\n\n # the HiddenLayer being fully-connected, it operates on 2D matrices of\n # shape (batch_size, num_pixels) (i.e matrix of rasterized images).\n # This will generate a matrix of shape (batch_size, nkerns[1] * 4 * 4),\n # or (500, 50 * 4 * 4) = (500, 800) with the default values.\n layer2_input = layer1.output.flatten(2)\n\n # construct a fully-connected sigmoidal layer\n layer2 = HiddenLayer(\n rng,\n input=layer2_input,\n n_in=nkerns[1] * 4 * 4,\n n_out=500,\n activation=T.tanh\n )\n\n # classify the values of the fully-connected sigmoidal layer\n layer3 = LogisticRegression(input=layer2.output, n_in=500, n_out=10)\n\n # the cost we minimize during training is the NLL of the model\n cost = layer3.negative_log_likelihood(y)\n\n # create a function to compute the mistakes that are made by the model\n model_errors = theano.function(\n [x, yi],\n layer3.errors(y)\n )\n\n # create a list of all model parameters to be fit by gradient descent\n params = layer3.params + layer2.params + layer1.params + layer0.params\n\n # create a list of gradients for all model parameters\n grads = T.grad(cost, params)\n\n # train_model is a function that updates the model parameters by\n # SGD Since this model has many parameters, it would be tedious to\n # manually create an update rule for each model parameter. 
We thus\n # create the updates list by automatically looping over all\n # (params[i], grads[i]) pairs.\n updates = [\n (param_i, param_i - learning_rate * grad_i)\n for param_i, grad_i in zip(params, grads)\n ]\n\n train_model = theano.function(\n [x, yi],\n cost,\n updates=updates\n )\n\n # early-stopping parameters\n patience = 10000 # look as this many examples regardless\n patience_increase = 2 # wait this much longer when a new best is found\n\n # a relative improvement of this much is considered significant\n improvement_threshold = 0.995\n\n n_train_batches = (train.num_examples + batch_size - 1) // batch_size\n \n # go through this many minibatches before checking the network on\n # the validation set; in this case we check every epoch\n validation_frequency = min(n_train_batches, patience / 2)\n\n best_validation_loss = numpy.inf\n best_iter = 0\n test_score = 0.\n start_time = time.clock()\n\n epoch = 0\n iter = 0\n done_looping = False\n\n while (epoch < n_epochs) and (not done_looping):\n epoch = epoch + 1\n\n minibatch_index = 0\n for minibatch in train_stream.get_epoch_iterator():\n iter += 1\n minibatch_index += 1\n if iter % 100 == 0:\n print('training @ iter = ', iter)\n\n error = train_model(minibatch[0], minibatch[1])\n\n if (iter + 1) % validation_frequency == 0:\n\n # compute zero-one loss on validation set\n validation_losses = [model_errors(vb[0], vb[1]) for vb\n in valid_stream.get_epoch_iterator()]\n this_validation_loss = numpy.mean(validation_losses)\n print('epoch %i, minibatch %i/%i, validation error %f %%' %\n (epoch, minibatch_index + 1, n_train_batches,\n this_validation_loss * 100.))\n\n # if we got the best validation score until now\n if this_validation_loss < best_validation_loss:\n\n # improve patience if loss improvement is good enough\n if this_validation_loss < best_validation_loss * improvement_threshold:\n patience = max(patience, iter * patience_increase)\n\n # save best validation score and iteration number\n best_validation_loss = this_validation_loss\n best_iter = iter\n\n # test it on the test set\n test_losses = [\n model_errors(tb[0], tb[1])\n for tb in test_stream.get_epoch_iterator()\n ]\n test_score = numpy.mean(test_losses)\n print((' epoch %i, minibatch %i/%i, test error of '\n 'best model %f %%') %\n (epoch, minibatch_index + 1, n_train_batches,\n test_score * 100.))\n\n if patience <= iter:\n done_looping = True\n break\n\n end_time = time.clock()\n print('Optimization complete.')\n print('Best validation score of %f %% obtained at iteration %i, '\n 'with test performance %f %%' %\n (best_validation_loss * 100., best_iter + 1, test_score * 100.))\n print('The code ran for %.2fm' % ((end_time - start_time) / 60.))\n\n # This is to make the pretty pictures in the cells below\n layer0_out = theano.function([x], layer0.output)\n layer1_out = theano.function([x], layer1.output)\n \n return params, layer0_out, layer1_out", "This cell runs the model and allows you to play with a few hyperparameters. The ones below take about 1 to 2 minutes to run.", "from fuel.datasets import MNIST\n\ntrain = MNIST(which_sets=('train',), subset=slice(0, 50000))\nvalid = MNIST(which_sets=('train',), subset=slice(50000, 60000))\ntest = MNIST(which_sets=('test',))\n\nparams, layer0_out, layer1_out = evaluate_lenet5(train, test, valid,\n learning_rate=0.1, n_epochs=10,\n nkerns=[10, 25], batch_size=50)", "For most convolution model it can be interesting to show what the trained filters look like. 
The code below does that from the parameters returned by the training function above. In this model there isn't much of an effect since the filters are 5x5 and we can't see much unfortunately.", "%matplotlib inline\nimport matplotlib.pyplot as plt\n\nfrom utils import tile_raster_images\n\nfilts1 = params[6].get_value()\nfilts2 = params[4].get_value()\n\nplt.clf()\n\n# Increase the size of the figure\nplt.gcf().set_size_inches(15, 10)\n\n# Make a grid for the two layers\ngs = plt.GridSpec(1, 2, width_ratios=[1, 25], height_ratios=[1, 1])\na = plt.subplot(gs[0])\nb = plt.subplot(gs[1])\n\n\n# Show the first layer filters (the small column)\na.imshow(tile_raster_images(filts1.reshape(10, 25), img_shape=(5, 5), tile_shape=(10, 1), tile_spacing=(1,1)),\n cmap=\"Greys\", interpolation=\"none\")\na.axis('off')\n\n# Show the second layer filters (the large block)\nb.imshow(tile_raster_images(filts2.reshape(250, 25), img_shape=(5, 5), tile_shape=(10, 25), tile_spacing=(1,1)),\n cmap=\"Greys\", interpolation=\"none\")\nb.axis('off')\n", "What can also be interesting is to draw the outputs of the filters for an example. This works somewhat better for this model.", "%matplotlib inline\nimport matplotlib.pyplot as plt\n\nfrom utils import tile_raster_images\n\n# Grab some input examples from the test set (we cheat a bit here)\nsample = test.get_data(None, slice(0, 50))[0]\n# We will print this example amongst the batch\nexample = 7\n\nplt.gcf()\n\n# Increase the size of the figure\nplt.gcf().set_size_inches(15, 10)\n\ngs = plt.GridSpec(1, 3, width_ratios=[1, 1, 1], height_ratios=[1, 1, 1])\n\n# Draw the input data\na = plt.subplot(gs[0])\na.imshow(sample[example, 0], cmap=\"Greys\", interpolation='none')\na.axis('off')\n\n# Compute first layer output\nout0 = layer0_out(sample)[example]\n\n# Draw its output\nb = plt.subplot(gs[1])\nb.imshow(tile_raster_images(out0.reshape(10, 144), img_shape=(12, 12), tile_shape=(5, 2), tile_spacing=(1, 1)),\n cmap=\"Greys\", interpolation='none')\nb.axis('off')\n\n# Compute the second layer output\nout1 = layer1_out(sample)[example]\n\n# Draw it\nc = plt.subplot(gs[2])\nc.imshow(tile_raster_images(out1.reshape(25, 16), img_shape=(4, 4), tile_shape=(5, 5), tile_spacing=(1, 1)),\n cmap=\"Greys\", interpolation='none')\nc.axis('off')", "Some things you can try with this model:\n- change the non linearity of the convolution to rectifier unit.\n- add an extra mlp layer.\nIf you break the code too much you can get back to the working initial code by loading the lenet.py file with the cell below. (Or just reset the git repo ...)", "%load lenet.py" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
YAtOff/python0-reloaded
week2/Expressions, variables and errors.ipynb
mit
[ "Изрази\nИзразите в Python са като изразите в математиката.\nВсеки изразе е изграден от сотйности (като напр. числата 1, 2, 3, ...) и оператори (+, -, ...).\nТипове\nВсяка стойност се характеризира с определн тип.\nА типът е:\n- Множеството от стойности\n- Множество от операции, които могат да се извършват с тези стойности\nЦелочислени числа (тип int)\nсотйности | операции\n--- | ---\n ..., -3, -2, -1, 0, 1, 2, 3, ...| +, -, , /, //, %, *\nРеални числа (Числа с плаваща запетая, float)\nсотйности | операции\n--- | ---\n -0.1, -0.11, ..., 0.0, ..., 0.1, ... | +, -, , /, //, %, *\n### Числови низове (тип str)\n сотйности | операции\n--- | ---\n\"hello\", \"goodbye\", ... | +\n## Приоритет на операциите\n1. *\n2. -\n3. , /, //, %\n4. +, -", "2 * 3 + 2\n\n2 * (3 + 2)", "Променливи\nПроменливата е име,с което се асоциира дадена стойност.\nВалидни имена на променливи\n\nИмето на променлива може да съдържа главни и малки букви, цифри и символът _.\nИмето на променлива трябва да започва с буква или _.\nЗа имена на променливи не може да се използват служебни думи от Python.\n\nПрепоръки за именуване на променливи\n\nИмената трябва да са описателни и да обясняват за какво служи\nдадената променлива. Например за име на човек подходящо име е\nperson_name, а неподходящо име е x.\nТрябва да се използват само латински букви.\nВ Python e прието променливите да започват винаги с малка буква и да\nсъдържат само малки букви, като всяка следваща дума в тях е разделе от\nпредходната със символа _.\nИмето на променливите трябва да не е нито много дълго, нито много\nкъсо – просто трябва да е ясно за какво служи променливата в\nконтекста, в който се използва.\nТрябва да се внимава за главни и малки букви, тъй като Python прави\nразлика между тях. Например age и Age са различни променливи.\n\nРабота с променливи", "c = 10 # number of coins - прекалени късо\nnumber_of_coins = 10 # прекалино детайлно име\ncoinsCount = 10 # ОК, но за Java\ncoins_count = 10 # OK\n\n# Задаването на стойност на променлива се нарича `присвояване`\ncount = 1\n\n# Когато Python срещне променлива в израз, той я заменя със стойността и\nprint(count + 1)\n\n# Променливите се наричат променливи, защото стойността им може да се променя\ncount = 2\nprint(count + 1)", "Какво трябва да напишем, за да увеличим стойността на count с 1 (приемете, че не знаем каква е стойността на count)?", "count = 1\ncount = count + 1\nprint(count)", "Грешки", "my var = 1\n\nprice = 1\nprint(pirce)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mndrake/PythonEuler
euler_021_030.ipynb
mit
[ "Amicable numbers\nProblem 21\nLet d(n) be defined as the sum of proper divisors of n (numbers less than n which divide evenly into n).\nIf d(a) = b and d(b) = a, where a ≠ b, then a and b are an amicable pair and each of a and b are called amicable numbers.\nFor example, the proper divisors of 220 are 1, 2, 4, 5, 10, 11, 20, 22, 44, 55 and 110; therefore d(220) = 284. The proper divisors of 284 are 1, 2, 4, 71 and 142; so d(284) = 220.\nEvaluate the sum of all the amicable numbers under 10000.", "from euler import timer, Seq\nfrom math import sqrt\n\ndef d(n):\n return (range(2, int(sqrt(n))+1)\n >> Seq.filter (lambda x: n%x == 0)\n >> Seq.map (lambda x: x if x*x == n else n/x + x)\n >> Seq.sum) + 1\n\ndef isAmicable(a):\n b = d(a)\n return (a == d(b)) and (a <> b)\n\ndef p021():\n return (range(1, 10001)\n >> Seq.filter(isAmicable)\n >> Seq.sum)\n\ntimer(p021)", "Names scores\nProblem 22\nUsing names.txt (right click and 'Save Link/Target As...'), a 46K text file containing over five-thousand first names, begin by sorting it into alphabetical order. Then working out the alphabetical value for each name, multiply this value by its alphabetical position in the list to obtain a name score.\nFor example, when the list is sorted into alphabetical order, COLIN, which is worth 3 + 15 + 12 + 9 + 14 = 53, is the 938th name in the list. So, COLIN would obtain a score of 938 × 53 = 49714.\nWhat is the total of all the name scores in the file?", "from euler import timer, Seq\n\ndef score(s): \n return s >> Seq.map(lambda x: ord(x) - 64) >> Seq.sum\n \ndef p022():\n return (\n open('data/p022.txt').read().split(',')\n >> Seq.map(lambda x: x.strip('\"'))\n >> Seq.sort\n >> Seq.mapi(lambda (i,x): score(x)*(i+1))\n >> Seq.sum)\n\ntimer(p022)", "Non-abundant sums\nProblem 23\nA perfect number is a number for which the sum of its proper divisors is exactly equal to the number. For example, the sum of the proper divisors of 28 would be $1 + 2 + 4 + 7 + 14 = 28$, which means that 28 is a perfect number.\nA number n is called deficient if the sum of its proper divisors is less than n and it is called abundant if this sum exceeds n.\nAs 12 is the smallest abundant number, $1 + 2 + 3 + 4 + 6 = 16$, the smallest number that can be written as the sum of two abundant numbers is 24. By mathematical analysis, it can be shown that all integers greater than 28123 can be written as the sum of two abundant numbers. However, this upper limit cannot be reduced any further by analysis even though it is known that the greatest number that cannot be expressed as the sum of two abundant numbers is less than this limit.\nFind the sum of all the positive integers which cannot be written as the sum of two abundant numbers.", "from euler import FactorInteger, Seq, timer\nfrom operator import mul\n\ndef divisor_sum(n):\n return (\n FactorInteger(n) \n >> Seq.map(lambda (p,a): (p**(a+1) - 1)/(p-1)) \n >> Seq.reduce(mul)\n ) - n\n\ndef p023():\n max_n = 28123\n abundants = range(12, max_n+1) >> Seq.filter(lambda n: n < divisor_sum(n)) >> Seq.toList\n abundant_sums = (abundants \n >> Seq.collect(lambda a: abundants \n >> Seq.map(lambda b: a+b)\n >> Seq.takeWhile(lambda x: x < (max_n+1)))\n >> Seq.toSet)\n return max_n * (max_n + 1) / 2 - sum(abundant_sums)\n\ntimer(p023)", "Lexicographic permutations\nProblem 24\nA permutation is an ordered arrangement of objects. For example, 3124 is one possible permutation of the digits 1, 2, 3 and 4. 
If all of the permutations are listed numerically or alphabetically, we call it lexicographic order. The lexicographic permutations of 0, 1 and 2 are:\n012 021 102 120 201 210\nWhat is the millionth lexicographic permutation of the digits 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9?", "from math import factorial\nfrom euler import timer\n\ndef p024():\n numbers = range(10)\n\n def loop(remainder, acc):\n k = len(numbers) - 1\n if k==0:\n return acc + str(numbers[0])\n else:\n next = numbers[remainder / factorial(k)]\n numbers.remove(next)\n return loop((remainder%(factorial(k))),(acc + str(next)))\n\n return loop(999999,\"\")\n\ntimer(p024)", "1000-digit Fibonacci number\nProblem 25\nThe Fibonacci sequence is defined by the recurrence relation:\n$F_n = F_{n−1} + F_{n−2}$, where $F_1 = 1$ and $F_2 = 1$.\nHence the first 12 terms will be:\n$F_1 = 1$\n$F_2 = 1$\n$F_3 = 2$\n$F_4 = 3$\n$F_5 = 5$\n$F_6 = 8$\n$F_7 = 13$\n$F_8 = 21$\n$F_9 = 34$\n$F_{10} = 55$\n$F_{11} = 89$\n$F_{12} = 144$\nThe 12th term, $F_{12}$, is the first term to contain three digits. \nWhat is the first term in the Fibonacci sequence to contain 1000 digits?", "from math import log10\nfrom euler import timer, Seq\n\ndef p025():\n return (\n Seq.unfold(lambda (a,b):(b, (b,a+b)), (0,1)) \n >> Seq.findIndex(lambda x: log10(x) > 999)\n ) + 1\n\ntimer(p025)", "Reciprocal cycles\nProblem 26\nA unit fraction contains 1 in the numerator. The decimal representation of the unit fractions with denominators 2 to 10 are given: \n1/2 = 0.5\n1/3 = 0.(3)\n1/4 = 0.25\n1/5 = 0.2\n1/6 = 0.1(6)\n1/7 = 0.(142857)\n1/8 = 0.125\n1/9 = 0.(1)\n1/10 = 0.1\nWhere 0.1(6) means 0.166666..., and has a 1-digit recurring cycle. It can be seen that 1/7 has a 6-digit recurring cycle. \nFind the value of d < 1000 for which 1/d contains the longest recurring cycle in its decimal fraction part.", "from euler import timer, Seq\n\ndef cycle(denom):\n if denom==2 or denom==5:\n return 0\n elif denom%2==0:\n return cycle(denom/2)\n elif denom%5==0:\n return cycle(denom/5)\n else:\n return (\n Seq.initInfinite(lambda x: x+1)\n >> Seq.map (lambda x: 10 ** x - 1)\n >> Seq.findIndex(lambda x: x%denom==0)\n ) + 1\n\ndef p026():\n return range(1, 1001) >> Seq.maxBy(cycle)\n\ntimer(p026)", "Quadratic primes\nProblem 27\nEuler discovered the remarkable quadratic formula:\n$n^2 + n + 41$ \nIt turns out that the formula will produce $40$ primes for the consecutive values $n = 0$ to $39$. However, when $n = 40$, $40^2 + 40 + 41 = 40(40 + 1) + 41$ is divisible by $41$, and certainly when $n = 41$, $41^2 + 41 + 41$ is clearly divisible by $41$.\nThe incredible formula $n^2 − 79n + 1601$ was discovered, which produces $80$ primes for the consecutive values $n = 0$ to $79$. The product of the coefficients, $−79$ and $1601$, is $−126479$.\nConsidering quadratics of the form:\n$n^2 + an + b$, where $|a| < 1000$ and $|b| < 1000$\nwhere $|n|$ is the modulus/absolute value of $n$\ne.g. 
$|11| = 11$ and $|−4| = 4$\nFind the product of the coefficients, $a$ and $b$, for the quadratic expression that produces the maximum number of primes for consecutive values of $n$, starting with $n = 0$.", "from euler import is_prime, Seq, timer, primes\n\ndef primes_generated(x):\n a,b = x\n return (\n Seq.initInfinite(lambda n: n*n + a*n + b)\n >> Seq.takeWhile(is_prime)\n >> Seq.length)\n\ndef p027():\n primes_1000 = (primes() \n >> Seq.takeWhile(lambda x: x<1000) \n >> Seq.toList)\n a,b = ([(a,b) for a in range(-999,1000) \n for b in primes_1000]\n >> Seq.maxBy(primes_generated))\n return a*b\n \ntimer(p027)", "Number spiral diagonals\nProblem 28\nStarting with the number 1 and moving to the right in a clockwise direction a 5 by 5 spiral is formed as follows:\n<font color='red'>21</font> 22 23 24 <font color='red'>25</font>\n20 <font color='red'>07</font> 08 <font color='red'>09</font> 10\n19 06 <font color='red'>01</font> 02 11\n18 <font color='red'>05</font> 04 <font color='red'>03</font> 12\n<font color='red'>17</font> 16 15 14 <font color='red'>13</font> \nIt can be verified that the sum of the numbers on the diagonals is 101. \nWhat is the sum of the numbers on the diagonals in a 1001 by 1001 spiral formed in the same way?", "from euler import timer\n\ndef p028():\n n = 1001\n\n def collect(depth, start, acc):\n if (depth > n/2):\n return acc\n else:\n return collect(depth+1, start+8*depth, acc+4*start+20*depth)\n\n return collect(1,1,1)\n\ntimer(p028)", "Distinct powers\nProblem 29\nConsider all integer combinations of $a^b$ for $2 ≤ a ≤ 5$ and $2 ≤ b ≤ 5$:\n$2^2=4$, $2^3=8$, $2^4=16$, $2^5=32$\n$3^2=9$, $3^3=27$, $3^4=81$, $3^5=243$\n$4^2=16$, $4^3=64$, $4^4=256$, $4^5=1024$\n$5^2=25$, $5^3=125$, $5^4=625$, $5^5=3125$ \nIf they are then placed in numerical order, with any repeats removed, we get the following sequence of 15 distinct terms:\n$4, 8, 9, 16, 25, 27, 32, 64, 81, 125, 243, 256, 625, 1024, 3125$\nHow many distinct terms are in the sequence generated by $a^b$ for $2 ≤ a ≤ 100$ and $2 ≤ b ≤ 100$?", "from euler import timer\n\ndef p029():\n return (set(a **b for a in range(2,101) for b in range(2,101))\n >> Seq.length)\n\ntimer(p029)", "Digit fifth powers\nProblem 30\nSurprisingly there are only three numbers that can be written as the sum of fourth powers of their digits:\n$1634 = 1^4 + 6^4 + 3^4 + 4^4$\n$8208 = 8^4 + 2^4 + 0^4 + 8^4$\n$9474 = 9^4 + 4^4 + 7^4 + 4^4$\nAs $1 = 1^4$ is not a sum it is not included. \nThe sum of these numbers is $1634 + 8208 + 9474 = 19316$. \nFind the sum of all the numbers that can be written as the sum of fifth powers of their digits.", "from euler import timer\n\ndef p030():\n def is_sum(n):\n return (\n str(n)\n >> Seq.map(lambda x: int(x) ** 5)\n >> Seq.sum\n ) == n\n\n max_n = (\n ((Seq.unfold(lambda x: (x, x+1), 1)\n >> Seq.find(lambda x: 10 ** x - 1 > x * 9 ** 5)\n ) - 1) * 9 ** 5)\n\n return (\n range(2, max_n + 1)\n >> Seq.filter(is_sum)\n >> Seq.sum)\n \ntimer(p030)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/test-institute-2/cmip6/models/sandbox-2/aerosol.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Aerosol\nMIP Era: CMIP6\nInstitute: TEST-INSTITUTE-2\nSource ID: SANDBOX-2\nTopic: Aerosol\nSub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model. \nProperties: 69 (37 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:45\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'test-institute-2', 'sandbox-2', 'aerosol')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Software Properties\n3. Key Properties --&gt; Timestep Framework\n4. Key Properties --&gt; Meteorological Forcings\n5. Key Properties --&gt; Resolution\n6. Key Properties --&gt; Tuning Applied\n7. Transport\n8. Emissions\n9. Concentrations\n10. Optical Radiative Properties\n11. Optical Radiative Properties --&gt; Absorption\n12. Optical Radiative Properties --&gt; Mixtures\n13. Optical Radiative Properties --&gt; Impact Of H2o\n14. Optical Radiative Properties --&gt; Radiative Scheme\n15. Optical Radiative Properties --&gt; Cloud Interactions\n16. Model \n1. Key Properties\nKey properties of the aerosol model\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of aerosol model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of aerosol model code", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Scheme Scope\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nAtmospheric domains covered by the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.scheme_scope') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"troposhere\" \n# \"stratosphere\" \n# \"mesosphere\" \n# \"mesosphere\" \n# \"whole atmosphere\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBasic approximations made in the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.basic_approximations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.5. Prognostic Variables Form\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPrognostic variables in the aerosol model", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"3D mass/volume ratio for aerosols\" \n# \"3D number concenttration for aerosols\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.6. Number Of Tracers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of tracers in the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.number_of_tracers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "1.7. Family Approach\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre aerosol calculations generalized into families of species?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.family_approach') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Software Properties\nSoftware properties of aerosol code\n2.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Timestep Framework\nPhysical properties of seawater in ocean\n3.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMathematical method deployed to solve the time evolution of the prognostic variables", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses atmospheric chemistry time stepping\" \n# \"Specific timestepping (operator splitting)\" \n# \"Specific timestepping (integrated)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Split Operator Advection Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for aerosol advection (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.3. Split Operator Physical Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for aerosol physics (in seconds).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.4. Integrated Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep for the aerosol model (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.5. Integrated Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the type of timestep scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Implicit\" \n# \"Semi-implicit\" \n# \"Semi-analytic\" \n# \"Impact solver\" \n# \"Back Euler\" \n# \"Newton Raphson\" \n# \"Rosenbrock\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Meteorological Forcings\n**\n4.1. Variables 3D\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nThree dimensionsal forcing variables, e.g. U, V, W, T, Q, P, conventive mass flux", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Variables 2D\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTwo dimensionsal forcing variables, e.g. land-sea mask definition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Frequency\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nFrequency with which meteological forcings are applied (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Resolution\nResolution in the aersosol model grid\n5.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Canonical Horizontal Resolution\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. 
Number Of Horizontal Gridpoints\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5.4. Number Of Vertical Levels\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nNumber of vertical levels resolved on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5.5. Is Adaptive Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDefault is False. Set true if grid resolution changes during execution.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Tuning Applied\nTuning methodology for aerosol model\n6.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics of mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.4. Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Transport\nAerosol transport\n7.1. 
Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of transport in atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod for aerosol transport modeling", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Specific transport scheme (eulerian)\" \n# \"Specific transport scheme (semi-lagrangian)\" \n# \"Specific transport scheme (eulerian and semi-lagrangian)\" \n# \"Specific transport scheme (lagrangian)\" \n# TODO - please enter value(s)\n", "7.3. Mass Conservation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMethod used to ensure mass conservation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Mass adjustment\" \n# \"Concentrations positivity\" \n# \"Gradients monotonicity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "7.4. Convention\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTransport by convention", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.convention') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Convective fluxes connected to tracers\" \n# \"Vertical velocities connected to tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8. Emissions\nAtmospheric aerosol emissions\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of emissions in atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMethod used to define aerosol species (several methods allowed because the different species may not use the same method).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Prescribed (climatology)\" \n# \"Prescribed CMIP6\" \n# \"Prescribed above surface\" \n# \"Interactive\" \n# \"Interactive above surface\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.3. Sources\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSources of the aerosol species are taken into account in the emissions scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Vegetation\" \n# \"Volcanos\" \n# \"Bare ground\" \n# \"Sea surface\" \n# \"Lightning\" \n# \"Fires\" \n# \"Aircraft\" \n# \"Anthropogenic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.4. Prescribed Climatology\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify the climatology type for aerosol emissions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_climatology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Interannual\" \n# \"Annual\" \n# \"Monthly\" \n# \"Daily\" \n# TODO - please enter value(s)\n", "8.5. Prescribed Climatology Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and prescribed via a climatology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.6. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and prescribed as spatially uniform", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.7. Interactive Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and specified via an interactive method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.8. Other Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and specified via an &quot;other method&quot;", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.9. Other Method Characteristics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCharacteristics of the &quot;other method&quot; used for aerosol emissions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.other_method_characteristics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Concentrations\nAtmospheric aerosol concentrations\n9.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of concentrations in atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Prescribed Lower Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the lower boundary.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.3. Prescribed Upper Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the upper boundary.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.4. Prescribed Fields Mmr\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed as mass mixing ratios.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.5. Prescribed Fields Mmr\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed as AOD plus CCNs.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Optical Radiative Properties\nAerosol optical and radiative properties\n10.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of optical and radiative properties", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Optical Radiative Properties --&gt; Absorption\nAbsortion properties in aerosol scheme\n11.1. Black Carbon\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAbsorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.2. Dust\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAbsorption mass coefficient of dust at 550nm (if non-absorbing enter 0)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Organics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAbsorption mass coefficient of organics at 550nm (if non-absorbing enter 0)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12. Optical Radiative Properties --&gt; Mixtures\n**\n12.1. External\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there external mixing with respect to chemical composition?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.2. 
Internal\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there internal mixing with respect to chemical composition?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.3. Mixing Rule\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf there is internal mixing with respect to chemical composition then indicate the mixinrg rule", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Optical Radiative Properties --&gt; Impact Of H2o\n**\n13.1. Size\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes H2O impact size?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "13.2. Internal Mixture\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes H2O impact internal mixture?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14. Optical Radiative Properties --&gt; Radiative Scheme\nRadiative scheme for aerosol\n14.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of radiative scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.2. Shortwave Bands\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of shortwave bands", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.3. Longwave Bands\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of longwave bands", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15. Optical Radiative Properties --&gt; Cloud Interactions\nAerosol-cloud interactions\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of aerosol-cloud interactions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Twomey\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the Twomey effect included?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.3. Twomey Minimum Ccn\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf the Twomey effect is included, then what is the minimum CCN number?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15.4. Drizzle\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the scheme affect drizzle?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.5. Cloud Lifetime\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the scheme affect cloud lifetime?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.6. Longwave Bands\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of longwave bands", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "16. Model\nAerosol model\n16.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16.2. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProcesses included in the Aerosol model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Dry deposition\" \n# \"Sedimentation\" \n# \"Wet deposition (impaction scavenging)\" \n# \"Wet deposition (nucleation scavenging)\" \n# \"Coagulation\" \n# \"Oxidation (gas phase)\" \n# \"Oxidation (in cloud)\" \n# \"Condensation\" \n# \"Ageing\" \n# \"Advection (horizontal)\" \n# \"Advection (vertical)\" \n# \"Heterogeneous chemistry\" \n# \"Nucleation\" \n# TODO - please enter value(s)\n", "16.3. Coupling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOther model components coupled to the Aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Radiation\" \n# \"Land surface\" \n# \"Heterogeneous chemistry\" \n# \"Clouds\" \n# \"Ocean\" \n# \"Cryosphere\" \n# \"Gas phase chemistry\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.4. 
Gas Phase Precursors\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of gas phase aerosol precursors.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.gas_phase_precursors') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"DMS\" \n# \"SO2\" \n# \"Ammonia\" \n# \"Iodine\" \n# \"Terpene\" \n# \"Isoprene\" \n# \"VOC\" \n# \"NOx\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.5. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nType(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bulk\" \n# \"Modal\" \n# \"Bin\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.6. Bulk Scheme Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of species covered by the bulk scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.bulk_scheme_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Nitrate\" \n# \"Sea salt\" \n# \"Dust\" \n# \"Ice\" \n# \"Organic\" \n# \"Black carbon / soot\" \n# \"SOA (secondary organic aerosols)\" \n# \"POM (particulate organic matter)\" \n# \"Polar stratospheric ice\" \n# \"NAT (Nitric acid trihydrate)\" \n# \"NAD (Nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particule)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
HNoorazar/PyOpinionGame
Community_identity.ipynb
gpl-3.0
[ "Two Topics Coupled example\nImport Python built-in functions we need to run and plot the game", "import numpy as np\nimport pandas as pd\nfrom pandas import Series, DataFrame\n\nimport matplotlib.pyplot as plt\nimport matplotlib.animation as animation\nimport matplotlib.image as mpimg\nfrom matplotlib import rcParams\nimport seaborn as sb", "Set up inline matplotlib", "%matplotlib inline\nrcParams['figure.figsize'] = 5, 4\nsb.set_style('whitegrid')", "Import Game Modules From a Given Path\nUser have to edit the path and put the correct one on his/her machine.", "import sys\n# search path for modules\nsys.path.append('/Users/hn/Documents/GitHub/PyOpinionGame/')\n\nimport opiniongame.config as og_cfg\nimport opiniongame.IO as og_io\nimport opiniongame.coupling as og_coupling\nimport opiniongame.state as og_state\nimport opiniongame.adjacency as og_adj\nimport opiniongame.selection as og_select\nimport opiniongame.potentials as og_pot\nimport opiniongame.core as og_core\nimport opiniongame.stopping as og_stop\nimport opiniongame.opinions as og_opinions", "Setting Up Game Parameters", "config = og_cfg.staticParameters()\n\npath = '/Users/hn/Documents/GitHub/PyOpinionGame/' # path to the 'staticParameters.cfg'\nstaticParameters = path + 'staticParameters.cfg'\n\nconfig.readFromFile(staticParameters) # Read static parameters\nconfig.threshold = 0.0001\nconfig.Kthreshold = 0.00001\nconfig.startingseed = 10\nconfig.learning_rate = 0.1\ntau = 0.62 #tip of the tent potential function\nconfig.printOut()", "seed PRNG: must do this before any random numbers are ever sampled during default generation", "print(\"SEEDING PRNG: \"+str(config.startingseed))\nnp.random.seed(config.startingseed)", "Set up the state of the system\nState of the system includes:\n\nWeight Matrix (Matrix of the coupling wieghts between topic)\nInitial Opinions of agents\nAdjacency matrix of the network\n\n This is just initialization of the state, later we update some elements of it.", "# These are the default matrices for the state of the system:\n# If you want to change them, you can generate a new one in the following cell\ndefault_weights = og_coupling.weights_no_coupling(config.popSize, config.ntopics)\ndefault_initialOpinions = og_opinions.initialize_opinions(config.popSize, config.ntopics)\ndefault_adj = og_adj.make_adj(config.popSize, 'full')\n\nstate = og_state.WorldState(adj=default_adj, \n couplingWeights=default_weights, \n initialOpinions=default_initialOpinions, \n initialHistorySize=100, \n historyGrowthScale=2)\nstate.validate()", "User Defined States and parameters Can go in the following cell:", "numberOfCommunities = 3\ncommunityPopSize = 25\nconfig.popSize = numberOfCommunities * communityPopSize\n\n# List of upper bound probability of interaction between communities\nuppBound_list = [0.0]\n\n# List of uniqueness Strength parameter\nindividStrength = [0.0]\n\nconfig.learning_rate = 0.1\nconfig.iterationMax = 10000\ntau = 0.62\nconfig.printOut()\n#\n# functions for use by the simulation engine\n#\nufuncs = og_cfg.UserFunctions(og_select.PickTwoWeighted,\n og_stop.iterationStop,\n og_pot.createTent(tau))\n \n\n# Number of different initial opinions, \n# i.e. 
number of different games with different initials.\nnoInitials = np.arange(1)\nnoGames = np.arange(1) # Number of different game orders.\n\n# Run experiments with different adjacencies, different initials, and different order of games.\nfor uniqForce in individStrength:\n config.uniqstrength = uniqForce\n for upperBound in uppBound_list:\n # Generate different adjacency matrix with different prob. of interaction\n # between different communities\n state.adj = og_adj.CommunitiesMatrix(communityPopSize, numberOfCommunities, upperBound)\n \n for countInitials in noInitials:\n # Pick three communities with similar opinions to begin with!\n state.initialOpinions = np.zeros((config.popSize, 1))\n state.initialOpinions[0:25] = np.random.uniform(low=0.0, high=.25, size=(25,1))\n state.initialOpinions[25:50] = np.random.uniform(low=0.41, high=.58, size=(25,1))\n state.initialOpinions[50:75] = np.random.uniform(low=0.74, high= 1, size=(25,1))\n \n state.couplingWeights = og_coupling.weights_no_coupling(config.popSize, config.ntopics)\n all_experiments_history = {}\n print \"(uniqForce, upperBound) = ({}, {})\".format(uniqForce, upperBound)\n print \"countInitials = {}\".format(countInitials + 1)\n \n for gameOrders in noGames:\n #cProfile.run('og_core.run_until_convergence(config, state, ufuncs)')\n state = og_core.run_until_convergence(config, state, ufuncs)\n print(\"One Experiment Done\" , \"gameOrders = \" , gameOrders+1)\n all_experiments_history[ 'experiment' + str(gameOrders+1)] = state.history[0:state.nextHistoryIndex,:,:]\n og_io.saveMatrix('uB' + str(upperBound) + '*uS' + str(config.uniqstrength) + \n '*initCount' + str(countInitials+21) + '.mat', all_experiments_history)\n\nprint all_experiments_history.keys()\nprint all_experiments_history['experiment1'].shape", "Plot the experiment done above:", "time, population_size, no_of_topics = all_experiments_history['experiment1'].shape\nevolution = all_experiments_history['experiment1'].reshape(time, population_size)\n\nfig = plt.figure()\nplt.plot(evolution)\nplt.xlabel('Time')\nplt.ylabel('Opinions')\nplt.title('Evolution of Opinions')\nfig.set_size_inches(10,5)\nplt.show()", "Skew Uniqueness Tendency Driver:\nI observed that when the tendency for uniqueness is drawn from a normal distribution, we do not get an interesting result. For example, the initial intuition was that the tendency for uniqueness would delay stabilization of the network; however, it did not. 
So, here we draw uniqueness tendencies from a skew normal distribution.\nWhen most neighbors tend to move in one direction, the probability that an individual moves in the opposite direction is greater than the noise in the same direction:", "state = og_state.WorldState(adj=default_adj, \n couplingWeights=default_weights, \n initialOpinions=default_initialOpinions, \n initialHistorySize=100, \n historyGrowthScale=2)\nstate.validate()\n\n#\n# load configuration\n#\nconfig = og_cfg.staticParameters()\nconfig.readFromFile('staticParameters.cfg')\nconfig.threshold = 0.01\nconfig.printOut()\n\n#\n# seed PRNG: must do this before any random numbers are\n# ever sampled during default generation\n#\nprint((\"SEEDING PRNG: \"+str(config.startingseed)))\nnp.random.seed(config.startingseed)", "Initiate State", "# These are the default matrices for the state of the system:\n# If you want to change them, you can generate a new one in the following cell\ndefault_weights = og_coupling.weights_no_coupling(config.popSize, config.ntopics)\ndefault_initialOpinions = og_opinions.initialize_opinions(config.popSize, config.ntopics)\ndefault_adj = og_adj.make_adj(config.popSize, 'full')\n\nstate = og_state.WorldState(adj=default_adj, \n couplingWeights=default_weights, \n initialOpinions=default_initialOpinions, \n initialHistorySize=100, \n historyGrowthScale=2)\nstate.validate()\n\n#\n# run\n#\nnumberOfCommunities = 3\ncommunityPopSize = 25\nconfig.popSize = numberOfCommunities * communityPopSize\n\n# List of upper bound probability of interaction between communities\nuppBound_list = np.array([.001, 0.004, 0.007, 0.01, 0.013, 0.016, 0.019])\n#\n# List of uniqueness Strength parameter\n#\nindividStrength = np.arange(0.00001, 0.000251, 0.00006)\nindividStrength = np.append(0, individStrength)\nindividStrength = np.array([0.0])\nskewstrength = 2.0\n\ntau = 0.62\nconfig.iterationMax = 30000\nconfig.printOut()\n\n#\n# functions for use by the simulation engine\n#\nufuncs = og_cfg.UserFunctions(og_select.PickTwoWeighted,\n og_stop.iterationStop,\n og_pot.createTent(tau))\n\n\n \nnoInitials = np.arange(1) # Number of different initial opinions.\nnoGames = np.arange(1) # Number of different game orders.\n# Run experiments with different adjacencies, different initials, and different order of games.\nfor uniqForce in individStrength:\n config.uniqstrength = uniqForce\n for upperBound in uppBound_list:\n \"\"\"\n Generate different adjacency matrix with different prob. 
of interaction\n between different communities\n \"\"\"\n state.adj = og_adj.CommunitiesMatrix(communityPopSize, numberOfCommunities, upperBound)\n print\"(upperBound, uniqForce) = (\", upperBound, \",\" , uniqForce , \")\" \n for countInitials in noInitials:\n \n # Pick three communities with similar opinions (stable state) to begin with!\n state.initialOpinions = np.zeros((config.popSize, 1))\n state.initialOpinions[0:25] = np.random.uniform(low=0.08, high=.1, size=(25,1))\n state.initialOpinions[25:50] = np.random.uniform(low=0.49, high=.51, size=(25,1))\n state.initialOpinions[50:75] = np.random.uniform(low=0.9, high= .92, size=(25,1))\n \n state.couplingWeights = og_coupling.weights_no_coupling(config.popSize, config.ntopics)\n all_experiments_history = {}\n\n print \"countInitials=\", countInitials + 1\n \n for gameOrders in noGames:\n #cProfile.run('og_core.run_until_convergence(config, state, ufuncs)')\n state = og_core.run_until_convergence(config, state, ufuncs)\n state.history = state.history[0:state.nextHistoryIndex,:,:]\n idx_IN_columns = [i for i in xrange(np.shape(state.history)[0]) if (i % (config.popSize)) == 0]\n state.history = state.history[idx_IN_columns,:,:]\n all_experiments_history[ 'experiment' + str(gameOrders+1)] = state.history\n og_io.saveMatrix('uB' + str(upperBound) + '*uS' + str(config.uniqstrength) + \n '*initCount' + str(countInitials+1) + '.mat', all_experiments_history)\n\n\nall_experiments_history.keys()\n\ntime, population_size, no_of_topics = all_experiments_history['experiment1'].shape\nevolution = all_experiments_history['experiment1'].reshape(time, population_size)\n\nfig = plt.figure()\nplt.plot(evolution)\nplt.xlabel('Time')\nplt.ylabel('Opinionds')\nplt.title('Evolution of Opinions of 3 communities')\nfig.set_size_inches(10, 5)\nplt.show()", "These codes were developed as a part of the following paper: Loss of community identity in opinion dynamics models as a function of inter-group interaction strength which was built on our opinion game model." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
SheffieldML/GPyOpt
manual/GPyOpt_constrained_optimization.ipynb
bsd-3-clause
[ "GPyOpt: Bayesian Optimization with fixed constraints\nWritten by Javier Gonzalez, University of Sheffield.\nReference Manual index\nLast updated Friday, 11 March 2016.\nIn this notebook we will learn how to solve optimization problems with fixed constraints. We will focus on problems where the goal is to find \n$$ x_{M} = \\arg \\min_{x \\in {\\mathcal X}} f(x) \\,\\, \\mbox{subject to}, $$\n$$c_1(x)\\leq 0 $$\n$$ \\dots $$\n$$c_m(x)\\leq 0 $$\nwhere $f: {\\mathcal X} \\to R$ be a L-Lipschitz continuous function defined on a compact subset ${\\mathcal X} \\subseteq R^d$ and $c_1,\\dots,c_m$ are a series of known constraints that determine the feasible region of the problem. We will see the syntax that we need to use to solve this problems with Bayesian Optimization using GPyOpt. First we start loading GPyOpt and GPy.", "%pylab inline\nimport GPyOpt\nimport GPy\nimport numpy as np", "In this example we will optimize the 2D Six-Hump Camel function (available in GPyOpt). We will assume that exact evaluations of the function are observed. The explicit form of the function is:\n$$f(x_1,x_2) =4x_1^2 – 2.1x_1^4 + x_1^6/3 + x_1x_2 – 4x_2^2 + 4x_2^4$$", "func = GPyOpt.objective_examples.experiments2d.sixhumpcamel()", "Imagine that we were optimizing the function in the intervals $(-1,1)\\times (-1.5,1.5)$. As usual, we can defined this box constraints as:", "space =[{'name': 'var_1', 'type': 'continuous', 'domain': (-1,1)},\n {'name': 'var_2', 'type': 'continuous', 'domain': (-1.5,1.5)}]", "This will be an standard case of optimizing the function in an hypercube. However in this case we are going to study how to solve optimization problems with arbitrary constraints. In particular, we consider the problem of finding the minimum of the function in the region defined by\n$$-x_2 - .5 + |x_1| -\\sqrt{1-x_1^2} \\leq 0 $$\n$$ x_2 + .5 + |x_1| -\\sqrt{1-x_1^2} \\leq 0 $$\nWe can define these constraints as", "constraints = [{'name': 'constr_1', 'constraint': '-x[:,1] -.5 + abs(x[:,0]) - np.sqrt(1-x[:,0]**2)'},\n {'name': 'constr_2', 'constraint': 'x[:,1] +.5 - abs(x[:,0]) - np.sqrt(1-x[:,0]**2)'}]", "And create the feasible region od the problem by writting:", "feasible_region = GPyOpt.Design_space(space = space, constraints = constraints)", "Now, let's have a look to what we have. Let's make a plot of the feasible region and the function with the original box-constraints. 
Note that the function .indicator_constraints(X) takes value 1 if we are in the feasible region and 0 otherwise.", "## Grid of points to make the plots\ngrid = 400\nbounds = feasible_region.get_continuous_bounds()\nX1 = np.linspace(bounds[0][0], bounds[0][1], grid)\nX2 = np.linspace(bounds[1][0], bounds[1][1], grid)\nx1, x2 = np.meshgrid(X1, X2)\nX = np.hstack((x1.reshape(grid*grid,1),x2.reshape(grid*grid,1)))\n\n## Check the points in the feasible region.\nmasked_ind = feasible_region.indicator_constraints(X).reshape(grid,grid)\nmasked_ind = np.ma.masked_where(masked_ind > 0.5, masked_ind)\nmasked_ind[1,1]=1\n\n## Make the plots\nplt.figure(figsize=(14,6))\n\n# Feasible region\nplt.subplot(121)\nplt.contourf(X1, X2, masked_ind ,100, cmap= plt.cm.bone, alpha=1,origin ='lower')\nplt.text(-0.25,0,'FEASIBLE',size=20)\nplt.text(-0.3,1.1,'INFEASIBLE',size=20,color='white')\n\nplt.subplot(122)\nplt.plot()\nplt.contourf(X1, X2, func.f(X).reshape(grid,grid),100, alpha=1,origin ='lower')\nplt.plot(np.array(func.min)[:,0], np.array(func.min)[:,1], 'r.', markersize=20, label=u'Minimum')\nplt.legend()\nplt.title('Six-Hump Camel function',size=20)", "The Six-Hump Camel function has two global minima. However, with the constraints that we are using, only one of the two is a valid one. We can see this by overlapping the two previous plots.", "plt.figure(figsize=(6.5,6))\nOB = plt.contourf(X1, X2, func.f(X).reshape(grid,grid),100,alpha=1)\nIN = plt.contourf(X1, X2, masked_ind ,100, cmap= plt.cm.bone, alpha=.5,origin ='lower')\nplt.text(-0.25,0,'FEASIBLE',size=20,color='white')\nplt.text(-0.3,1.1,'INFEASIBLE',size=20,color='white')\nplt.plot(np.array(func.min)[:,0], np.array(func.min)[:,1], 'r.', markersize=20, label=u'Minimum')\nplt.title('Six-Hump Camel with restrictions',size=20)\nplt.legend()", "We will use the modular interface to solve this problem. We start by generating a random initial design of 10 points to start the optimization. We just need to do:", "# --- CHOOSE the initial design\nfrom numpy.random import seed # fixed seed\nseed(123456)\n\ninitial_design = GPyOpt.experiment_design.initial_design('random', feasible_region, 10)", "Importantly, the points are always generated within the feasible region as we can check here:", "plt.figure(figsize=(6.5,6))\nOB = plt.contourf(X1, X2, func.f(X).reshape(grid,grid),100,alpha=1)\nIN = plt.contourf(X1, X2, masked_ind ,100, cmap= plt.cm.bone, alpha=.5,origin ='lower')\nplt.text(-0.25,0,'FEASIBLE',size=20,color='white')\nplt.text(-0.3,1.1,'INFEASIBLE',size=20,color='white')\nplt.plot(np.array(func.min)[:,0], np.array(func.min)[:,1], 'r.', markersize=20, label=u'Minimum')\nplt.title('Six-Hump Camel with restrictions',size=20)\nplt.plot(initial_design[:,0],initial_design[:,1],'yx',label = 'Design')\nplt.legend()", "Now, we choose the rest of the objects that we need to run the optimization. We will use a Gaussian Process with parameters fitted using MLE and the Expected Improvement acquisition. We use the default BFGS optimizer for the acquisition. 
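(A hypothetical variation, not used in this notebook: if one preferred a different acquisition, the AcquisitionEI line in the next cell could be swapped for the lower confidence bound, keeping everything else the same.)

```python
# hypothetical alternative to the AcquisitionEI line below; not executed in this notebook
acquisition = GPyOpt.acquisitions.AcquisitionLCB(model, feasible_region, optimizer=aquisition_optimizer)
```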
Evaluations of the function are done sequentially.", "# --- CHOOSE the objective\nobjective = GPyOpt.core.task.SingleObjective(func.f)\n\n# --- CHOOSE the model type\nmodel = GPyOpt.models.GPModel(exact_feval=True,optimize_restarts=10,verbose=False)\n\n# --- CHOOSE the acquisition optimizer\naquisition_optimizer = GPyOpt.optimization.AcquisitionOptimizer(feasible_region)\n\n# --- CHOOSE the type of acquisition\nacquisition = GPyOpt.acquisitions.AcquisitionEI(model, feasible_region, optimizer=aquisition_optimizer)\n\n# --- CHOOSE a collection method\nevaluator = GPyOpt.core.evaluators.Sequential(acquisition)", "Next, we create the BO object to run the optimization.", "# BO object\nbo = GPyOpt.methods.ModularBayesianOptimization(model, feasible_region, objective, acquisition, evaluator, initial_design)", "We first run the optimization for 5 steps and check how the results look.", "# --- Stop conditions\nmax_time = None \nmax_iter = 5\ntolerance = 1e-8 # distance between two consecutive observations \n\n# Run the optimization \nbo.run_optimization(max_iter = max_iter, max_time = max_time, eps = tolerance, verbosity=False) \nbo.plot_acquisition()", "See how the optimization is only done within the feasible region; outside of it the value of the acquisition is zero, so no evaluation is selected in that region. We run 25 more iterations to see the acquisition and convergence.", "# Run the optimization \nmax_iter = 25\nbo.run_optimization(max_iter = max_iter, max_time = max_time, eps = tolerance, verbosity=False) \n\nbo.plot_acquisition()\nbo.plot_convergence()\n\n# Best found value\nnp.round(bo.x_opt,2)\n\n# True min\nnp.round(func.min[0],2)", "Done! Problem solved within the fixed domain!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
TheMitchWorksPro/DataTech_Playground
PY_Basics/TMWP_PY_CrazyList_Indexing_and_Related_Experiments.ipynb
mit
[ "<div align=\"right\">Python 2.7</div>\n\nIndexing and Related Experiments in Python 2.7\nThough this content is in Python 2.7, most if not all of it should work the same in Python 3.x.\nTOC\n\nIndexing Experiments - Explores different complex structures and how to index into them\nMutation sidebar - looks at mutation using our crazyList example\nFinding the Index of a Known Value within Complex Data Structures - explores .index(), np.where(), and Related concerns\n\nIndexing Experiments in Python\nWe start with a simple nested list showing how to get at an element within it:", "stupidList = [[1,2,3],[4,5,6]]\nprint(stupidList)\nstupidList[0][1]", "Now we build something more complicated to show where indexing can get tricky ...", "import numpy as np\nimport pandas as pd\n\nm3d=np.random.rand(3,4,5)\nm3d\n\n# how does Pandas arrange the data?\nn3d=m3d.reshape(4,3,5)\nn3d", "Notice which numbers moved where. This would seem to indicate that in shape(a,b,c):\n- a is like the object's depth (how many groupings of rows/columns are there?)\n- b is like the object's rows per grouping (how many rows in each subgroup)\n- c is like the object's columns\nWhat if the object had 4 dimensions?", "o3d=np.random.rand(2,3,4,5)\no3d", "Just analyzing how the numbers are arranged, we see that in shape(a,b,c,d), it just added the new extra dimensional layer to the front of the list so that now:\n- a = larger hyper grouping (2 of them)\n- b = first subgroup within (3 of them)\n- c = rows within these groupings (4 of them)\n- d = columns within these groupings (5 of them)\nIt appears that rows always come before columns, and then it looks like groupings of rows and columns and groupings or groupings, etc. . . are added to the front of the index chain.\nBuilding something complex just to drill in more on how to access sub-elements:", "# some simple arrays:\nsimp1=np.array([[1,2,3,4,5]])\nsimp2=np.array([[10,9,8,7,6]])\nsimp3=[11,12,13]\n\n# a dictionary\ndfrm1 = {'state': ['Ohio', 'Ohio', 'Ohio', 'Nevada', 'Nevada'],\n 'year': [2000, 2001, 2002, 2001, 2002],\n 'population': [1.5, 1.7, 3.6, 2.4, 2.9]}\n# convert dictionary to DataFrame\ndfrm1 = pd.DataFrame(dfrm1)\ndfrm1\n\n# pandas indexing works a little differently:\n# * column headers are keys\n# * as shown here, can ask for columns, rows, and a filter based on values in the columns \n# in any order and the indexing will still work\n\nprint(dfrm1[\"population\"][dfrm1[\"population\"] > 1.5][2:4]) # all of these return values from \"population\" column only\nprint(\"---\") # where \"population\" > 1.5\nprint(dfrm1[\"population\"][2:4][dfrm1[\"population\"] > 1.5]) # and row index is between 2 and 4\nprint(\"---\")\nprint(dfrm1[dfrm1[\"population\"] > 1.5][\"population\"][2:4])\nprint(\"---\")\nprint(dfrm1[dfrm1[\"population\"] > 1.5][2:4][\"population\"])\nprint(\"---\")\nprint(dfrm1[2:4][\"population\"][dfrm1[\"population\"] > 1.5])\nprint(\"---\")\nprint(dfrm1[2:4][dfrm1[\"population\"] > 1.5][\"population\"]) # this last one triggers a warning\n\n# breaking the above apart:\nprint(dfrm1[dfrm1[\"population\"] > 1.5]) # all rows and columns filtered by \"population\" values > 1.5\nprint(\"---\")\nprint(dfrm1[\"population\"]) # return whole \"population\" column\nprint(\"---\")\nprint(dfrm1[2:4]) # return whole rows 2 to 4\n\ncrazyList = [simp1, m3d, simp2, n3d, simp3, dfrm1, o3d]\n\n# Accessing the dataframe inside the list now that it is a sub element:\ncrazyList[5][\"population\"][crazyList[5][\"population\"] > 1.5][2:4]", "Now let's access other stuff 
in the list ...", "crazyList[1] # this is the second object of the list (Python like many languages starts indicies at 0)\n # this is the full output of m3d\n\ncrazyList[0] # after the above demo, no surprises here ... simp1 was the first object we added to the list", "In the tests that follow ... anything that does not work is wrapped in exception handling (that displays the error) so this notebook can be run from start to finish ... Note that it is not good practice to use a catch all for all errors. In real coding errors should be handled individually by type.\nHow do we access the first index (element 2) of the first array object in our complex list (which resides at index 0)?", "try: # not this way ...\n crazyList[0][1]\nexcept Exception as ex:\n print(\"%s%s %s\" %(type(ex), \":\", ex))\n\n# let's look at what we built: all the objects are here but are no longer named so we need to get indices right\ncrazyList\n\n# note that both of these get the same data, but also note the difference in the format: \"[[]]\" and array([])\".\n# look at the source and you will see we are drilling in at different levels of \"[]\"\n# there can be situations in real coding where extra layers are created by accident so this example is good to know\n\nprint(crazyList[0])\ncrazyList[0][0]", "Sub element 4 is a simple list nested within caryList: crazyList [ ... [content at index position 4] ...]", "print(crazyList[4])\ncrazyList[4][1] # get 2nd element in the list within a list at position 4 (object 4 in the list)", "So what about the array? The array was originally built in \"simp1\" and then added to crazyList. Its source looks like this:", "print(type(simp1))\nprint(simp1.shape)\n\nprint(simp1)\nprint(simp1[0]) # note that the first two give us the same thing (whole array)\nsimp1[0][1]", "Note the [] versus the [[]] ... our \"simple arrays\" were copied from an example, but are actually nested objects of 1 list of 5 elements forming the first object inside the array. A true simple array would like this:", "trueSimp1=np.array([10,9,8,7,6])\nprint(trueSimp1.shape) # note: output shows that Python thinks this is 5 rows, 1 column\ntrueSimp1", "Let's add the true simple array to our crazy object and then create working examples of accessing everything ...", "crazyList.append(trueSimp1) # append mutates so this changes the original list\ncrazyList # Warning! if you re-run this cell, you will keep adding more copies of the last object\n # to the end of this object. To be consistent with content in this NB\n # clear and re-run the whole notebook should that happen\n\n# The elements at either end of crazyList:\nprint(crazyList[0])\nprint(crazyList[-1]) # ask for last item by counting backwards from the end\n\n# get a specific value by index from within the subelements at either end:\nprint(crazyList[0][0][2]) # extra zero for the extra [] .. structurally this is really [0 [0 ], [1] ] but 1 does not exist\nprint(crazyList[-1][2])", "Looking at just that first element again:", "crazyList[0] # first array to change", "remember that this object if it were not in a list would be accessed like so:", "simp1[0][1] # second element inside it", "... so inside crazyList ? 
The answer is that the list is one level deep and the elements are yet another level in:", "crazyList[0]\n\ncrazyList[0][0][1]", "<a id=\"mutation\" name=\"mutation\"></a>\nSidebar: Mutation and Related Concerns\nTry this test and you will see it does not work:\ncrazyList2 = crazyList.append(trueSimp1)\nWhat it did: crazyList got an element appended to the end and crazyList2 came out the other side empty. This is because append() returns None and operates on the original. The copy then gets nothing and the original gets an element added to it.\nTo set up crazyList2 to append to only it, we might be tempted to try something like what is shown below, but if we do, note how it mutates:", "aList = [1,2,3]\nbList = aList\nprint(aList)\nprint(bList)", "Note how the second is really a reference to the first so changing one changes the other:", "aList[0] = 0\nbList[1] = 1\nbList.append(4)\nprint(aList)\nprint(bList)", "For a simple list ... we can fix that by simply using list() during our attempt to create the copy:", "bList = list(aList)\nbList[0] = 999\naList[1] = 998\nprint(aList)\nprint(bList)\n\nbList.append(19)\nprint(aList)\nprint(bList)", "Mutation is avoided. Now we can change our two objects independantly. However, with complex objects like crazyList, this does not work.\nThe following will illustrate the problem and later, options to get around it are presented.", "crazyList2 = list(crazyList)\n\ncrazyList2", "Now we make some changes:", "len(crazyList2)-1 # this is the position of the object we want to change\n\ncrazyList2[7][1] = 13 # this will change element 2 of last object in crazyList2", "Now we'll look at just the last object in both \"crazyLists\" showing what changed:", "print(crazyList[7])\nprint(crazyList2[7])", "The \"13\" replaced the value at this location in both crazyList and crazyList2. We are not dealing with true copies but rather references to the same data as further illustrated here:", "crazyList[7][1] = 9 # change on of them again and both change\nprint(crazyList[7])\nprint(crazyList2[7])", "So ... how to make a copy that does not mutate? (we can change one without changing the other)?<br/>\nLet's look at some things that don't work first ...", "crazyList3 = crazyList[:] # according to online topics ... this was supposed to work for the reason outlined below\n # it probably works with some complex objects but does not work with this one\n \n# some topics online indicate this should have worked because:\n# * the problem is avoided by \"slicing\" the original so Python behaves as if the thing you are copying is different\n# * if you used crazyList[2:3] ==> you would get a slice of the original you could store in the copy\n# * [:] utilizes slicing syntax but indicates \"give me the whole thing\" since by default, empty values are the min and max\n# indexing limits\n\ncrazyList3[7][1] = 13 # this will change element 2 of the last object\nprint(crazyList[7])\nprint(crazyList3[7])\n\n# what if we do this? (slice it and then add back a missing element)\ncrazyList3 = crazyList[:-1]\nprint(len(crazyList3))\nprint(len(crazyList)) # crazyList 3 is now one element shorter than crazyList\n\ncrazyList3.append(crazyList[7]) # add back missing element from crazyList\nprint(len(crazyList3))\nprint(len(crazyList))\n\ncrazyList3[7][1] = 9 # this will change element 2 of the last object\nprint(crazyList[7]) # note how again, both lists change\nprint(crazyList3[7])", "Python is hard to fool ... 
At first, I considered that we might now have two lists, but w/ just element 7 passed in by reference and so it mutates. But this shows our whole lists are still mutating:", "print(\"before:\")\nprint(crazyList[4])\nprint(crazyList3[4])\ncrazyList3[4][0] = 14\nprint(\"after:\")\nprint(crazyList[4])\nprint(crazyList3[4]) # try other tests of other elements and you will get same results", "deepcopy() comes from the copy library and the commands are documented at Python.org. For this situation, this solution seems to work for when mutation is undesirable:", "import copy\ncrazyList4 = copy.deepcopy(crazyList)\n\nprint(\"before:\")\nprint(crazyList[4])\nprint(crazyList4[4])\ncrazyList4[4][0] = 15\nprint(\"\")\nprint(\"after:\")\nprint(crazyList[4])\nprint(crazyList4[4])", "Should even deepcopy() not work, this topic online may prove helpful in these situations: Stack Overflow: When Deep Copy is not Enough.\n<a id=\"indexing\" name=\"indexing\"></a>\nFinding The Index of a Value\nSuppose we didn't know how to find the element but we knew the value we were looking for? How to get its index?", "print(stupidList)\nprint(stupidList[1].index(5)) # this works on lists\n\n# but for nested lists, you would need to loop through each sublist and handle the error that \n# gets thrown each time it does not find the answer\n\nfor element in stupidList:\n try: \n test_i = element.index(5)\n except Exception as ex:\n print(\"%s%s %s\" %(type(ex), \":\", ex))\n \nprint(test_i)\n\n# this strategy will not work on numpy arrays though\ntry:\n crazyList[0].index(2)\nexcept Exception as anyE:\n print(type(anyE), anyE)\n\n# because we have a list containing numpy arrays, we could look in each one like this:\nprint(crazyList[0])\nnp.where(crazyList[0]==2)\n\n# the above indicates that 2 lives here:\ncrazyList[0][0][1] # started with crazyList[0], then found it at [0][1] inside the data structure\n\n# For floating point numbers, the level of precision matters\n# details on how this works are presented in this notebook: TMWP_np_where_and_floatingPoint_numbers.ipynb\n\n# the simple test in the cells that follow should help illustrate the problem and what to do, but \n# see aforementioned notebook for more detail\n\n# to perform a where() test on a structure like this, it is important to note that print()\n# rounds the result to 8 decimal places. The real underlying numbers have more decimal places\n\nprint(crazyList2[1]); print(\"\")\nprint(crazyList2[1][2][3][4]) # get a number to test with\nprint(\"{0:.20}\".format(crazyList2[1][2][3][4])) # show more decimal places of the test number\n\n# Warning! If you re-run this notebook, new random nubers are generated and the value used for the test in this\n# cell will probably then fail. 
To fix this, re-run previous cell and copy in the final number shown\n# above up to at least 17 decimal places.\n\nprint(np.where(crazyList2[1]==0.95881217854380618)) # number copied from output of previous line up to 17 decimal places\n # np.where() can find this, but will also return other values\n # that match up to the first 16 decimal places (if they exist)\n # precision appears to be up to 16 decimal places on a 32 bit machine\n\n# np.isclose\n# for finding less precise answers: finds numbers that \"are close\"\n\nprint(np.isclose(crazyList2[1], 0.95881))\nprint(\"\")\nprint(np.where(np.isclose(crazyList2[1], 0.95881))) # note that when numbers are \"close\" this returns multiple values\n # in this case (crazyList2) only one number was \"close\"\n # more detailed testing is provided in: \n # TMWP_np_where_and_floatingPoint_numbers.ipynb", "Related help topics for additional research and reading:\n - Finding the Index - some options on Stack Overflow\n - numpy.where()\n - numpy.isclose()\n - Pandas Dataframe Indexing Tutorial\nThe End ..." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]