Unnamed: 0 | text_prompt | code_prompt
---|---|---
int64, 0-16k | stringlengths 110-62.1k | stringlengths 37-152k
5,500 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Earth Engine and TensorFlow in Cloud Datalab
This notebook walks you through a simple example of using Earth Engine and TensorFlow together in Cloud Datalab.
Specifically, we will train a neural network to recognize cloudy pixels in a Landsat scene. For this simple example we will use the output of the Fmask cloud detection algorithm as training data.
Configure the Environment
We begin by importing a number of useful libraries.
Step1: Initialize the Earth Engine client. This assumes that you have already configured Earth Engine credentials in this Datalab instance. If not, see the "Earth Engine Datalab Initialization.ipynb" notebook.
Step2: Inspect the Input Data
Load a Landsat image with corresponding Fmask label data.
Step3: Let's define a helper function to make it easier to print thumbnails of Earth Engine images. (We'll be adding a library with utility functions like this one to the Earth Engine Python SDK, but for now we can do it by hand.)
Step4: Now we can use our helper function to quickly visualize the image and label data. The Fmask values are
Step5: Fetch the Input Data
First we define some helper functions to download raw data from Earth Engine as numpy arrays.
We use the getDownloadId() function, which only works for modestly sized datasets. For larger datasets, a better approach would be to initiate a batch Export from Earth Engine to Cloud Storage, which you could easily manage right here in Datalab too.
Step6: Now we can use that function to load the data from Earth Engine, including a valid data band, as a numpy array. This may take a few seconds. We also convert the Fmask band to a binary cloud label (i.e. fmask=4).
Step7: Display the local data. This time, for variety, we display it as an NRG false-color image. We can use pyplot to display local numpy arrays.
Step8: Preprocess the Input Data
Select the valid pixels and hold out a fraction for use as validation data. Compute per-band means and standard deviations of the training data for normalization.
Step9: Build the TensorFlow Model
We start with a helper function to build a simple TensorFlow neural network layer.
Step10: Here we define our TensorFlow model, a neural network with two hidden layers with tanh() nonlinearities. The main network has two outputs, continuous-valued “logits” representing non-cloud and cloud, respectively. The binary output is interpreted as the argmax of these outputs.
We define a training step, which uses Kingma and Ba's Adam algorithm to minimize the cross-entropy between the logits and the training data. Finally, we define a simple overall percentage accuracy measure.
Step11: Train the Neural Network
Now train the neural network, using batches of training data drawn randomly from the training data pool. We periodically compute the accuracy against the validation data. When we're done training, we apply the model to the complete input data set.
This simple notebook performs all TensorFlow operations locally. However, for larger analyses you could bring up a cluster of TensorFlow workers to parallelize the computation, all controlled from within Datalab.
Step12: Inspect the Results
Here we display the results. The red band corresponds to the TensorFlow output and the blue band corresponds to the labeled training data, so pixels that are red or blue correspond to disagreements between the model and the training data. (There aren't many
Step13: We can zoom in on a particular region over on the right side of the image to see some of the disagreements. Red pixels represent commission errors and blue pixels represent omission errors relative to the labeled input data. | Python Code:
import ee
from IPython import display
import math
from matplotlib import pyplot
import numpy
from osgeo import gdal
import tempfile
import tensorflow as tf
import urllib
import zipfile
Explanation: Introduction to Earth Engine and TensorFlow in Cloud Datalab
This notebook walks you through a simple example of using Earth Engine and TensorFlow together in Cloud Datalab.
Specifically, we will train a neural network to recognize cloudy pixels in a Landsat scene. For this simple example we will use the output of the Fmask cloud detection algorithm as training data.
Configure the Environment
We begin by importing a number of useful libraries.
End of explanation
ee.Initialize()
Explanation: Initialize the Earth Engine client. This assumes that you have already configured Earth Engine credentials in this Datalab instance. If not, see the "Earth Engine Datalab Initialization.ipynb" notebook.
End of explanation
input_image = ee.Image('LANDSAT/LT5_L1T_TOA_FMASK/LT50100551998003CPE00')
Explanation: Inspect the Input Data
Load a Landsat image with corresponding Fmask label data.
End of explanation
def print_image(image):
display.display(display.Image(ee.data.getThumbnail({
'image': image.serialize(),
'dimensions': '360',
})))
Explanation: Let's define a helper function to make it easier to print thumbnails of Earth Engine images. (We'll be adding a library with utility functions like this one to the Earth Engine Python SDK, but for now we can do it by hand.)
End of explanation
print_image(input_image.visualize(
bands=['B3', 'B2', 'B1'],
min=0,
max=0.3,
))
print_image(input_image.visualize(
bands=['fmask'],
min=0,
max=4,
palette=['808080', '0000C0', '404040', '00FFFF', 'FFFFFF'],
))
Explanation: Now we can use our helper function to quickly visualize the image and label data. The Fmask values are:
0 | 1 | 2 | 3 | 4
:---:|:---:|:---:|:---:|:---:
Clear | Water | Shadow | Snow | Cloud
End of explanation
def download_tif(image, scale):
url = ee.data.makeDownloadUrl(ee.data.getDownloadId({
'image': image.serialize(),
'scale': '%d' % scale,
'filePerBand': 'false',
'name': 'data',
}))
local_zip, headers = urllib.urlretrieve(url)
with zipfile.ZipFile(local_zip) as local_zipfile:
return local_zipfile.extract('data.tif', tempfile.mkdtemp())
def load_image(image, scale):
local_tif_filename = download_tif(image, scale)
dataset = gdal.Open(local_tif_filename, gdal.GA_ReadOnly)
bands = [dataset.GetRasterBand(i + 1).ReadAsArray() for i in range(dataset.RasterCount)]
return numpy.stack(bands, 2)
Explanation: Fetch the Input Data
First we define some helper functions to download raw data from Earth Engine as numpy arrays.
We use the getDownloadId() function, which only works for modestly sized datasets. For larger datasets, a better approach would be to initiate a batch Export from Earth Engine to Cloud Storage, which you could easily manage right here in Datalab too.
End of explanation
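For reference, the batch Export route mentioned above might look roughly like the sketch below; it is not needed for this notebook, and the bucket name is a placeholder you would replace with your own Cloud Storage bucket.
task = ee.batch.Export.image.toCloudStorage(
    image=input_image,
    description='landsat_fmask_export',   # hypothetical task name
    bucket='my-gcs-bucket',                # placeholder bucket
    fileNamePrefix='data',
    scale=240,
    maxPixels=1e9)
task.start()
# task.status() can then be polled (for example from Datalab) to monitor progress.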
mask = input_image.mask().reduce('min')
data = load_image(input_image.addBands(mask), scale=240)
data[:,:,7] = numpy.equal(data[:,:,7], 4)
Explanation: Now we can use that function to load the data from Earth Engine, including a valid data band, as a numpy array. This may take a few seconds. We also convert the Fmask band to a binary cloud label (i.e. fmask=4).
End of explanation
pyplot.imshow(numpy.clip(data[:,:,[3,2,1]] * 3, 0, 1))
pyplot.show()
Explanation: Display the local data. This time, for variety, we display it as an NRG false-color image. We can use pyplot to display local numpy arrays.
End of explanation
HOLDOUT_FRACTION = 0.1
# Reshape into a single vector of pixels.
data_vector = data.reshape([data.shape[0] * data.shape[1], data.shape[2]])
# Select only the valid data and shuffle it.
valid_data = data_vector[numpy.equal(data_vector[:,8], 1)]
numpy.random.shuffle(valid_data)
# Hold out a fraction of the labeled data for validation.
training_size = int(valid_data.shape[0] * (1 - HOLDOUT_FRACTION))
training_data = valid_data[0:training_size,:]
validation_data = valid_data[training_size:-1,:]
# Compute per-band means and standard deviations of the input bands.
data_mean = training_data[:,0:7].mean(0)
data_std = training_data[:,0:7].std(0)
valid_data.shape
Explanation: Preprocess the Input Data
Select the valid pixels and hold out a fraction for use as validation data. Compute per-band means and standard deviations of the training data for normalization.
End of explanation
def make_nn_layer(input, output_size):
input_size = input.get_shape().as_list()[1]
weights = tf.Variable(tf.truncated_normal(
[input_size, output_size],
stddev=1.0 / math.sqrt(float(input_size))))
biases = tf.Variable(tf.zeros([output_size]))
return tf.matmul(input, weights) + biases
Explanation: Build the TensorFlow Model
We start with a helper function to build a simple TensorFlow neural network layer.
End of explanation
NUM_INPUT_BANDS = 7
NUM_HIDDEN_1 = 20
NUM_HIDDEN_2 = 20
NUM_CLASSES = 2
input = tf.placeholder(tf.float32, shape=[None, NUM_INPUT_BANDS])
labels = tf.placeholder(tf.float32, shape=[None])
normalized = (input - data_mean) / data_std
hidden1 = tf.nn.tanh(make_nn_layer(normalized, NUM_HIDDEN_1))
hidden2 = tf.nn.tanh(make_nn_layer(hidden1, NUM_HIDDEN_2))
logits = make_nn_layer(hidden2, NUM_CLASSES)
outputs = tf.argmax(logits, 1)
int_labels = tf.to_int64(labels)
cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits, int_labels, name='xentropy')
train_step = tf.train.AdamOptimizer().minimize(cross_entropy)
correct_prediction = tf.equal(outputs, int_labels)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
Explanation: Here we define our TensorFlow model, a neural network with two hidden layers with tanh() nonlinearities. The main network has two outputs, continuous-valued “logits” representing non-cloud and cloud, respectively. The binary output is interpreted as the argmax of these outputs.
We define a training step, which uses Kingma and Ba's Adam algorithm to minimize the cross-entropy between the logits and the training data. Finally, we define a simple overall percentage accuracy measure.
End of explanation
BATCH_SIZE = 1000
NUM_BATCHES = 1000
with tf.Session() as sess:
sess.run(tf.initialize_all_variables())
validation_dict = {
input: validation_data[:,0:7],
labels: validation_data[:,7],
}
for i in range(NUM_BATCHES):
batch = training_data[numpy.random.choice(training_size, BATCH_SIZE, False),:]
train_step.run({input: batch[:,0:7], labels: batch[:,7]})
if i % 100 == 0 or i == NUM_BATCHES - 1:
print('Accuracy %.2f%% at step %d' % (accuracy.eval(validation_dict) * 100, i))
output_data = outputs.eval({input: data_vector[:,0:7]})
Explanation: Train the Neural Network
Now train the neural network, using batches of training data drawn randomly from the training data pool. We periodically compute the accuracy against the validation data. When we're done training, we apply the model to the complete input data set.
This simple notebook performs all TensorFlow operations locally. However, for larger analyses you could bring up a cluster of TensorFlow workers to parallelize the computation, all controlled from within Datalab.
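As a rough, hypothetical sketch of that idea (not used in this notebook), a small cluster of TensorFlow workers could be described and targeted like this; the host addresses are placeholders:
cluster = tf.train.ClusterSpec({'worker': ['worker0.example.com:2222',
                                           'worker1.example.com:2222']})
server = tf.train.Server(cluster, job_name='worker', task_index=0)
with tf.device('/job:worker/task:0'):
    # Graph nodes created here would be placed on the first worker.
    pass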
End of explanation
output_image = output_data.reshape([data.shape[0], data.shape[1]])
red = numpy.where(data[:,:,8], output_image, 0.5)
blue = numpy.where(data[:,:,8], data[:,:,7], 0.5)
green = numpy.minimum(red, blue)
comparison_image = numpy.dstack((red, green, blue))
pyplot.figure(figsize = (12,12))
pyplot.imshow(comparison_image)
pyplot.show()
Explanation: Inspect the Results
Here we dislay the results. The red band corresponds to the TensorFlow output and the blue band corresponds to the labeled training data, so pixels that are red and blue correspond to disagreements between the model and the training data. (There aren't many: look carefully around the fringes of the clouds.)
End of explanation
pyplot.figure(figsize = (12,12))
pyplot.imshow(comparison_image[300:500,600:,:], interpolation='nearest')
pyplot.show()
Explanation: We can zoom in on a particular region over on the right side of the image to see some of the disagreements. Red pixels represent commission errors and blue pixels represent omission errors relative to the labeled input data.
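As a small illustrative sketch (using the arrays already defined above), the two error types can be quantified over the valid pixels like this:
valid = numpy.equal(data[:,:,8], 1)
predicted = output_image[valid].astype(bool)
labeled = data[:,:,7][valid].astype(bool)
commission_rate = numpy.logical_and(predicted, numpy.logical_not(labeled)).mean()
omission_rate = numpy.logical_and(numpy.logical_not(predicted), labeled).mean()
print('Commission: %.2f%%, omission: %.2f%% of valid pixels' % (100*commission_rate, 100*omission_rate))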
End of explanation |
5,501 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting Started
During this tutorial, we are using IPython/Jupyter Notebooks. Jupyter notebooks are a web based Python development environment allowing you to combine documentation (markdown), code, and their results into a single document. This follows a similar idea to Mathematica.
Installation
Anaconda is a free Python distribution that includes the most common Python packages for data analysis out of the box. Prebuilt packages for the different platforms make it simple to install and quick to get started with.
TASK
Step1: Interactive Python basics
Python is a dynamically typed language. The output of the last line in a cell will be printed. Individual values can also be printed using the print(...) function. Variables are just declared and assigned. Functions are first-class objects, and Python can be used to program in a functional style. Some simple examples | Python Code:
#include some package which we use later on
import numpy as np
#test np.ar -> tab
a = np.array([1,2,3,4])
#test np.array -> shift-tab or np.array?
Explanation: Getting Started
During this tutorial, we are using IPython/Jupyter Notebooks. Jupyter notebooks are a web based Python development environment allowing you to combine documentation (markdown), code, and their results into a single document. This follows a similar idea to Mathematica.
Installation
Anaconda is a free Python distribution that includes the most common Python packages for data analysis out of the box. Prebuilt packages for the different platforms make it simple to install and quick to get started with.
TASK: Install Anaconda on your machine (Please select the Python 3 download and in the lab the 32-bit version)
Alternatively, use the deployed version directly:
We use different frameworks/libraries in this tutorial:
* Numpy and Pandas for data manipulation
* Matplotlib and Seaborn for visualization
* Scikit-learn for simple machine learning
Everything except for Seaborn is already included in Anaconda by default. So, lets install it.
TASK: Open a shell (bash, cmd) and execute:
bash
conda install seaborn
Deployment
Deploying Jupyter notebooks is quite simple. mybinder.org provides a free service that turns a GitHub repository into a collection of interactive notebooks accessible online.
Hint: If you have an index.ipynb notebook inside of a directory, this will be the default one.
Usage
Launching Jupyter is simple, just go to a command line, navigate to your desired directory and execute:
bash
ipython notebook
This will open your web browser with the IPython development environment in the current working directory. We are using this interactive tutorial as a starting point. Clone it, navigate to it, and launch IPython Notebook.
TASK: clone this tutorial repository and launch the IPython environment inside of it
bash
git clone https://github.com/sgratzl/ipython-tutorial-VA2015.git
cd ipython-tutorial-VA2015
ipython notebook
First Steps
Juypter notebooks consists of individual cells. There are two major cell types: Code and Markdown.
Useful keyboard shortcuts:
* Enter: enter edit mode of the selected cell
* Shift-Enter: run cell, select below
* Ctrl-Enter: run cell
* Alt-Enter: run cell, insert a new cell below
Getting Help:
Tab code completion or indent
Shift-Tab for a function, e.g. argument list
function? query the python docstring for the given function
End of explanation
1+2
3+4
10/2
print(5+2)
3+2
a = 5+2
b = 9
a/b
def sum(a,b): #indent is important in Python!
return a+b
sum(4,4)
def sub(arg1,arg2):
return arg1-arg2
def calc(f, a, b):
return f(a,b)
#functions are first level objects, e.g., can be passed as argument to another function
print('sum ', calc(sum, a, b))
print('sub', calc(sub, a, b))
#array
arr = [1,2,3,4]
#maps aka dictionaries/dicts
dictionary = { 'a': 'Alpha', 'b': 'Beta'}
#array transformation
arr2 = [ a * 2 for a in arr]
dict2 = { k : v.upper() for k,v in dictionary.items()}
print(arr2)
print(dict2)
if a < 5:
print ('small')
else:
print ('large')
c = 'small' if a < 5 else 'large'
c
#what else: generators, iterators, classes, tuples, ...
Explanation: Interactive Python basics
Python is a dynamically typed language. The output of the last line in a cell will be printed. Individual values can also be printed using the print(...) function. Variables are just declared and assigned. Functions are first-class objects, and Python can be used to program in a functional style. Some simple examples:
End of explanation |
5,502 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Testing the Rating System
Let's generate some Bernoulli-distributed data based on my soon-to-be rating model and see if we can recover the parameters via PyMC
Step1: Generate players and Data
Step2: Non-MCMC Model
Step3: Pymc Modelling | Python Code:
import pandas as pd
import numpy as np
from scipy.stats import norm, bernoulli
%matplotlib inline
Explanation: Testing the Rating System
Let's generate some Bernoulli-distributed data based on my soon-to-be rating model and see if we can recover the parameters via PyMC
End of explanation
true_team_skill = {
'A': {
'A Good': [500, 70],
'A Decent': [400, 60],
'A Ok': [350, 40],
},
'B': {
'B Amazing': [600, 50],
'B Decent': [400, 60],
'B Bad': [150, 70],
},
'C': {
'C Good': [350, 70],
'C Good': [350, 70],
'C Good': [350, 70],
},
'D': {
'D Good': [350, 70],
'D Inconsistent': [550, 200],
'D Consistent': [450, 40],
},
'E': {
'E Bad': [250, 70],
'E Inconsistent': [400, 220],
'E Bad': [150, 70],
},
}
beta = 50
num_matches = 300
n_teams = len(true_team_skill)
match_ups = np.random.choice(list(true_team_skill.keys()), (num_matches, 2), p=[0.1, 0.25, 0.25, 0.35, 0.05])
match_ups = match_ups[match_ups[:,0] != match_ups[:,1]]
print("%i Number of Generated Matchups" % len(match_ups))
winner = []
for match in match_ups:
team_one = true_team_skill[match[0]]
team_two = true_team_skill[match[1]]
team_skill_one = np.sum([x[0] for i,x in team_one.items()])
team_skill_two = np.sum([x[0] for i,x in team_two.items()])
team_sigma_one = np.sum([x[1]**2 for i,x in team_one.items()])
team_sigma_two = np.sum([x[1]**2 for i,x in team_two.items()])
    p_one = 1.-norm.cdf(0, loc=team_skill_one-team_skill_two, scale=np.sqrt(team_sigma_one + team_sigma_two + beta**2))
    res = bernoulli.rvs(p_one)
    #print('%s vs %s - p: %.1f - %s won' % (match[0], match[1], p_one*100., match[int(np.logical_not(res))]))
    winner.append(match[int(np.logical_not(res))])
np.sqrt(team_sigma_two)
obs = pd.DataFrame(match_ups, columns=['Team 1', 'Team 2'])
obs['winner'] = winner
obs.head()
Explanation: Generate players and Data
End of explanation
from scipy.optimize import minimize
skills_0 = np.array([1000.]*5 + [150.]*5 + [50.])
def loglike(y,p):
return -1.*(np.sum(y*np.log(p)+(1.-y)*np.log(1-p)))
def obj(skills):
beta = 50.
mean_diff = skills[obs['Team 1'].map(mapping).values] - skills[obs['Team 2'].map(mapping).values]
var_diff = skills[obs['Team 1'].map(mapping).values + 5]**2 + skills[obs['Team 1'].map(mapping).values + 5]**2 + skills[-1]**2
p = 1.-norm.cdf(0., loc=mean_diff, scale = np.sqrt(var_diff))
return loglike((obs['Team 1'] == obs['winner']).values, p)
g = minimize(obj, x0=skills_0)
opt_skill = g.x
print(opt_skill)
plots = norm.rvs(opt_skill[:5], opt_skill[5:-1], size=(2000,5))
f, ax = plt.subplots(figsize=(12,8))
[sns.kdeplot(plots[:,i], shade=True, alpha=0.55, legend=True, ax=ax, label=i) for i in range(5)]
infer_mean = opt_skill[:5]
infer_std = opt_skill[5:-1]
infer_beta = opt_skill[-1]
err = {'mcmc': [], 'opt': []}
for pair, k in obs.groupby(['Team 1', 'Team 2']).count().itertuples():
param_one = true_team_skill[pair[0]]
param_two = true_team_skill[pair[1]]
p_one_true = 1.-norm.cdf(0, loc=param_one[0]-param_two[0], scale=np.sqrt(param_one[1]**2 + param_two[1]**2 + 50.**2))
p_one_opt = 1.-norm.cdf(0, loc=infer_mean[mapping[pair[0]]]-infer_mean[mapping[pair[1]]], scale=np.sqrt(infer_std[mapping[pair[0]]]**2 + infer_std[mapping[pair[1]]]**2 + infer_beta**2))
p_one_mcmc = (1./(1+np.exp(-0.005*(trace['rating'][:,mapping[pair[0]]] - trace['rating'][:,mapping[pair[1]]])))).mean()
err['mcmc'].append(p_one_true-p_one_mcmc); err['opt'].append(p_one_true-p_one_opt);
print('%s vs %s : true - %.2f pct | optim - %.2f pct | mcmc - %.2f pct' %
(pair[0], pair[1], p_one_true*100., p_one_opt*100., p_one_mcmc*100.))
np.mean(np.power(err['mcmc'],2))
np.mean(np.power(err['opt'],2))
Explanation: Non-MCMC Model
End of explanation
import pymc3 as pm
import theano.tensor as tt
mapping = {v:k for k,v in dict(enumerate(['A', 'B', 'C', 'D', 'E'])).items()}
with pm.Model() as rating_model:
#beta = pm.Normal('beta', 50., 10.)
skills = pm.Normal('rating', 1000., 150., shape=n_teams)
performance = pm.Normal('performance', skills, 50., shape=n_teams)
diff = performance[obs['Team 1'].map(mapping).values] - performance[obs['Team 2'].map(mapping).values]
p = tt.nnet.sigmoid(0.005*diff)
err = pm.DensityDist('observed', lambda x: tt.sum(x*tt.log(p)+(1.-x)*tt.log(1-p)), observed=(obs['Team 1'] == obs['winner']).values)
#err = pm.Bernoulli('observed', p=p, observed=(obs['Team 1'] == obs['winner']).values)
with rating_model:
#start = pm.find_MAP()
trace = pm.sample(10000, tune=0) #step=pm.Metropolis(), start=start,
pm.traceplot(trace)
pm.plot_posterior(trace)
infer_mean = trace['rating'].mean(axis=0)
infer_std = trace['rating'].std(axis=0)
for pair, k in obs.groupby(['Team 1', 'Team 2']).count().itertuples():
param_one = true_team_skill[pair[0]]
param_two = true_team_skill[pair[1]]
p_one_true = 1.-norm.cdf(0, loc=param_one[0]-param_two[0], scale=np.sqrt(param_one[1]**2 + param_two[1]**2 + beta**2))
p_one_infer = (1./(1+np.exp(-0.005*(trace['rating'][:,mapping[pair[0]]] - trace['rating'][:,mapping[pair[1]]])))).mean()
#p_one_infer = 1.-norm.cdf(0, loc=infer_one[0]-infer_two[0], scale=np.sqrt(infer_one[1]**2 + infer_two[1]**2 + 50.**2))
print('%s vs %s : true - %.2f pct | infer - %.2f pct' % (pair[0], pair[1], p_one_true*100., p_one_infer*100.))
Explanation: Pymc Modelling
End of explanation |
5,503 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
%matplotlib inline
Example
Step1: Backpropagation
Now that feedforward can be done, the next step is to decide how the parameters should change such that they minimize the cost function.
Recall that the chosen cost function for this problem is
$$
c(x, P) = \sum_i \big(g_t'(x_i, P) - ( -\gamma g_t(x_i, P) )\big)^2
$$
In order to minimize it, an optimization method must be chosen.
Here, gradient descent with a constant step size has been chosen.
Before looking at the gradient descent method, let us set up the cost function along with the right side of the ODE and trial solution.
Step2: Gradient Descent
The idea of the gradient descent algorithm is to update the parameters in the direction in which the cost function decreases towards a minimum.
In general, the update of some parameters $\vec \omega$, given a cost function $c(x, \vec \omega)$ parameterized by those weights, goes as follows
Step3: An implementation of a Deep Neural Network
As previously stated, a Deep Neural Network (DNN) follows the same concept as a neural network, but with more than one hidden layer. Suppose that the network has $N_{\text{hidden}}$ hidden layers where the $l$-th layer has $N_{\text{hidden}}^{(l)}$ neurons. The input is still assumed to be an array of size $1 \times N$. The network must now try to optimize its output with respect to the collection of weights and biases $P = \big\{P_{\text{input} }, \ P_{\text{hidden} }^{(1)}, \ P_{\text{hidden} }^{(2)}, \ \dots , \ P_{\text{hidden} }^{(N_{\text{hidden}})}, \ P_{\text{output} }\big\}$.
Feedforward
The feedforward step is similar to that for the neural network, but now considering more than one hidden layer.
The $i$-th neuron at layer $l$ receives the result $\vec{x}_j^{(l-1),\text{hidden} }$ from the $j$-th neuron at layer $l-1$. The $i$-th neuron at layer $l$ weights all of the elements in $\vec{x}_j^{(l-1),\text{hidden} }$ with a weight vector $\vec w_{i,j}^{(l), \ \text{hidden} }$ with as many weights as there are elements in $\vec{x}_j^{(l-1),\text{hidden} }$, and adds a bias $b_i^{(l), \ \text{hidden} }$
Step4: Backpropagation
This step is very similar to that for the neural network: the idea is the same, but with more parameters to update. Again, there is no need to compute the gradients analytically, since Autograd does the work for us.
Step5: Solving the ODE
Finally, having set up the networks we are ready to use them to solve the ODE problem.
If possible, it is always useful to have an analytical solution at hand to test whether the implementation gives reasonable results.
As a recap, the equation to solve is
$$
g'(x) = -\gamma g(x)
$$
where $g(0) = g_0$ with $\gamma$ and $g_0$ being some chosen values.
Solving this analytically yields
$$
g(x) = g_0\exp(-\gamma x)
$$
By making the analytical solution available in our program, it is possible to check the performance of our neural networks.
Step6: Using neural network
The code below solves the ODE using a neural network. The number of values for the input $\vec x$ is 10, the number of hidden neurons in the hidden layer is 10, and the step size used in gradient descent is $\lambda = 0.001$. The program updates the weights and biases in the network num_iter times. Finally, it plots the results from using the neural network along with the analytical solution. Feel free to experiment with different values and see how the performance of the network changes!
Step7: Using a deep neural network | Python Code:
# Autograd will be used for later, so the numpy wrapper for Autograd must be imported
import autograd.numpy as np
from autograd import grad, elementwise_grad
import autograd.numpy.random as npr
from matplotlib import pyplot as plt
def sigmoid(z):
return 1/(1 + np.exp(-z))
def neural_network(params, x):
# Find the weights (including and biases) for the hidden and output layer.
# Assume that params is a list of parameters for each layer.
# The biases are the first element for each array in params,
# and the weights are the remaning elements in each array in params.
w_hidden = params[0]
w_output = params[1]
# Assumes input x being an one-dimensional array
num_values = np.size(x)
x = x.reshape(-1, num_values)
# Assume that the input layer does nothing to the input x
x_input = x
## Hidden layer:
# Add a row of ones to include bias
x_input = np.concatenate((np.ones((1,num_values)), x_input ), axis = 0)
z_hidden = np.matmul(w_hidden, x_input)
x_hidden = sigmoid(z_hidden)
## Output layer:
# Include bias:
x_hidden = np.concatenate((np.ones((1,num_values)), x_hidden ), axis = 0)
z_output = np.matmul(w_output, x_hidden)
x_output = z_output
return x_output
Explanation: %matplotlib inline
Example : Exponential decay in one dimension
In this notebook we will see how a neural network performs when solving the equation
$$
\label{eq:ode}
g'(x) = -\gamma g(x)
$$
where $g(0) = g_0$ with $\gamma$ and $g_0$ being some chosen values. This equation is an ordinary differential equation since the function we have to solve for, $g(x)$, is of one variable.
In this example, $\gamma = 2$ and $g_0 = 10$ but feel free to change them and see how the neural network performs.
Trial solution
To begin with, a trial solution $g_t(x)$ must be chosen. A general trial solution for ordinary differential equations could be
$$
g_t(x, P) = h_1(x) + h_2(x, N(x, P))
$$
with $h_1(x)$ ensuring that $g_t(x)$ satisfies some conditions and $h_2(x,N(x, P))$ an expression involving $x$ and the output from the neural network $N(x,P)$, with $P$ being the collection of the weights and biases for each layer. It is assumed that there are no weights and biases at the input layer, so $P = \{ P_{\text{hidden}}, P_{\text{output}} \}$. If there are $N_{\text{hidden} }$ neurons in the hidden layer, then $P_{\text{hidden}}$ is an $N_{\text{hidden} } \times 2$ matrix. The first column in $P_{\text{hidden} }$ represents the bias for each neuron in the hidden layer and the second column represents the weights for each neuron. If there are $N_{\text{output} }$ neurons in the output layer, then $P_{\text{output}}$ is an $N_{\text{output} } \times (1 + N_{\text{hidden} })$ matrix. Its first column represents the bias of each neuron and the remaining columns represent the weights to each neuron.
It is given that $g(0) = g_0$. The trial solution must fulfill this condition to be a proper solution of \eqref{eq:ode}. A possible way to ensure that $g_t(0, P) = g_0$ is to let $h_2(x, N(x,P)) = x\cdot N(x,P)$ and $h_1(x) = g_0$. This gives the following trial solution:
$$
\label{eq:trial}
g_t(x, P) = g_0 + x \cdot N(x, P)
$$
Reformulating the problem
Often, the role of a neural network is to adjust its parameters so that the output minimizes some given error criterion. This criterion, the cost or loss function, is a measure of how much the output of the network deviates from some given known answers. A reformulation of \eqref{eq:ode} must therefore be done, such that it describes a problem a neural network can solve.
The neural network must find the set of weights and biases $P$ such that the trial solution in \eqref{eq:trial} satisfies \eqref{eq:ode}. The trial solution has been chosen such that it already satisfies the condition $g(0) = g_0$. What remains is to find $P$ such that
$$
\label{eq:nnmin}
g_t'(x, P) = - \gamma g_t(x, P)
$$
is fulfilled as well as possible. The left hand and right hand sides of \eqref{eq:nnmin} must be computed separately, and then the neural network will choose the weights and biases in $P$ that make the two sides as equal as possible. Making the two sides of an equation as equal as possible means making their absolute or squared difference as close to zero as possible. In this case, the squared difference is an appropriate measure of how erroneous the trial solution is with respect to $P$ of the neural network. Therefore, the problem our network must solve is
$$
\min_{P}\Big\{ \big(g_t'(x, P) - ( -\gamma g_t(x, P) )\big)^2 \Big\}
$$
or, in terms of weights and biases for each layer:
$$
\min_{P_{\text{hidden} }, \ P_{\text{output} }}\Big\{ \big(g_t'(x, \{ P_{\text{hidden} }, P_{\text{output} }\}) - ( -\gamma g_t(x, \{ P_{\text{hidden} }, P_{\text{output} }\}) )\big)^2 \Big\}
$$
for an input value $x$.
If the neural network evaluates $g_t(x, P)$ at more values of $x$, say $N$ values $x_i$ for $i = 1, \dots, N$, then the total error to minimize is
$$ \label{eq:min}
\min_{P}\Big\{\sum_i \big(g_t'(x_i, P) - ( -\gamma g_t(x_i, P) )\big)^2 \Big\}
$$
Letting $c(x, P) = \sum_i \big(g_t'(x_i, P) - ( -\gamma g_t(x_i, P) )\big)^2$ denote the cost function, the minimization problem our network must solve is
$$
\min_{P} c(x, P)
$$
or in terms of $P_{\text{hidden} }$ and $P_{\text{output} }$
$$
\min_{P_{\text{hidden} }, \ P_{\text{output} }} c(x, \{P_{\text{hidden} }, P_{\text{output} }\})
$$
Creating a simple Deep Neural Net
The next step is to decide what the neural net $N(x, P)$ in \eqref{eq:trial} should look like. In this case, the neural network is made from scratch to better understand how a neural network works, gain more control over its architecture, and see how Autograd can be used to simplify the implementation.
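As a tiny illustration of that last point (purely for intuition, not needed later), Autograd can differentiate an ordinary Python function automatically; the network code below leans on the same machinery through grad and elementwise_grad:
def cube(x):
    return x**3
dcube = grad(cube)
print(dcube(2.0))  # analytically 3*x**2 = 12.0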
An implementation of a Neural Network
Since a deep neural network (DNN) is a neural network with more than one hidden layer, we first look at how to implement a plain neural network. Having an implementation of a neural network at hand, an extension of it into a deep neural network will (hopefully) be painless.
For simplicity, it is assumed that the input is an array $\vec x = (x_1, \dots, x_N)$ with $N$ elements. It is at these points the neural network should find $P$ such that it fulfills \eqref{eq:min}.
Feedforward
First, a feedforward of the inputs must be done. This means that $\vec x$ must be passed through an input layer, a hidden layer and an output layer. The input layer in this case does not need to process the data any further. The input layer will consist of $N_{\text{input} }$ neurons, passing its elements to each neuron in the hidden layer. The number of neurons in the hidden layer will be $N_{\text{hidden} }$.
For the $i$-th neuron in the hidden layer with weight $w_i^{\text{hidden} }$ and bias $b_i^{\text{hidden} }$, the weighting of the input from the $j$-th neuron at the input layer is:
$$
\begin{aligned}
z_{i,j}^{\text{hidden}} &= b_i^{\text{hidden}} + w_i^{\text{hidden}}x_j \
&=
\begin{pmatrix}
b_i^{\text{hidden}} & w_i^{\text{hidden}}
\end{pmatrix}
\begin{pmatrix}
1 \
x_j
\end{pmatrix}
\end{aligned}
$$
The result after weighting the input at the $i$-th hidden neuron can be written as a vector:
$$
\begin{aligned}
\vec{z}_{i}^{\text{hidden}} &= \Big( b_i^{\text{hidden}} + w_i^{\text{hidden}}x_1 , \ b_i^{\text{hidden}} + w_i^{\text{hidden}} x_2, \ \dots \, , \ b_i^{\text{hidden}} + w_i^{\text{hidden}} x_N\Big) \\
&=
\begin{pmatrix}
b_i^{\text{hidden}} & w_i^{\text{hidden}}
\end{pmatrix}
\begin{pmatrix}
1 & 1 & \dots & 1 \
x_1 & x_2 & \dots & x_N
\end{pmatrix} \
&= \vec{p}_{i, \text{hidden}}^T X
\end{aligned}
$$
It is the vector $\vec{p}_{i, \text{hidden}}^T$ that defines each row in $P_{\text{hidden} }$, which contains the weights for the neural network to minimize according to \eqref{eq:min}.
After having found $\vec{z}_{i}^{\text{hidden}} $ for every neuron $i$ in the hidden layer, the vector will be sent to an activation function $a_i(\vec{z})$.
In this example, the sigmoid function has been used:
$$
f(z) = \frac{1}{1 + \exp{(-z)}}
$$
but feel free to choose any activation function you like.
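For instance, two possible drop-in alternatives (not used in this example) could be defined as:
def tanh_act(z):
    return np.tanh(z)
def relu(z):
    return np.maximum(z, 0)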
The output $\vec{x}_i^{\text{hidden} }$from each $i$-th hidden neuron is:
$$
\vec{x}_i^{\text{hidden} } = f\big( \vec{z}_{i}^{\text{hidden}} \big)
$$
The outputs $\vec{x}_i^{\text{hidden} } $ are then sent to the output layer.
The output layer consists of one neuron in this case, and combines the output from each of the neurons in the hidden layer. The output layer combines the results from the hidden layer using some weights $ w_i^{\text{output}}$ and biases $b_i^{\text{output}}$. In this case, it is assumed that the number of neurons in the output layer is one.
The procedure for weighting the output of neuron $j$ in the hidden layer to the $i$-th neuron in the output layer is similar to that for the hidden layer described previously.
$$
\begin{aligned}
z_{1,j}^{\text{output}} & =
\begin{pmatrix}
b_1^{\text{output}} & \vec{w}_1^{\text{output}}
\end{pmatrix}
\begin{pmatrix}
1 \
\vec{x}_j^{\text{hidden}}
\end{pmatrix}
\end{aligned}
$$
Expressing $z_{1,j}^{\text{output}}$ as a vector gives the following procedure of weighting the inputs from the hidden layer:
$$
\vec{z}_{1}^{\text{output}} =
\begin{pmatrix}
b_1^{\text{output}} & \vec{w}_1^{\text{output}}
\end{pmatrix}
\begin{pmatrix}
1 & 1 & \dots & 1 \
\vec{x}_1^{\text{hidden}} & \vec{x}_2^{\text{hidden}} & \dots & \vec{x}_N^{\text{hidden}}
\end{pmatrix}
$$
In this case we seek a continuous range of output values since we are approximating a function. This means that after computing $\vec{z}_{1}^{\text{output}}$ the neural network has finished its feedforward step, and $\vec{z}_{1}^{\text{output}}$ is the final output of the network.
End of explanation
# The trial solution using the neural network:
def g_trial(x,params, g0 = 10):
return g0 + x*neural_network(params,x)
# The right side of the ODE:
def g(x, g_trial, gamma = 2):
return -gamma*g_trial
# The cost function:
def cost_function(P, x):
# Evaluate the trial function with the current parameters P
g_t = g_trial(x,P)
# Find the derivative w.r.t x of the neural network
d_net_out = elementwise_grad(neural_network,1)(P,x)
# Find the derivative w.r.t x of the trial function
d_g_t = elementwise_grad(g_trial,0)(x,P)
# The right side of the ODE
func = g(x, g_t)
err_sqr = (d_g_t - func)**2
cost_sum = np.sum(err_sqr)
return cost_sum
Explanation: Backpropagation
Now that feedforward can be done, the next step is to decide how the parameters should change such that they minimize the cost function.
Recall that the chosen cost function for this problem is
$$
c(x, P) = \sum_i \big(g_t'(x_i, P) - ( -\gamma g_t(x_i, P) )\big)^2
$$
In order to minimize it, an optimization method must be chosen.
Here, gradient descent with a constant step size has been chosen.
Before looking at the gradient descent method, let us set up the cost function along with the right side of the ODE and trial solution.
End of explanation
def solve_ode_neural_network(x, num_neurons_hidden, num_iter, lmb):
## Set up initial weigths and biases
# For the hidden layer
p0 = npr.randn(num_neurons_hidden, 2 )
# For the output layer
p1 = npr.randn(1, num_neurons_hidden + 1 ) # +1 since bias is included
P = [p0, p1]
print('Initial cost: %g'%cost_function(P, x))
## Start finding the optimal weigths using gradient descent
# Find the Python function that represents the gradient of the cost function
# w.r.t the 0-th input argument -- that is the weights and biases in the hidden and output layer
cost_function_grad = grad(cost_function,0)
# Let the update be done num_iter times
for i in range(num_iter):
# Evaluate the gradient at the current weights and biases in P.
# The cost_grad consist now of two arrays;
# one for the gradient w.r.t P_hidden and
# one for the gradient w.r.t P_output
cost_grad = cost_function_grad(P, x)
P[0] = P[0] - lmb * cost_grad[0]
P[1] = P[1] - lmb * cost_grad[1]
print('Final cost: %g'%cost_function(P, x))
return P
Explanation: Gradient Descent
The idea of the gradient descent algorithm is to update the parameters in the direction in which the cost function decreases towards a minimum.
In general, the update of some parameters $\vec \omega$, given a cost function $c(x, \vec \omega)$ parameterized by those weights, goes as follows:
$$
\vec \omega_{\text{new} } = \vec \omega - \lambda \nabla_{\vec \omega} c(x, \vec \omega)
$$
for a number of iterations or until $ \big|\big| \vec \omega_{\text{new} } - \vec \omega \big|\big|$ is smaller than some given tolerance.
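A generic sketch of this update rule with such a tolerance-based stopping criterion could look as follows (the training routine below instead simply runs a fixed number of iterations):
def gradient_descent(grad_c, omega, lmb=0.001, tol=1e-8, max_iter=10000):
    # grad_c(omega) is assumed to return the gradient of the cost at omega
    for _ in range(max_iter):
        omega_new = omega - lmb * grad_c(omega)
        if np.linalg.norm(omega_new - omega) < tol:
            return omega_new
        omega = omega_new
    return omega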
The value of $\lambda$ decides how large a step the algorithm takes in the direction of $ \nabla_{\vec \omega} c(x, \vec \omega)$. The notation $\nabla_{\vec \omega}$ denotes the gradient with respect to the elements in $\vec \omega$.
In our case, we have to minimize the cost function $c(x, P)$ with respect to the two sets of weights and biases, that is, for the hidden layer $P_{\text{hidden} }$ and for the output layer $P_{\text{output} }$.
This means that $P_{\text{hidden} }$ and $P_{\text{output} }$ is updated by
$$
\begin{aligned}
P_{\text{hidden},\text{new}} &= P_{\text{hidden}} - \lambda \nabla_{P_{\text{hidden}}} c(x, P) \
P_{\text{output},\text{new}} &= P_{\text{output}} - \lambda \nabla_{P_{\text{output}}} c(x, P)
\end{aligned}
$$
It might look cumbersome to set up the correct expressions for finding these gradients. Luckily, Autograd comes to the rescue.
End of explanation
def deep_neural_network(deep_params, x):
# N_hidden is the number of hidden layers
N_hidden = np.size(deep_params) - 1 # -1 since params consist of parameters to all the hidden layers AND the output layer
# Assumes input x being an one-dimensional array
num_values = np.size(x)
x = x.reshape(-1, num_values)
# Assume that the input layer does nothing to the input x
x_input = x
# Due to multiple hidden layers, define a variable referencing to the
# output of the previous layer:
x_prev = x_input
## Hidden layers:
for l in range(N_hidden):
# From the list of parameters P; find the correct weigths and bias for this layer
w_hidden = deep_params[l]
# Add a row of ones to include bias
x_prev = np.concatenate((np.ones((1,num_values)), x_prev ), axis = 0)
z_hidden = np.matmul(w_hidden, x_prev)
x_hidden = sigmoid(z_hidden)
# Update x_prev such that next layer can use the output from this layer
x_prev = x_hidden
## Output layer:
# Get the weights and bias for this layer
w_output = deep_params[-1]
# Include bias:
x_prev = np.concatenate((np.ones((1,num_values)), x_prev), axis = 0)
z_output = np.matmul(w_output, x_prev)
x_output = z_output
return x_output
Explanation: An implementation of a Deep Neural Network
As previously stated, a Deep Neural Network (DNN) follows the same concept as a neural network, but with more than one hidden layer. Suppose that the network has $N_{\text{hidden}}$ hidden layers where the $l$-th layer has $N_{\text{hidden}}^{(l)}$ neurons. The input is still assumed to be an array of size $1 \times N$. The network must now try to optimize its output with respect to the collection of weights and biases $P = \big\{P_{\text{input} }, \ P_{\text{hidden} }^{(1)}, \ P_{\text{hidden} }^{(2)}, \ \dots , \ P_{\text{hidden} }^{(N_{\text{hidden}})}, \ P_{\text{output} }\big\}$.
Feedforward
The feedforward step is similar to that for the neural network, but now considering more than one hidden layer.
The $i$-th neuron at layer $l$ receives the result $\vec{x}_j^{(l-1),\text{hidden} }$ from the $j$-th neuron at layer $l-1$. The $i$-th neuron at layer $l$ weights all of the elements in $\vec{x}_j^{(l-1),\text{hidden} }$ with a weight vector $\vec w_{i,j}^{(l), \ \text{hidden} }$ with as many weights as there are elements in $\vec{x}_j^{(l-1),\text{hidden} }$, and adds a bias $b_i^{(l), \ \text{hidden} }$:
$$
\begin{aligned}
z_{i,j}^{(l),\ \text{hidden}} &= b_i^{(l), \ \text{hidden}} + \big(\vec{w}_{i}^{(l), \ \text{hidden}}\big)^T\vec{x}_j^{(l-1),\text{hidden} } \\
&=
\begin{pmatrix}
b_i^{(l), \ \text{hidden}} & \big(\vec{w}_{i}^{(l), \ \text{hidden}}\big)^T
\end{pmatrix}
\begin{pmatrix}
1 \
\vec{x}_j^{(l-1),\text{hidden} }
\end{pmatrix}
\end{aligned}
$$
The output from the $i$-th neuron at the hidden layer $l$ becomes a vector $\vec{z}_{i}^{(l),\ \text{hidden}}$:
$$
\begin{aligned}
\vec{z}_{i}^{(l),\ \text{hidden}} &= \Big( b_i^{(l), \ \text{hidden}} + \big(\vec{w}_{i}^{(l), \ \text{hidden}}\big)^T\vec{x}_1^{(l-1),\text{hidden} }, \ \dots \ , \ b_i^{(l), \ \text{hidden}} + \big(\vec{w}_{i}^{(l), \ \text{hidden}}\big)^T\vec{x}_{N_{hidden}^{(l-1)}}^{(l-1),\text{hidden} } \Big) \\
&=
\begin{pmatrix}
b_i^{(l), \ \text{hidden}} & \big(\vec{w}_{i}^{(l), \ \text{hidden}}\big)^T
\end{pmatrix}
\begin{pmatrix}
1 & 1 & \dots & 1 \
\vec{x}_{1}^{(l-1),\text{hidden} } & \vec{x}_{2}^{(l-1),\text{hidden} } & \dots & \vec{x}_{N_{hidden}^{(l-1)}}^{(l-1),\text{hidden} }
\end{pmatrix}
\end{aligned}
$$
End of explanation
# The trial solution using the deep neural network:
def g_trial_deep(x,params, g0 = 10):
return g0 + x*deep_neural_network(params,x)
# The same cost function as for the neural network, but calls deep_neural_network instead.
def cost_function_deep(P, x):
# Evaluate the trial function with the current parameters P
g_t = g_trial_deep(x,P)
# Find the derivative w.r.t x of the neural network
d_net_out = elementwise_grad(deep_neural_network,1)(P,x)
# Find the derivative w.r.t x of the trial function
d_g_t = elementwise_grad(g_trial_deep,0)(x,P)
# The right side of the ODE
func = g(x, g_t)
err_sqr = (d_g_t - func)**2
cost_sum = np.sum(err_sqr)
return cost_sum
def solve_ode_deep_neural_network(x, num_neurons, num_iter, lmb):
# num_hidden_neurons is now a list of number of neurons within each hidden layer
# Find the number of hidden layers:
N_hidden = np.size(num_neurons)
## Set up initial weigths and biases
# Initialize the list of parameters:
P = [None]*(N_hidden + 1) # + 1 to include the output layer
P[0] = npr.randn(num_neurons[0], 2 )
for l in range(1,N_hidden):
P[l] = npr.randn(num_neurons[l], num_neurons[l-1] + 1) # +1 to include bias
# For the output layer
P[-1] = npr.randn(1, num_neurons[-1] + 1 ) # +1 since bias is included
print('Initial cost: %g'%cost_function_deep(P, x))
## Start finding the optimal weigths using gradient descent
# Find the Python function that represents the gradient of the cost function
# w.r.t the 0-th input argument -- that is the weights and biases in the hidden and output layer
cost_function_deep_grad = grad(cost_function_deep,0)
# Let the update be done num_iter times
for i in range(num_iter):
# Evaluate the gradient at the current weights and biases in P.
# The cost_grad consist now of N_hidden + 1 arrays; the gradient w.r.t the weights and biases
# in the hidden layers and output layers evaluated at x.
cost_deep_grad = cost_function_deep_grad(P, x)
for l in range(N_hidden+1):
P[l] = P[l] - lmb * cost_deep_grad[l]
print('Final cost: %g'%cost_function_deep(P, x))
return P
Explanation: Backpropagation
This step is very similar to that for the neural network: the idea is the same, but with more parameters to update. Again, there is no need to compute the gradients analytically, since Autograd does the work for us.
End of explanation
def g_analytic(x, gamma = 2, g0 = 10):
return g0*np.exp(-gamma*x)
Explanation: Solving the ODE
Finally, having set up the networks we are ready to use them to solve the ODE problem.
If possible, it is always useful to have an analytical solution at hand to test whether the implementation gives reasonable results.
As a recap, the equation to solve is
$$
g'(x) = -\gamma g(x)
$$
where $g(0) = g_0$ with $\gamma$ and $g_0$ being some chosen values.
Solving this analytically yields
$$
g(x) = g_0\exp(-\gamma x)
$$
By making the analytical solution available in our program, it is possible to check the performance of our neural networks.
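A small helper along these lines (a sketch) can then be used on the results produced in the cells below to quantify the agreement:
def max_abs_error(g_nn, g_exact):
    return np.max(np.abs(g_nn - g_exact))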
End of explanation
npr.seed(15)
## Decide the vales of arguments to the function to solve
N = 10
x = np.linspace(0, 1, N)
## Set up the initial parameters
num_hidden_neurons = 10
num_iter = 10000
lmb = 0.001
P = solve_ode_neural_network(x, num_hidden_neurons, num_iter, lmb)
res = g_trial(x,P)
res_analytical = g_analytic(x)
plt.figure(figsize=(10,10))
plt.title('Performance of neural network solving an ODE compared to the analytical solution')
plt.plot(x, res_analytical)
plt.plot(x, res[0,:])
plt.legend(['analytical','nn'])
plt.xlabel('x')
plt.ylabel('g(x)')
plt.show()
Explanation: Using neural network
The code below solves the ODE using a neural network. The number of values for the input $\vec x$ is 10, the number of hidden neurons in the hidden layer is 10, and the step size used in gradient descent is $\lambda = 0.001$. The program updates the weights and biases in the network num_iter times. Finally, it plots the results from using the neural network along with the analytical solution. Feel free to experiment with different values and see how the performance of the network changes!
End of explanation
npr.seed(15)
## Decide the vales of arguments to the function to solve
N = 10
x = np.linspace(0, 1, N)
## Set up the initial parameters
num_hidden_neurons = np.array([10,10])
num_iter = 10000
lmb = 0.001
P = solve_ode_deep_neural_network(x, num_hidden_neurons, num_iter, lmb)
res = g_trial_deep(x,P)
res_analytical = g_analytic(x)
plt.figure(figsize=(10,10))
plt.title('Performance of a deep neural network solving an ODE compared to the analytical solution')
plt.plot(x, res_analytical)
plt.plot(x, res[0,:])
plt.legend(['analytical','dnn'])
plt.ylabel('g(x)')
plt.show()
Explanation: Using a deep neural network
End of explanation |
5,504 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Run a GOEA. Print study genes as either IDs or symbols
We use data from a 2014 Nature paper
Step1: 1b. Download Associations, if necessary
Step2: 2. Load Ontologies, Associations and Background gene set
2a. Load Ontologies
Step3: 2b. Load Associations
Step4: 2c. Load Background gene set
In this example, the background is all mouse protein-coding genes.
Follow the instructions in the background_genes_ncbi notebook to download a set of background population genes from NCBI.
Step5: 3. Initialize a GOEA object
The GOEA object holds the Ontologies, Associations, and background.
Numerous studies can then be run without needing to re-load the above items.
In this case, we only run one GOEA.
Step6: 4. Read study genes
~400 genes from the Nature paper supplemental table 4
Step7: 5. Run Gene Ontology Enrichment Analysis (GOEA)
You may choose to keep all results or just the significant results. In this example, we choose to keep only the significant results.
Step8: 6. Write results to an Excel file and to a text file | Python Code:
# Get http://geneontology.org/ontology/go-basic.obo
from goatools.base import download_go_basic_obo
obo_fname = download_go_basic_obo()
Explanation: Run a GOEA. Print study genes as either IDs or symbols
We use data from a 2014 Nature paper:
Computational analysis of cell-to-cell heterogeneity
in single-cell RNA-sequencing data reveals hidden
subpopulations of cells
Note: you must have the Python package, xlrd, installed to run this example.
1. Download Ontologies and Associations
1a. Download Ontologies, if necessary
End of explanation
# Get ftp://ftp.ncbi.nlm.nih.gov/gene/DATA/gene2go.gz
from goatools.base import download_ncbi_associations
file_gene2go = download_ncbi_associations()
Explanation: 1b. Download Associations, if necessary
End of explanation
from goatools.obo_parser import GODag
obodag = GODag("go-basic.obo")
Explanation: 2. Load Ontologies, Associations and Background gene set
2a. Load Ontologies
End of explanation
from __future__ import print_function
from goatools.anno.genetogo_reader import Gene2GoReader
# Read NCBI's gene2go. Store annotations in a list of namedtuples
objanno = Gene2GoReader(file_gene2go, taxids=[10090])
# Get associations for each branch of the GO DAG (BP, MF, CC)
ns2assoc = objanno.get_ns2assc()
for nspc, id2gos in ns2assoc.items():
print("{NS} {N:,} annotated mouse genes".format(NS=nspc, N=len(id2gos)))
Explanation: 2b. Load Associations
End of explanation
from genes_ncbi_10090_proteincoding import GENEID2NT as GeneID2nt_mus
Explanation: 2c. Load Background gene set
In this example, the background is all mouse protein-coding genes.
Follow the instructions in the background_genes_ncbi notebook to download a set of background population genes from NCBI.
End of explanation
from goatools.goea.go_enrichment_ns import GOEnrichmentStudyNS
goeaobj = GOEnrichmentStudyNS(
GeneID2nt_mus, # List of mouse protein-coding genes
ns2assoc, # geneid/GO associations
obodag, # Ontologies
propagate_counts = False,
alpha = 0.05, # default significance cut-off
methods = ['fdr_bh']) # defult multipletest correction method
Explanation: 3. Initialize a GOEA object
The GOEA object holds the Ontologies, Associations, and background.
Numerous studies can then be run without needing to re-load the above items.
In this case, we only run one GOEA.
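For example, a second study could be run by reusing the same object (geneids_study2 below is a hypothetical second gene list):
# goea_results_all2 = goeaobj.run_study(geneids_study2)
# goea_results_sig2 = [r for r in goea_results_all2 if r.p_fdr_bh < 0.05]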
End of explanation
# Data will be stored in this variable
import os
geneid2symbol = {}
# Get xlsx filename where data is stored
ROOT = os.path.dirname(os.getcwd()) # go up 1 level from current working directory
din_xlsx = os.path.join(ROOT, "goatools/test_data/nbt_3102/nbt.3102-S4_GeneIDs.xlsx")
# Read data
if os.path.isfile(din_xlsx):
import xlrd
book = xlrd.open_workbook(din_xlsx)
pg = book.sheet_by_index(0)
for r in range(pg.nrows):
symbol, geneid, pval = [pg.cell_value(r, c) for c in range(pg.ncols)]
if geneid:
geneid2symbol[int(geneid)] = symbol
print('READ: {XLSX}'.format(XLSX=din_xlsx))
else:
    raise RuntimeError('CANNOT READ: {XLSX}'.format(XLSX=din_xlsx))
Explanation: 4. Read study genes
~400 genes from the Nature paper supplemental table 4
End of explanation
# 'p_' means "pvalue". 'fdr_bh' is the multipletest method we are currently using.
geneids_study = geneid2symbol.keys()
goea_results_all = goeaobj.run_study(geneids_study)
goea_results_sig = [r for r in goea_results_all if r.p_fdr_bh < 0.05]
Explanation: 5. Run Gene Ontology Enrichment Analysis (GOEA)
You may choose to keep all results or just the significant results. In this example, we choose to keep only the significant results.
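As an optional sketch, the significant results can be tallied per GO namespace (this assumes each result record carries an NS attribute, as used in the goatools output tables):
from collections import Counter
print(Counter(r.NS for r in goea_results_sig))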
End of explanation
goeaobj.wr_xlsx("nbt3102_symbols.xlsx", goea_results_sig, itemid2name=geneid2symbol)
goeaobj.wr_xlsx("nbt3102_geneids.xlsx", goea_results_sig)
Explanation: 6. Write results to an Excel file and to a text file
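The text-file output mentioned here could be produced with the tab-separated writer, assuming the analogous wr_tsv method is available in this goatools version:
goeaobj.wr_tsv("nbt3102_geneids.tsv", goea_results_sig)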
End of explanation |
5,505 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DES Y6 Deep Field Exposures
Step1: 2. User Input
2.1. General User Input
Step2: 2.2. Logical Variables to Indicate which Code Cells to Run
Step3: 2.3. Sky Region Definitions
Step4: 2.4. Check on Location of TMPDIR...
Step5: 2.5. Create Main Zeropoints Directory (if it does not already exist)...
Step17: 3. Useful Modules
Step24: 4. Zeropoints by tying to DES-transformed SDSS DR13 Stars
We will first work with the DES data, and then we will repeat for the DECADE data.
DES
Step32: Combine region-by-region results into a single file...
Step39: DECADE
Step47: Combine region-by-region results into a single file... | Python Code:
import numpy as np
import pandas as pd
from scipy import interpolate
import glob
import math
import os
import subprocess
import sys
import gc
import glob
import pickle
import easyaccess as ea
#import AlasBabylon
import fitsio
from astropy.io import fits
import astropy.coordinates as coord
from astropy.coordinates import SkyCoord
import astropy.units as u
from astropy.table import Table, vstack
import tempfile
import matplotlib.pyplot as plt
%matplotlib inline
# Useful class to stop "Run All" at a cell
# containing the command "raise StopExecution"
class StopExecution(Exception):
def _render_traceback_(self):
pass
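# Usage sketch: uncommenting the line below in any later cell halts "Run All" at that cell.
# raise StopExecution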
Explanation: DES Y6 Deep Field Exposures: Photometric Zeropoints tied to SDSS DR13
1. Setup
End of explanation
verbose = 1
tag_des = 'Y6A2_FINALCUT' # Official tag for DES Y6A2_FINALCUT
tag_decade = 'DECADE_FINALCUT' # Tag for DECADE
rawdata_dir = '../RawData'
zeropoints_dir='../Zeropoints'
#bandList = ['u', 'g', 'r', 'i', 'z'] # Oops! Missing transformation eqn for Y-band...
bandList = ['u', 'g', 'r', 'i', 'z']
Explanation: 2. User Input
2.1. General User Input
End of explanation
do_calc_sdss_zps = True
Explanation: 2.2. Logical Variables to Indicate which Code Cells to Run
End of explanation
region_name_list = [
'VVDSF14',
'VVDSF22',
'DEEP2',
'SN-E',
'SN-X_err',
'SN-X',
'ALHAMBRA2',
'SN-S',
'SN-C',
'EDFS',
'MACS0416',
'SN-S_err',
'COSMOS'
]
region_ramin = {
'VVDSF14':208.,
'VVDSF22':333.,
'DEEP2':351.,
'SN-E':6.,
'SN-X_err':13.,
'SN-X':32.,
'ALHAMBRA2':35.,
'SN-S':39.5,
'SN-C':50.,
'EDFS':55.,
'MACS0416':62.,
'SN-S_err':83.5,
'COSMOS':148.
}
region_ramax = {
'VVDSF14':212.,
'VVDSF22':337.,
'DEEP2':354.,
'SN-E':12.,
'SN-X_err':17.,
'SN-X':38.,
'ALHAMBRA2':39.,
'SN-S':44.5,
'SN-C':56.,
'EDFS':67.,
'MACS0416':66.,
'SN-S_err':88.,
'COSMOS':153.
}
region_decmin = {
'VVDSF14':3.,
'VVDSF22':-1.5,
'DEEP2':-1.5,
'SN-E':-46.,
'SN-X_err':-32.,
'SN-X':-8.,
'ALHAMBRA2':-0.5,
'SN-S':-2.5,
'SN-C':-31.,
'EDFS':-51.,
'MACS0416':-25.5,
'SN-S_err':-38.,
'COSMOS':0.5
}
region_decmax = {
'VVDSF14':7.,
'VVDSF22':1.5,
'DEEP2':1.5,
'SN-E':-41.,
'SN-X_err':-29.,
'SN-X':-3.,
'ALHAMBRA2':2.5,
'SN-S':1.5,
'SN-C':-25.,
'EDFS':-46.,
'MACS0416':-22.5,
'SN-S_err':-35.,
'COSMOS':4.0
}
for regionName in region_name_list:
print regionName, region_ramin[regionName], region_ramax[regionName], region_decmin[regionName], region_decmax[regionName]
Explanation: 2.3. Sky Region Definitions
End of explanation
# Check on TMPDIR...
tempfile.gettempdir()
# Set tmpdir variable to $TMPDIR (for future reference)...
tmpdir = os.environ['TMPDIR']
Explanation: 2.4. Check on Location of TMPDIR...
End of explanation
# Create main Zeropoints directory, if it does not already exist...
if not os.path.exists(zeropoints_dir):
os.makedirs(zeropoints_dir)
Explanation: 2.5. Create Main Zeropoints Directory (if it does not already exist)...
End of explanation
def DECam_tie_to_sdss(inputFile, outputFile, band, fluxObsColName, fluxerrObsColName, aggFieldColName, verbose):
import numpy as np
import os
import sys
import datetime
import pandas as pd
from astropy.table import Table, vstack
#validBandsList = ['u', 'g', 'r', 'i', 'z', 'Y'] # Oops! Missing transform eqn for Y-band!
validBandsList = ['u', 'g', 'r', 'i', 'z']
if band not in validBandsList:
        print "Filter band %s is not currently handled... Exiting now!" % (band)
return 1
reqColList = ['psfMag_u','psfMagErr_u',
'psfMag_g','psfMagErr_g',
'psfMag_r','psfMagErr_r',
'psfMag_i','psfMagErr_i',
'psfMag_z','psfMagErr_z',
fluxObsColName,fluxerrObsColName,aggFieldColName]
# Does the input file exist?
if os.path.isfile(inputFile)==False:
        print "DECam_tie_to_sdss input file %s does not exist. Exiting..." % (inputFile)
return 1
# Read inputFile into a pandas DataFrame...
print datetime.datetime.now()
    print "Reading in %s as a pandas DataFrame..." % (inputFile)
t = Table.read(inputFile)
dataFrame = t.to_pandas()
print datetime.datetime.now()
print
reqColFlag = 0
colList = dataFrame.columns.tolist()
for reqCol in reqColList:
if reqCol not in colList:
print "ERROR: Required column %s is not in the header" % (reqCol)
reqColFlag = 1
if reqColFlag == 1:
print "Missing required columns in header of %s... Exiting now!" % (inputFile)
return 1
# Identify column of the standard magnitude for the given band...
magStdColName = 'MAG_STD_%s' % (band.upper())
magerrStdColName = 'MAGERR_STD_%s' % (band.upper())
# Transform SDSS mags to DES mags for the given band...
dataFrame = transSDSStoDES(dataFrame, band, magStdColName, magerrStdColName)
# Add a 'MAG_OBS' column and a 'MAG_DIFF' column to the pandas DataFrame...
dataFrame['MAG_OBS'] = -2.5*np.log10(dataFrame[fluxObsColName])
dataFrame['MAG_DIFF'] = dataFrame[magStdColName]-dataFrame['MAG_OBS']
###############################################
# Aggregate by aggFieldColName
###############################################
# Make a copy of original dataFrame...
df = dataFrame.copy()
# Create an initial mask...
mask1 = ( (df[magStdColName] >= 0.) & (df[magStdColName] <= 25.) )
mask1 = ( mask1 & (df[fluxObsColName] > 10.) & (df['FLAGS'] < 2) & (np.abs(df['SPREAD_MODEL']) < 0.01))
if magerrStdColName != 'None':
mask1 = ( mask1 & (df[magerrStdColName] < 0.1) )
magDiffGlobalMedian = df[mask1]['MAG_DIFF'].median()
magDiffMin = magDiffGlobalMedian - 5.0
magDiffMax = magDiffGlobalMedian + 5.0
mask2 = ( (df['MAG_DIFF'] > magDiffMin) & (df['MAG_DIFF'] < magDiffMax) )
mask = mask1 & mask2
# Iterate over the copy of dataFrame 3 times, removing outliers...
# We are using "Method 2/Group by item" from
# http://nbviewer.jupyter.org/urls/bitbucket.org/hrojas/learn-pandas/raw/master/lessons/07%20-%20Lesson.ipynb
print "Sigma-clipping..."
niter = 0
for i in range(3):
niter = i + 1
print "iter%d..." % ( niter )
# make a copy of original df, and then delete the old one...
newdf = df[mask].copy()
del df
# group by aggFieldColName...
grpnewdf = newdf.groupby([aggFieldColName])
# add/update new columns to newdf
print datetime.datetime.now()
newdf['Outlier'] = grpnewdf['MAG_DIFF'].transform( lambda x: abs(x-x.mean()) > 3.00*x.std() )
#newdf['Outlier'] = grpnewdf['MAG_DIFF'].transform( lambda x: abs(x-x.mean()) > 2.00*x.std() )
print datetime.datetime.now()
del grpnewdf
print datetime.datetime.now()
#print newdf
nrows = newdf['MAG_DIFF'].size
print "Number of rows remaining: %d" % ( nrows )
df = newdf
mask = ( df['Outlier'] == False )
# Perform pandas grouping/aggregating functions on sigma-clipped Data Frame...
print datetime.datetime.now()
print 'Performing grouping/aggregating functions on sigma-clipped pandas DataFrame...'
groupedDataFrame = df.groupby([aggFieldColName])
magZeroMedian = groupedDataFrame['MAG_DIFF'].median()
magZeroMean = groupedDataFrame['MAG_DIFF'].mean()
magZeroStd = groupedDataFrame['MAG_DIFF'].std()
magZeroNum = groupedDataFrame['MAG_DIFF'].count()
magZeroErr = magZeroStd/np.sqrt(magZeroNum-1)
print datetime.datetime.now()
print
# Rename these pandas series...
magZeroMedian.name = 'MAG_ZERO_MEDIAN'
magZeroMean.name = 'MAG_ZERO_MEAN'
magZeroStd.name = 'MAG_ZERO_STD'
magZeroNum.name = 'MAG_ZERO_NUM'
magZeroErr.name = 'MAG_ZERO_MEAN_ERR'
# Also, calculate group medians for all columns in df that have a numerical data type...
numericalColList = df.select_dtypes(include=[np.number]).columns.tolist()
groupedDataMedian = {}
for numericalCol in numericalColList:
groupedDataMedian[numericalCol] = groupedDataFrame[numericalCol].median()
groupedDataMedian[numericalCol].name = '%s_MEDIAN' % (numericalCol)
# Create new data frame containing all the relevant aggregate quantities
#newDataFrame = pd.concat( [magZeroMedian, magZeroMean, magZeroStd, \
# magZeroErr, magZeroNum], \
# join='outer', axis=1 )
seriesList = []
for numericalCol in numericalColList:
seriesList.append(groupedDataMedian[numericalCol])
seriesList.extend([magZeroMedian, magZeroMean, magZeroStd, \
magZeroErr, magZeroNum])
#print seriesList
newDataFrame = pd.concat( seriesList, join='outer', axis=1 )
#newDataFrame.index.rename('FILENAME', inplace=True)
# Saving catname-based results to output files...
print datetime.datetime.now()
print "Writing %s output file (using pandas to_csv method)..." % (outputFile)
newDataFrame.to_csv(outputFile, float_format='%.4f')
print datetime.datetime.now()
print
return 0
# Transform SDSS mags into DES mags for this filter band...
def transSDSStoDES(dataFrame, band, magStdColName, magerrStdColName):
import numpy as np
import pandas as pd
from collections import OrderedDict as odict
# Transformation coefficients (updated based on fit to DES).
# mag_des = mag_sdss + A[mag][0]*color_sdss + A[mag][1]
# (based on A. Drlica-Wagner's https://github.com/kadrlica/desqr/blob/master/desqr/calibrate.py).
# Values come from DES-doc#9420 (u-band) and the DES DR2 paper (g-,r-,i-,z-bands).
SDSS = odict([
('u', [-0.350, +0.466, -0.479, ]), # [+0.2 < (g-r)_sdss <= 1.4]
('g', [-0.061, +0.008]), # [-1.0 < (g-i)_sdss <= 3.5]
('r', [-0.155, -0.007]), # [-0.4 < (r-i)_sdss <= 2.0]
('i', [-0.166, +0.032]), # [-0.4 < (r-i)_sdss <= 2.0]
('z', [-0.056, +0.027]), # [-0.4 < (r-i)_sdss <= 2.0]
])
A = SDSS
if band == 'u':
# u_des = u_sdss - 0.350*(g-r)_sdss**2 + 0.466*(g-r)_sdss - 0.479 [+0.2 < (g-r)_sdss <= 1.4]
dataFrame[magStdColName] = dataFrame['psfMag_u']+\
A[band][0]*(dataFrame['psfMag_g']-dataFrame['psfMag_r'])*(dataFrame['psfMag_g']-dataFrame['psfMag_r'])+\
A[band][1]*(dataFrame['psfMag_g']-dataFrame['psfMag_r'])+\
A[band][2]
dataFrame[magerrStdColName] = dataFrame['psfMagErr_u'] # temporary
mask = ( (dataFrame['psfMag_g']-dataFrame['psfMag_r']) > +0.2)
mask &= ( (dataFrame['psfMag_g']-dataFrame['psfMag_r']) <= 1.4)
elif band == 'g':
# g_des = g_sdss - 0.061*(g-i)_sdss + 0.008 [-1.0 < (g-i)_sdss <= 3.5]
dataFrame[magStdColName] = dataFrame['psfMag_g']+\
A[band][0]*(dataFrame['psfMag_g']-dataFrame['psfMag_i'])+A[band][1]
dataFrame[magerrStdColName] = dataFrame['psfMagErr_g'] # temporary
mask = ( (dataFrame['psfMag_g']-dataFrame['psfMag_i']) > -1.0)
mask &= ( (dataFrame['psfMag_g']-dataFrame['psfMag_i']) <= 3.5)
elif band == 'r':
# r_des = r_sdss - 0.155*(r-i)_sdss - 0.007 [-0.4 < (r-i)_sdss <= 2.0]
dataFrame[magStdColName] = dataFrame['psfMag_r']+\
A[band][0]*(dataFrame['psfMag_r']-dataFrame['psfMag_i'])+A[band][1]
dataFrame[magerrStdColName] = dataFrame['psfMagErr_r'] # temporary
mask = ( (dataFrame['psfMag_r']-dataFrame['psfMag_i']) > -0.4)
mask &= ( (dataFrame['psfMag_r']-dataFrame['psfMag_i']) <= 2.0)
elif band == 'i':
# i_des = i_sdss - 0.166*(r-i)_sdss + 0.032 [-0.4 < (r-i)_sdss <= 2.0]
dataFrame[magStdColName] = dataFrame['psfMag_i']+\
A[band][0]*(dataFrame['psfMag_r']-dataFrame['psfMag_i'])+A[band][1]
dataFrame[magerrStdColName] = dataFrame['psfMagErr_i'] # temporary
mask = ( (dataFrame['psfMag_r']-dataFrame['psfMag_i']) > -0.4)
mask &= ( (dataFrame['psfMag_r']-dataFrame['psfMag_i']) <= 2.0)
elif band == 'z':
# z_des = z_sdss - 0.056*(r-i)_sdss + 0.027 [-0.4 < (r-i)_sdss <= 2.0]
dataFrame[magStdColName] = dataFrame['psfMag_z']+\
A[band][0]*(dataFrame['psfMag_r']-dataFrame['psfMag_i'])+A[band][1]
dataFrame[magerrStdColName] = dataFrame['psfMagErr_z'] # temporary
mask = ( (dataFrame['psfMag_r']-dataFrame['psfMag_i']) > -0.4)
mask &= ( (dataFrame['psfMag_r']-dataFrame['psfMag_i']) <= 2.0)
else:
msg = "Unrecognized band: %s "%band
raise ValueError(msg)
dataFrame = dataFrame[mask].copy()
return dataFrame
Explanation: 3. Useful Modules
End of explanation
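For reference, each per-catalog zeropoint computed below is the sigma-clipped median of per-star estimates ZP = MAG_STD - MAG_OBS, where MAG_STD is the SDSS psfMag transformed onto the DES system by transSDSStoDES and MAG_OBS = -2.5*log10(FLUX_PSF); a calibrated magnitude then follows as -2.5*log10(FLUX_PSF) + ZP. The aggregated value is written out as MAG_ZERO_MEDIAN, with MAG_ZERO_MEAN_ERR giving the standard error of the mean.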
%%time
if do_calc_sdss_zps:
fluxObsColName = 'FLUX_PSF'
fluxerrObsColName = 'FLUXERR_PSF'
aggFieldColName = 'FILENAME'
subdir = 'DES_%s' % (tag_des)
tmpdir = os.environ['TMPDIR']
for regionName in region_name_list:
print
print "# # # # # # # # # # # # # # #"
print "Working on region %s" % (regionName)
print "# # # # # # # # # # # # # # #"
print
for band in bandList:
input_file_template = 'cat_%s.%s.?.%s.sdss.fits' % (subdir, regionName, band)
input_file_template = os.path.join(rawdata_dir, 'ExpCatFITS', subdir, input_file_template)
input_file_list = glob.glob(input_file_template)
input_file_list = np.sort(input_file_list)
if np.size(input_file_list) == 0:
print "No files matching template %s" % (input_file_template)
for inputFile in input_file_list:
print inputFile
if os.path.exists(inputFile):
#outputFile = os.path.splitext(inputFile)[0] + '.zps.csv'
#print outputFile
outputFile = os.path.splitext(os.path.basename(inputFile))[0]+'.csv'
outputFile = 'zps_' + outputFile[4:]
outputFile = os.path.join(zeropoints_dir, outputFile)
print outputFile
status = DECam_tie_to_sdss(inputFile, outputFile,
band,
fluxObsColName, fluxerrObsColName,
aggFieldColName, verbose)
if status > 0:
print 'ERROR: %s FAILED! Continuing...' % (inputFile)
else:
print "%s does not exist... skipping..." % (inputFile)
print
Explanation: 4. Zeropoints by tying to DES-transformed SDSS DR13 Stars
We will first work with the DES data, and then we will repeat for the DECADE data.
DES:
Calculate zeropoints region by region...
End of explanation
%%time
if do_calc_sdss_zps:
subdir = 'DES_%s' % (tag_des)
tmpdir = os.environ['TMPDIR']
for band in bandList:
print
print "# # # # # # # # # # # # # # #"
print "Working on band %s" % (band)
print "# # # # # # # # # # # # # # #"
print
outputFile = 'zps_%s.%s.sdss.csv' % (subdir, band)
outputFile = os.path.join(zeropoints_dir, outputFile)
input_file_template = 'zps_%s.*.?.%s.sdss.csv' % (subdir, band)
input_file_template = os.path.join(zeropoints_dir, input_file_template)
input_file_list = glob.glob(input_file_template)
input_file_list = np.sort(input_file_list)
if np.size(input_file_list) == 0:
print "No files matching template %s" % (input_file_template)
continue
df_comb = pd.concat(pd.read_csv(inputFile) for inputFile in input_file_list)
outputFile = 'zps_%s.%s.sdss.csv' % (subdir, band)
outputFile = os.path.join(zeropoints_dir, outputFile)
print outputFile
df_comb.to_csv(outputFile, index=False)
del df_comb
Explanation: Combine region-by-region results into a single file...
End of explanation
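A quick, illustrative sanity check (assuming the combined g-band DES file above has been written) is to summarize the zeropoint distribution directly from the CSV:
zps = pd.read_csv(os.path.join(zeropoints_dir, 'zps_DES_Y6A2_FINALCUT.g.sdss.csv'))
print zps['MAG_ZERO_MEDIAN'].describe()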
%%time
if do_calc_sdss_zps:
fluxObsColName = 'FLUX_PSF'
fluxerrObsColName = 'FLUXERR_PSF'
aggFieldColName = 'FILENAME'
subdir = '%s' % (tag_decade)
tmpdir = os.environ['TMPDIR']
for regionName in region_name_list:
print
print "# # # # # # # # # # # # # # #"
print "Working on region %s" % (regionName)
print "# # # # # # # # # # # # # # #"
print
for band in bandList:
input_file_template = 'cat_%s.%s.?.%s.sdss.fits' % (subdir, regionName, band)
input_file_template = os.path.join(rawdata_dir, 'ExpCatFITS', subdir, input_file_template)
input_file_list = glob.glob(input_file_template)
input_file_list = np.sort(input_file_list)
if np.size(input_file_list) == 0:
print "No files matching template %s" % (input_file_template)
for inputFile in input_file_list:
print inputFile
if os.path.exists(inputFile):
#outputFile = os.path.splitext(inputFile)[0] + '.zps.csv'
#print outputFile
outputFile = os.path.splitext(os.path.basename(inputFile))[0]+'.csv'
outputFile = 'zps_' + outputFile[4:]
outputFile = os.path.join(zeropoints_dir, outputFile)
print outputFile
status = DECam_tie_to_sdss(inputFile, outputFile,
band,
fluxObsColName, fluxerrObsColName,
aggFieldColName, verbose)
if status > 0:
print 'ERROR: %s FAILED! Continuing...' % (inputFile)
else:
print "%s does not exist... skipping..." % (inputFile)
print
Explanation: DECADE:
Calculate zeropoints region by region...
End of explanation
%%time
if do_calc_sdss_zps:
subdir = '%s' % (tag_decade)
tmpdir = os.environ['TMPDIR']
for band in bandList:
print
print "# # # # # # # # # # # # # # #"
print "Working on band %s" % (band)
print "# # # # # # # # # # # # # # #"
print
outputFile = 'zps_%s.%s.sdss.csv' % (subdir, band)
outputFile = os.path.join(zeropoints_dir, outputFile)
input_file_template = 'zps_%s.*.?.%s.sdss.csv' % (subdir, band)
input_file_template = os.path.join(zeropoints_dir, input_file_template)
input_file_list = glob.glob(input_file_template)
input_file_list = np.sort(input_file_list)
if np.size(input_file_list) == 0:
print "No files matching template %s" % (input_file_template)
continue
df_comb = pd.concat(pd.read_csv(inputFile) for inputFile in input_file_list)
outputFile = 'zps_%s.%s.sdss.csv' % (subdir, band)
outputFile = os.path.join(zeropoints_dir, outputFile)
print outputFile
df_comb.to_csv(outputFile, index=False)
del df_comb
Explanation: Combine region-by-region results into a single file...
End of explanation |
5,506 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Description
Step1: Init
Step2: Creating DR consensus seqs & loading into CLdb
Step3: That's it! Now, the CLdb.sqlite file contains the DR consensus sequences for each CRISPR locus
Assessing DR consensus seqs
Step4: Sequence naming is 'locus_ID'|'subtype'
Making a quick tree of DR consensus sequences
Alignment
Step5: Tree inference | Python Code:
# directory where you want the spacer blasting to be done
## CHANGE THIS!
workDir = "/home/nyoungb2/t/CLdb_Ecoli/DR_consensus/"
Explanation: Description:
This notebook goes through the creation and assessment of direct repeat (DR) consensus sequences
Before running this notebook:
run the Setup notebook
User-defined variables
End of explanation
import os
from IPython.display import FileLinks
%load_ext rpy2.ipython
if not os.path.isdir(workDir):
os.makedirs(workDir)
# checking that CLdb is in $PATH & ~/.CLdb config file is set up
!CLdb --config-params
Explanation: Init
End of explanation
!CLdb -- loadDRConsensus
Explanation: Creating DR consensus seqs & loading into CLdb
End of explanation
!CLdb -- DRconsensus2fasta -h
# writing out the consensus sequences
!cd $workDir; \
CLdb -- DRconsensus2fasta > DR_consensus.fna
# checking output
!cd $workDir; \
head -n 6 DR_consensus.fna
Explanation: That's it! Now, the CLdb.sqlite file contains the DR consensus sequences for each CRISPR locus
Assessing DR consensus seqs
End of explanation
!cd $workDir; \
mafft --adjustdirection DR_consensus.fna > DR_consensus_aln.fna
!cd $workDir; \
echo "#-------#"; \
head -n 6 DR_consensus_aln.fna
Explanation: Sequence naming is 'locus_ID'|'subtype'
Making a quick tree of DR consensus sequences
Alignment
End of explanation
%%R
library(ape)
%%R -i workDir
inFile = file.path(workDir, 'DR_consensus_aln.fna')
seqs = read.dna(inFile, format='fasta')
seqs.dist = dist.dna(seqs)
plot(hclust(seqs.dist), main='Hierarchical clustering dendrogram')
plot(nj(seqs.dist), main='Neighbor-joining tree')
Explanation: Tree inference
End of explanation |
5,507 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Chemistry Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 1.8. Coupling With Chemical Reactivity
Is Required
Step12: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step13: 2.2. Code Version
Is Required
Step14: 2.3. Code Languages
Is Required
Step15: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required
Step16: 3.2. Split Operator Advection Timestep
Is Required
Step17: 3.3. Split Operator Physical Timestep
Is Required
Step18: 3.4. Split Operator Chemistry Timestep
Is Required
Step19: 3.5. Split Operator Alternate Order
Is Required
Step20: 3.6. Integrated Timestep
Is Required
Step21: 3.7. Integrated Scheme Type
Is Required
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required
Step23: 4.2. Convection
Is Required
Step24: 4.3. Precipitation
Is Required
Step25: 4.4. Emissions
Is Required
Step26: 4.5. Deposition
Is Required
Step27: 4.6. Gas Phase Chemistry
Is Required
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required
Step30: 4.9. Photo Chemistry
Is Required
Step31: 4.10. Aerosols
Is Required
Step32: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required
Step33: 5.2. Global Mean Metrics Used
Is Required
Step34: 5.3. Regional Metrics Used
Is Required
Step35: 5.4. Trend Metrics Used
Is Required
Step36: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required
Step37: 6.2. Matches Atmosphere Grid
Is Required
Step38: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required
Step39: 7.2. Canonical Horizontal Resolution
Is Required
Step40: 7.3. Number Of Horizontal Gridpoints
Is Required
Step41: 7.4. Number Of Vertical Levels
Is Required
Step42: 7.5. Is Adaptive Grid
Is Required
Step43: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required
Step44: 8.2. Use Atmospheric Transport
Is Required
Step45: 8.3. Transport Details
Is Required
Step46: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required
Step47: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required
Step48: 10.2. Method
Is Required
Step49: 10.3. Prescribed Climatology Emitted Species
Is Required
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step51: 10.5. Interactive Emitted Species
Is Required
Step52: 10.6. Other Emitted Species
Is Required
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required
Step54: 11.2. Method
Is Required
Step55: 11.3. Prescribed Climatology Emitted Species
Is Required
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step57: 11.5. Interactive Emitted Species
Is Required
Step58: 11.6. Other Emitted Species
Is Required
Step59: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required
Step60: 12.2. Prescribed Upper Boundary
Is Required
Step61: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required
Step62: 13.2. Species
Is Required
Step63: 13.3. Number Of Bimolecular Reactions
Is Required
Step64: 13.4. Number Of Termolecular Reactions
Is Required
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required
Step67: 13.7. Number Of Advected Species
Is Required
Step68: 13.8. Number Of Steady State Species
Is Required
Step69: 13.9. Interactive Dry Deposition
Is Required
Step70: 13.10. Wet Deposition
Is Required
Step71: 13.11. Wet Oxidation
Is Required
Step72: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry startospheric heterogeneous chemistry
14.1. Overview
Is Required
Step73: 14.2. Gas Phase Species
Is Required
Step74: 14.3. Aerosol Species
Is Required
Step75: 14.4. Number Of Steady State Species
Is Required
Step76: 14.5. Sedimentation
Is Required
Step77: 14.6. Coagulation
Is Required
Step78: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required
Step79: 15.2. Gas Phase Species
Is Required
Step80: 15.3. Aerosol Species
Is Required
Step81: 15.4. Number Of Steady State Species
Is Required
Step82: 15.5. Interactive Dry Deposition
Is Required
Step83: 15.6. Coagulation
Is Required
Step84: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required
Step85: 16.2. Number Of Reactions
Is Required
Step86: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required
Step87: 17.2. Environmental Conditions
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'inm', 'sandbox-3', 'atmoschem')
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: INM
Source ID: SANDBOX-3
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:05
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry transport scheme turbulence is couple with chemical reactivity?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmospheric chemistry grid
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmospheric chemistry grid match the atmosphere grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry startospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview stratospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation is included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation is included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview tropospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric photo chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation |
5,508 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Aerobee 150 Engine
The Aerobee 150 flew on an AJ11-26 IRFNA and ANFA hypergolic, pressure-fed liquid motor.
We have some information to start with.
Step1: First let's compute the fuel density, O/F ratio, mass flow rate, and Thrust | Python Code:
from math import pi, log
# Physics
g_0 = 9.80665 # kg.m/s^2 Standard gravity
# Chemistry
rho_rfna = 1500.0 # kg/m^3 Density of IRFNA
rho_fa = 1130.0 # kg/m^3 Density of Furfuryl Alcohol
rho_an = 1021.0 # kg/m^3 Density of Aniline
# Data
Isp = 209.0 # s Average Specific Impulse accounting for underexpansion[1]
r = 0.190 # m Radius of the tanks (OD of rocket)[2]
Burn_time = 52.0 # s Duration of the burn[2]
Mass_Fuel = 134.4 # kg Mass of the fuel burnt[1]
Mass_Ox = 343.9 # kg Mass of the oxidizer burnt[1]
Explanation: Aerobee 150 Engine
The Aerobee 150 flew on an AJ11-26 IRFNA and ANFA hypergolic, pressure-fed liquid motor.
We have some information to start with.
End of explanation
rho_fuel = rho_an*0.65 + rho_fa*0.35
OF = Mass_Ox / Mass_Fuel
mdot = (Mass_Fuel+Mass_Ox) / Burn_time
Thrust = mdot*g_0*Isp
print "O/F ratio: %6.1f" % OF
print "mdot: %7.2f [kg/s]" % mdot
print "Thrust: %6.1f [kN]" % (Thrust/1000.0)
# Mass flow for each propellant
mdot_o = mdot / (1 + (1/OF))
mdot_f = mdot / (1 + OF)
print "Ox flow: %7.2f kg/s" % mdot_o
print "Fuel flow: %7.2f kg/s" % mdot_f
def tank_length(m, rho):
l = m / (rho*pi*r*r)
return l
l_o = tank_length(Mass_Ox, rho_rfna)
l_o += l_o*0.1 # add 10% for ullage
l_f = tank_length(Mass_Fuel, rho_fuel)
l_f += l_f*0.1 # add 10% for ullage
print "Ox tank length: . . . .%7.3f m" % l_o
print "Fuel tank length: %7.3f m" % l_f
Explanation: First let's compute the fuel density, O/F ratio, mass flow rate, and Thrust
End of explanation |
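As a rough cross-check (an editorial sketch, not part of the original notebook), the same numbers give the propellant volumes and the combined tank length; it assumes the variables defined in the cells above are still in scope, and the 4.44822 N/lbf factor is only used to express the thrust in the units usually quoted for the Aerobee.
```
# Rough cross-check using the values computed above.
V_o = Mass_Ox / rho_rfna     # oxidizer volume, m^3
V_f = Mass_Fuel / rho_fuel   # fuel volume, m^3
print("Ox volume:   %7.3f m^3" % V_o)
print("Fuel volume: %7.3f m^3" % V_f)
print("Total tank length (with ullage): %7.3f m" % (l_o + l_f))
print("Thrust: %7.0f lbf" % (Thrust / 4.44822))
```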
5,509 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Correlating microstripline model to measurement
Target
The aim of this example is to correlate the microstripline model to the measurement over 4 frequency decades from 1MHz to 5GHz.
Plan
Two different lengths of microstripline are measured;
Multiline method is used to compute the frequency dependent relative permittivity and loss angle of the dielectric;
Microstripline model is fitted to the computed parameters by optimization;
Checking the results by embedding the connectors and comparison against measurement;
Step1: Measurement of two microstripline with different lengths
The measurements were performed on 21 March 2017 on an Anritsu MS46524B 20GHz Vector Network Analyser. The setup is a linear frequency sweep from 1MHz to 10GHz with 10'000 points. Output power is 0dBm, IF bandwidth is 1kHz and neither averaging nor smoothing is used.
The frequency range of interest is limited from 1MHz to 5GHz, but the measurements extend up to 10GHz.
MSLxxx is an L-long, W-wide, T-thick copper microstripline on an H-high substrate with a bottom ground plane.
| Name | L (mm) | W (mm) | H (mm) | T (um) | Substrate |
| :--- | ---: | ---: | ---: | ---: | :--- |
| MSL100 | 100 | 3.00 | 1.55 | 50 | FR-4 |
| MSL200 | 200 | 3.00 | 1.55 | 50 | FR-4 |
Step2: The measured data shows that the electrical length of MSL200 is approximately twice that of MSL100. The frequency spacing between Return Loss dips is approximately half as large for MSL200 as for MSL100. This is consistent with the physical dimensions if the small connector length is neglected.
The MSL200 Insertion Loss is also about twice that of MSL100, which is consistent, as a longer path brings more attenuation.
Return Loss under -20dB is usually considered fair for a microstripline; it corresponds to 1% of the power being reflected.
Dielectric effective relative permittivity extraction by multiline method
The phases of the measured transmission parameters are subtracted. Because connectors are present on both DUTs, their length effect is canceled and the remaining phase difference is related to the difference in the DUTs' lengths.
Knowing the physical length $\Delta L$ and the phase $\Delta \phi$, the effective relative permittivity constant $\epsilon_{r,eff}$ can be computed from the relation
$$\left{ \begin{array}{ll}
\lambda = \frac{c_0}{f \cdot \sqrt{\epsilon_{r,eff}}} \
\phi = \frac{2\pi L}{\lambda}
\end{array} \right. \implies
\epsilon_{r,eff} = \left( \frac{\Delta \phi \cdot c_0}{2 \pi f \cdot \Delta L} \right)^2 $$
In the same way, the difference in Insertion Loss between the two DUTs gives the Insertion Loss of the length difference and cancels the connector effects.
Step3: The effective relative permittivity of the geometry shows a dispersion effect at low frequency which can be modelled by a wideband Debye model such as the Djordjevic/Svensson implementation of the skrf microstripline media. The value then increases slowly with frequency, which corresponds roughly to the Kirschning and Jansen dispersion model.
The Insertion Loss seems proportional to frequency, which indicates a predominance of the dielectric losses. Conductor losses are related to the square root of frequency. Radiation losses are neglected.
Fit microstripline model to the computed parameters by optimization
Effective relative permittivity
A microstrip media model with the physical dimensions of the measured microstriplines is fitted to the computed $\epsilon_{r,eff}$ by optimization of $\epsilon_r$ and tand of the substrate at 1GHz. The dispersion models used to account for the frequency variation of the parameters are Djordjevic/Svensson and Kirschning and Jansen.
Step4: As a sanity check, the model data are compared with the computed parameters
Step5: The model results show a reasonable agreement with the measured $\epsilon_{r,eff}$ and Insertion Loss values.
Checking the results
If the model is now plotted against the measurement of the same length, the plot shows no agreement. This is because the connector effects are not captured by the model.
Step6: Connector delay and loss estimation
The delay of the connector is estimated by fitting a line to its phase contribution vs frequency.
The phase and loss of the two connectors are computed by subtracting the phase and loss computed without the connectors from the measurement of the same length.
Step7: The phase of the model shows a good agreement, while the Insertion Loss seems to have a reasonable agreement and is small in any case.
Connector impedance adjustment by time-domain reflectometry
Time-domain step responses of measurement and model are used to adjust the connector model characteristic impedance.
The plots show the connector having an inductive behaviour (positive peak) and the microstripline being a bit too capacitive (negative plateau).
Characteristic impedance of the connector is tuned by trial-and-error until a reasonable agreement is achieved. Optimization could have been used instead.
Step8: Final comparison | Python Code:
%load_ext autoreload
%autoreload 2
import skrf as rf
import numpy as np
from numpy import real, log10, sum, absolute, pi, sqrt
import matplotlib.pyplot as plt
from scipy.optimize import minimize, differential_evolution
rf.stylely()
Explanation: Correlating microstripline model to measurement
Target
The aim of this example is to correlate the microstripline model to the measurement over 4 frequency decades from 1MHz to 5GHz.
Plan
Two different lengths of microstripline are measured;
Multiline method is used to compute the frequency dependent relative permittivity and loss angle of the dielectric;
Microstripline model is fitted to the computed parameters by optimization;
Checking the results by embedding the connectors and comparison against measurement;
End of explanation
# Load raw measurements
MSL100_raw = rf.Network('MSL100.s2p')
MSL200_raw = rf.Network('MSL200.s2p')
# Keep only the data from 1MHz to 5GHz
MSL100 = MSL100_raw['1-5000mhz']
MSL200 = MSL200_raw['1-5000mhz']
plt.figure()
plt.title('Measured data')
MSL100.plot_s_db()
MSL200.plot_s_db()
plt.show()
Explanation: Measurement of two microstripline with different lengths
The measurements were performed on 21 March 2017 on an Anritsu MS46524B 20GHz Vector Network Analyser. The setup is a linear frequency sweep from 1MHz to 10GHz with 10'000 points. Output power is 0dBm, IF bandwidth is 1kHz and neither averaging nor smoothing is used.
The frequency range of interest is limited from 1MHz to 5GHz, but the measurements extend up to 10GHz.
MSLxxx is an L-long, W-wide, T-thick copper microstripline on an H-high substrate with a bottom ground plane.
| Name | L (mm) | W (mm) | H (mm) | T (um) | Substrate |
| :--- | ---: | ---: | ---: | ---: | :--- |
| MSL100 | 100 | 3.00 | 1.55 | 50 | FR-4 |
| MSL200 | 200 | 3.00 | 1.55 | 50 | FR-4 |
The milling of the artwork is performed mechanically with a lateral wall of 45°. A small top ground plane chunk connected by a vias array to bottom ground is provided to solder the connector top ground legs and provide some coplanar-like transition from coax to microstrip.
The relative permittivity of the dielectric was assumed to be approximately 4.5 for design purpose.
End of explanation
c0 = 3e8
f = MSL100.f
deltaL = 0.1
deltaPhi = np.unwrap(np.angle(MSL100.s[:,1,0])) - np.unwrap(np.angle(MSL200.s[:,1,0]))
Er_eff = np.power(deltaPhi * c0 / (2 * np.pi * f * deltaL), 2)
Loss_mea = 20 * log10(absolute(MSL200.s[:,1,0] / MSL100.s[:,1,0]))
plt.figure()
plt.suptitle('Effective relative permittivity and loss')
plt.subplot(2,1,1)
plt.plot(f * 1e-9, Er_eff)
plt.ylabel('$\epsilon_{r,eff}$')
plt.subplot(2,1,2)
plt.plot(f * 1e-9, Loss_mea)
plt.xlabel('Frequency (GHz)')
plt.ylabel('Insertion Loss (dB)')
plt.show()
Explanation: The measured data shows that the electrical length of MSL200 is approximately twice that of MSL100. The frequency spacing between Return Loss dips is approximately half as large for MSL200 as for MSL100. This is consistent with the physical dimensions if the small connector length is neglected.
The MSL200 Insertion Loss is also about twice that of MSL100, which is consistent, as a longer path brings more attenuation.
Return Loss under -20dB is usually considered fair for a microstripline; it corresponds to 1% of the power being reflected.
Dielectric effective relative permittivity extraction by multiline method
The phases of the measured transmission parameters are subtracted. Because connectors are present on both DUTs, their length effect is canceled and the remaining phase difference is related to the difference in the DUTs' lengths.
Knowing the physical length $\Delta L$ and the phase $\Delta \phi$, the effective relative permittivity constant $\epsilon_{r,eff}$ can be computed from the relation
$$\left{ \begin{array}{ll}
\lambda = \frac{c_0}{f \cdot \sqrt{\epsilon_{r,eff}}} \
\phi = \frac{2\pi L}{\lambda}
\end{array} \right. \implies
\epsilon_{r,eff} = \left( \frac{\Delta \phi \cdot c_0}{2 \pi f \cdot \Delta L} \right)^2 $$
In the same way, the difference in Insertion Loss between the two DUTs gives the Insertion Loss of the length difference and cancels the connector effects.
End of explanation
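As a quick usage check of the relation above (an added illustration, assuming the arrays f, deltaPhi, deltaL and Er_eff from the previous cell are in scope), the same expression can be evaluated at a single frequency point:
```
# Spot check of the multiline relation at one frequency point.
idx = len(f) // 2
er_point = (deltaPhi[idx] * c0 / (2 * np.pi * f[idx] * deltaL))**2
print('f = {:.3f} GHz, Er_eff = {:.3f} (vectorised value: {:.3f})'.format(
    f[idx] * 1e-9, er_point, Er_eff[idx]))
```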
from skrf.media import MLine
W = 3.00e-3
H = 1.55e-3
T = 50e-6
L = 0.1
Er0 = 4.5
tand0 = 0.02
f_epr_tand = 1e9
x0 = [Er0, tand0]
def model(x, freq, Er_eff, L, W, H, T, f_epr_tand, Loss_mea):
ep_r = x[0]
tand = x[1]
m = MLine(frequency=freq, z0=50, w=W, h=H, t=T,
ep_r=ep_r, mu_r=1, rho=1.712e-8, tand=tand, rough=0.15e-6,
f_low=1e3, f_high=1e12, f_epr_tand=f_epr_tand,
diel='djordjevicsvensson', disp='kirschningjansen')
DUT = m.line(L, 'm', embed=True, z0=m.Z0)
Loss_mod = 20 * log10(absolute(DUT.s[:,1,0]))
return sum((real(m.ep_reff_f) - Er_eff)**2) + 0.01*sum((Loss_mod - Loss_mea)**2)
res = minimize(model, x0, args=(MSL100.frequency, Er_eff, L, W, H, T, f_epr_tand, Loss_mea),
bounds=[(4.2, 4.7), (0.001, 0.1)])
Er = res.x[0]
tand = res.x[1]
print('Er={:.3f}, tand={:.4f} at {:.1f} GHz.'.format(Er, tand, f_epr_tand * 1e-9))
Explanation: The effective relative permittivity of the geometry shows a dispersion effect at low frequency which can be modelled by a wideband Debye model such as the Djordjevic/Svensson implementation of the skrf microstripline media. The value then increases slowly with frequency, which corresponds roughly to the Kirschning and Jansen dispersion model.
The Insertion Loss seems proportional to frequency, which indicates a predominance of the dielectric losses. Conductor losses are related to the square root of frequency. Radiation losses are neglected.
Fit microstripline model to the computed parameters by optimization
Effective relative permittivity
A microstrip media model with the physical dimensions of the measured microstriplines is fitted to the computed $\epsilon_{r,eff}$ by optimization of $\epsilon_r$ and tand of the substrate at 1GHz. The dispersion models used to account for the frequency variation of the parameters are Djordjevic/Svensson and Kirschning and Jansen.
End of explanation
m = MLine(frequency=MSL100.frequency, z0=50, w=W, h=H, t=T,
ep_r=Er, mu_r=1, rho=1.712e-8, tand=tand, rough=0.15e-6,
f_low=1e3, f_high=1e12, f_epr_tand=f_epr_tand,
diel='djordjevicsvensson', disp='kirschningjansen')
DUT = m.line(L, 'm', embed=True, z0=m.Z0)
DUT.name = 'DUT'
Loss_mod = 20 * log10(absolute(DUT.s[:,1,0]))
plt.figure()
plt.suptitle('Measurement vs Model')
plt.subplot(2,1,1)
plt.plot(f * 1e-9, Er_eff, label='Measured')
plt.plot(f * 1e-9, real(m.ep_reff_f), label='Model')
plt.ylabel('$\epsilon_{r,eff}$')
plt.legend()
plt.subplot(2,1,2)
plt.plot(f * 1e-9, Loss_mea, label='Measured')
plt.plot(f * 1e-9, Loss_mod, label='Model')
plt.xlabel('Frequency (GHz)')
plt.ylabel('Insertion Loss (dB)')
plt.legend()
plt.show()
Explanation: As a sanity check, the model data are compared with the computed parameters
End of explanation
plt.figure()
plt.title('Measured vs modelled data')
MSL100.plot_s_db()
DUT.plot_s_db(0, 0, color='k')
DUT.plot_s_db(1, 0, color='k')
plt.show()
Explanation: The model results show a reasonable agreement with the measured $\epsilon_{r,eff}$ and Insertion Loss values.
Checking the results
If the model is now plotted against the measurement of the same length, the plot shows no agreement. This is because the connector effects are not captured by the model.
End of explanation
phi_conn = np.unwrap(np.angle(MSL100.s[:,1,0])) + deltaPhi
z = np.polyfit(f, phi_conn, 1)
p = np.poly1d(z)
delay = -z[0]/(2*np.pi)/2
print('Connector delay: {:.0f} ps'.format(delay * 1e12))
loss_conn_db = 20 * log10(absolute(MSL100.s[:,1,0])) - Loss_mea
alpha = 1.6*np.log(10)/20 * np.sqrt(f/1e9)
beta = 2*np.pi*f/c0
gamma = alpha + 1j*beta
mf = rf.media.DefinedGammaZ0(m.frequency, z0=50, gamma=gamma)
left = mf.line(delay*1e9, 'ns', embed=True, z0=53.5)
right = left.flipped()
check = left ** right
plt.figure()
plt.suptitle('Connector effects')
plt.subplot(2,1,1)
plt.plot(f * 1e-9, phi_conn, label='measured')
plt.plot(f * 1e-9, np.unwrap(np.angle(check.s[:,1,0])), label='model')
plt.ylabel('phase (rad)')
plt.legend()
plt.subplot(2,1,2)
plt.plot(f * 1e-9, loss_conn_db, label='Measured')
plt.plot(f * 1e-9, 20*np.log10(np.absolute(check.s[:,1,0])), label='model')
plt.xlabel('Frequency (GHz)')
plt.ylabel('Insertion Loss (dB)')
plt.legend()
plt.show()
Explanation: Connector delay and loss estimation
The delay of the connector is estimated by fitting a line to its phase contribution vs frequency.
The phase and loss of the two connectors are computed by subtracting the phase and loss computed without the connectors from the measurement of the same length.
End of explanation
mod = left ** DUT ** right
MSL100_dc = MSL100.extrapolate_to_dc(kind='linear')
DUT_dc = mod.extrapolate_to_dc(kind='linear')
plt.figure()
plt.suptitle('Left-right and right-left TDR')
plt.subplot(2,1,1)
MSL100_dc.s11.plot_z_time_step(pad=2000, window='hamming', label='Measured L-R')
DUT_dc.s11.plot_z_time_step(pad=2000, window='hamming', label='Model L-R')
plt.xlim(-2, 4)
plt.subplot(2,1,2)
MSL100_dc.s22.plot_z_time_step(pad=2000, window='hamming', label='Measured R-L')
DUT_dc.s22.plot_z_time_step(pad=2000, window='hamming', label='Model R-L')
plt.xlim(-2, 4)
plt.tight_layout()
plt.show()
Explanation: The phase of the model shows a good agreement, while the Insertion Loss seems to have a reasonable agreement and is small in any case.
Connector impedance adjustment by time-domain reflectometry
Time-domain step responses of measurement and model are used to adjust the connector model characteristic impedance.
The plots show the connector having an inductive behaviour (positive peak) and the microstripline being a bit too capacitive (negative plateau).
Characteristic impedance of the connector is tuned by trial-and-error until a reasonable agreement is achieved. Optimization could have been used instead.
End of explanation
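The trial-and-error tuning above could also be automated. The sketch below is a hypothetical illustration of fitting the connector characteristic impedance by minimising the frequency-domain error between model and measurement; the 40 to 70 Ohm bounds are an assumption, and it reuses mf, DUT, MSL100 and delay from the cells above.
```
# Hypothetical alternative: fit the connector z0 by optimisation.
from scipy.optimize import minimize_scalar

def connector_error(z0_conn):
    left_t = mf.line(delay * 1e9, 'ns', embed=True, z0=z0_conn)
    right_t = left_t.flipped()
    model_t = left_t ** DUT ** right_t
    # squared error between modelled and measured S-parameters
    return np.sum(np.abs(model_t.s - MSL100.s)**2)

res_z0 = minimize_scalar(connector_error, bounds=(40, 70), method='bounded')
print('Fitted connector z0: {:.1f} Ohm'.format(res_z0.x))
```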
plt.figure()
plt.title('Measured vs modelled data')
MSL100.plot_s_db()
mod.name = 'Model'
mod.plot_s_db(0, 0, color='k')
mod.plot_s_db(1, 0, color='k')
plt.show()
Explanation: Final comparison
End of explanation |
5,510 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>GetsDrawn DotCom</h1>
This is a python script to generate the website GetsDrawn. It takes data from /r/RedditGetsDrawn and makes something awesome.
The script has evolved and been rewritten several times.
The first script for rgdsnatch was written after I got banned from posting my artwork on /r/RedditGetsDrawn. The plan was to create a new site that displayed stuff from /r/RedditGetsDrawn.
Currently it only displays the most recent 25 items on redditgetsdrawn. The script looks at the newest 25 reference photos on RedditGetsDrawn. It focuses only on jpeg/png images and ignores links to files that do not end in .jpg or .png.
Instead of ignoring those files, it needs to get the image (or images, in some cases) from the link.
The photos are always submitted from imgur.
Still filter out the i.imgur files, but take the links and filter them through a python imgur module returning the .jpeg or .png files.
This is moving forward from rgdsnatch.py because I am stuck on it.
TODO
Fix the links that don't link to png/jpeg and link to webaddress.
Needs to get the images that are at that web address and embed them.
Display artwork submitted under the images.
Upload artwork to user. Sends them a message on redditgetsdrawn with links.
More pandas
Saves reference images to imgs/year/month/day/reference/username-reference.png
Saves art images to imgs/year/month/day/art/username-line-bw-colour.png
Creates index.html file with
Step1: If I save the data to the file, how am I going to get it to update as the post is archived, such as up and down votes?
Step2: Need to save json object.
Dict is created but it isn't saving. Looping through lisrgc twice, should only require the one loop.
Cycle through lisr and append to dict/convert to json, and also cycle through lisr.author meta folders saving the json that was created.
Step3: I have it creating a meta folder and creating/writing username.meta files. It wrote 'test' in each folder, but now it writes the photo author title of post.. the username/image data. It should be writing more than author title - maybe upvotes/downvotes, subreddit, time published etc.
Step4: Instead of creating these white images, why not download the art replies of the reference photo.
Step5: I want to save the list of usernames that submit images as png files in a dir.
Currently when I call the list of authors it returns Redditor(user_name='theusername'). I want to return 'theusername'.
Once this is resolved I can add '-line.png' '-bw.png' '-colour.png' to each folder.
Step6: Filter the non jpeg/png links. Need to perform request or imgur api to get the jpeg/png files from the link. Hey maybe bs4?
Step7: I need to get the image ids from each url. Strip the http | Python Code:
import os
import requests
from bs4 import BeautifulSoup
import re
import json
import time
import praw
import dominate
from dominate.tags import *
from time import gmtime, strftime
#import nose
#import unittest
import numpy as np
import pandas as pd
from pandas import *
from PIL import Image
from pprint import pprint
#import pyttsx
import shutil
gtsdrndir = ('/home/wcmckee/getsdrawndotcom')
os.chdir(gtsdrndir)
r = praw.Reddit(user_agent='getsdrawndotcom')
#getmin = r.get_redditor('itwillbemine')
#mincom = getmin.get_comments()
#engine = pyttsx.init()
#engine.say('The quick brown fox jumped over the lazy dog.')
#engine.runAndWait()
#shtweet = []
#for mi in mincom:
# print mi
# shtweet.append(mi)
bodycom = []
bodyicv = dict()
#beginz = pyttsx.init()
#for shtz in shtweet:
# print shtz.downs
# print shtz.ups
# print shtz.body
# print shtz.replies
#beginz.say(shtz.author)
#beginz.say(shtz.body)
#beginz.runAndWait()
# bodycom.append(shtz.body)
#bodyic
#bodycom
getnewr = r.get_subreddit('redditgetsdrawn')
rdnew = getnewr.get_new()
lisrgc = []
lisauth = []
for uz in rdnew:
#print uz
lisrgc.append(uz)
gtdrndic = dict()
imgdir = ('/home/wcmckee/getsdrawndotcom/imgs')
artlist = os.listdir(imgdir)
from time import time
yearz = strftime("%y", gmtime())
monthz = strftime("%m", gmtime())
dayz = strftime("%d", gmtime())
#strftime("%y %m %d", gmtime())
imgzdir = ('imgs/')
yrzpat = (imgzdir + yearz)
monzpath = (yrzpat + '/' + monthz)
dayzpath = (monzpath + '/' + dayz)
rmgzdays = (dayzpath + '/reference')
imgzdays = (dayzpath + '/art')
metzdays = (dayzpath + '/meta')
repathz = ('imgs/' + yearz + '/' + monthz + '/' + dayz + '/')
metzdays
imgzdays
repathz
def ospacheck():
if os.path.isdir(imgzdir + yearz) == True:
print 'its true'
else:
print 'its false'
os.mkdir(imgzdir + yearz)
ospacheck()
#if os.path.isdir(imgzdir + yearz) == True:
# print 'its true'
#else:
# print 'its false'
# os.mkdir(imgzdir + yearz)
lizmon = ['monzpath', 'dayzpath', 'imgzdays', 'rmgzdays', 'metzdays']
for liz in lizmon:
if os.path.isdir(liz) == True:
print 'its true'
else:
print 'its false'
os.mkdir(liz)
fullhom = ('/home/wcmckee/getsdrawndotcom/')
#artlist
httpad = ('http://getsdrawn.com/imgs')
#im = Image.new("RGB", (512, 512), "white")
#im.save(file + ".thumbnail", "JPEG")
rmgzdays = (dayzpath + '/reference')
imgzdays = (dayzpath + '/art')
metzdays = (dayzpath + '/meta')
os.chdir(fullhom + metzdays)
metadict = dict()
Explanation: <h1>GetsDrawn DotCom</h1>
This is a python script to generate the website GetsDrawn. It takes data from /r/RedditGetsDrawn and makes something awesome.
The script has evolved and been rewritten several times.
The first script for rgdsnatch was written after I got banned from posting my artwork on /r/RedditGetsDrawn. The plan was to create a new site that displayed stuff from /r/RedditGetsDrawn.
Currently it only displays the most recent 25 items on redditgetsdrawn. The script looks at the newest 25 reference photos on RedditGetsDrawn. It focuses only on jpeg/png images and ignores links to files that do not end in .jpg or .png.
Instead of ignoring those files, it needs to get the image (or images, in some cases) from the link.
The photos are always submitted from imgur.
Still filter out the i.imgur files, but take the links and filter them through a python imgur module returning the .jpeg or .png files.
This is moving forward from rgdsnatch.py because I am stuck on it.
TODO
Fix the links that don't link to png/jpeg and link to webaddress.
Needs to get the images that are at that web address and embed them.
Display artwork submitted under the images.
Upload artwork to user. Sends them a message on redditgetsdrawn with links.
More pandas
Saves reference images to imgs/year/month/day/reference/username-reference.png
Saves art images to imgs/year/month/day/art/username-line-bw-colour.png
Creates index.html file with:
Title of site and logo: GetsDrawn
Last updated date and time.
Path of image file /imgs/year/month/day/username-reference.png.
(This needs changed to just their username).
Save off .meta data from reddit of each photo, saving it to reference folder.
username-yrmnthday.meta - contains info such as author, title, upvotes, downvotes.
Currently saving .meta files to a meta folder - along side art and reference.
Folder sorting system of files.
websitename/index.html-style.css-imgs/YEAR(15)-MONTH(2)-DAY(4)/art-reference-meta
Inside art folder
Currently it generates USERNAME-line/bw/colour.png 50/50 white files. Maybe should be getting art replies from reddit?
Inside reference folder
The reference folder is working decently.
it creates USERNAME-reference.png / jpeg files.
Currently saves username-line-bw-colour.png to imgs folder. Instead get it to save to imgs/year/month/day/usernames.png.
Script checks the year/month/day and if the folder isn't created, it creates it. If the folder is there, exit.
Maybe get the reference image and save it with the line/bw/color.pngs
The script now filters the jpeg and png image and skips links to imgur pages. This needs to be fixed by getting the images from the imgur pages.
It renames the image files to the redditor username followed by a -reference tag (and ending with png of course).
It opens these files up with PIL and checks the sizes.
It needs to resize the images that are larger than 800px to 800px.
These images need to be linked in the index.html instead of the imgur alternatives.
Instead of the jpeg/png files on imgur they are downloaded to the server with this script.
Filter through as images are getting downloaded and if it has been less than certain time or if the image has been submitted before
Extending the subreddits it gets data from to cycle though a list, run script though list of subreddits.
Browse certain days - Current day by default but option to scroll through other days.
Filters - male/female/animals/couples etc
Function that returns only male portraits.
tags to add to photos.
Filter images with tags
End of explanation
for lisz in lisrgc:
metadict.update({'up': lisz.ups})
metadict.update({'down': lisz.downs})
metadict.update({'title': lisz.title})
metadict.update({'created': lisz.created})
#metadict.update({'createdutc': lisz.created_utc})
#print lisz.ups
#print lisz.downs
#print lisz.created
#print lisz.comments
metadict
Explanation: If I save the data to the file, how am I going to get it to update as the post is archived, such as up and down votes?
End of explanation
for lisr in lisrgc:
gtdrndic.update({'title': lisr.title})
lisauth.append(str(lisr.author))
for osliz in os.listdir(fullhom + metzdays):
with open(str(lisr.author) + '.meta', "w") as f:
rstrin = lisr.title.encode('ascii', 'ignore').decode('ascii')
#print matdict
#metadict = dict()
#for lisz in lisrgc:
# metadict.update({'up': lisz.ups})
# metadict.update({'down': lisz.downs})
# metadict.update({'title': lisz.title})
# metadict.update({'created': lisz.created})
f.write(rstrin)
#matdict
Explanation: Need to save json object.
Dict is created but it isn't saving. Looping through lisrgc twice, should only require the one loop.
Cycle through lisr and append to dict/convert to json, and also cycle through lisr.author meta folders saving the json that was created.
End of explanation
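One way to do the whole thing in a single loop (an illustrative sketch, not the original author's code; it assumes lisrgc is populated and the current directory is still the meta folder set above) is to build a small dict per submission and dump it as JSON:
```
# Sketch: one JSON .meta file per submission, written in a single loop.
for lisr in lisrgc:
    submeta = {'author': str(lisr.author),
               'title': lisr.title,
               'ups': lisr.ups,
               'downs': lisr.downs,
               'created': lisr.created}
    with open(str(lisr.author) + '.meta', 'w') as f:
        json.dump(submeta, f)
```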
#os.listdir(dayzpath)
Explanation: I have it creating a meta folder and creating/writing username.meta files. It wrote 'test' in each folder, but now it writes the photo author title of post.. the username/image data. It should be writing more than author title - maybe upvotes/downvotes, subreddit, time published etc.
End of explanation
#for lisa in lisauth:
# #print lisa + '-line.png'
# im = Image.new("RGB", (512, 512), "white")
# im.save(lisa + '-line.png')
# im = Image.new("RGB", (512, 512), "white")
# im.save(lisa + '-bw.png')
#print lisa + '-bw.png'
# im = Image.new("RGB", (512, 512), "white")
# im.save(lisa + '-colour.png')
#print lisa + '-colour.png'
os.listdir('/home/wcmckee/getsdrawndotcom/imgs')
#lisauth
Explanation: Instead of creating these white images, why not download the art replies of the reference photo.
End of explanation
#lisr.author
namlis = []
opsinz = open('/home/wcmckee/visignsys/index.meta', 'r')
panz = opsinz.read()
os.chdir('/home/wcmckee/getsdrawndotcom/' + rmgzdays)
Explanation: I want to save the list of usernames that submit images as png files in a dir.
Currently when I call the list of authors it returns Redditor(user_name='theusername'). I want to return 'theusername'.
Once this is resolved I can add '-line.png' '-bw.png' '-colour.png' to each folder.
End of explanation
from imgurpython import ImgurClient
opps = open('/home/wcmckee/ps.txt', 'r')
opzs = open('/home/wcmckee/ps2.txt', 'r')
oprd = opps.read()
opzrd = opzs.read()
client = ImgurClient(oprd, opzrd)
# Example request
#items = client.gallery()
#for item in items:
# print(item.link)
#itz = client.get_album_images()
linklis = []
Explanation: Filter the non jpeg/png links. Need to perform request or imgur api to get the jpeg/png files from the link. Hey maybe bs4?
End of explanation
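A possible imgur-API route (a hedged sketch: it assumes get_album_images takes an album id and returns objects with a .link attribute, and the URL parsing is deliberately simplistic) would be to ask the API for the direct image links of an album instead of scraping the page:
```
# Sketch: resolve an imgur album URL to direct image links via the API.
def imgur_album_links(url):
    album_id = url.rstrip('/').split('/')[-1]
    return [img.link for img in client.get_album_images(album_id)]

# e.g. imgur_album_links('http://imgur.com/a/abc12')  # 'abc12' is a made-up id
```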
for rdz in lisrgc:
if 'http://imgur.com' in rdz.url:
print rdz.url
#itz = client.get_album_images()
# reimg = requests.get(rdz.url)
## retxt = reimg.text
# souptxt = BeautifulSoup(''.join(retxt))
# soupurz = souptxt.findAll('img')
# for soupuz in soupurz:
# imgurl = soupuz['src']
# print imgurl
# linklis.append(imgurl)
#try:
# imzdata = requests.get(imgurl)
linklis
if '.jpg' in linklis:
print 'yes'
else:
print 'no'
#panz()
for rdz in lisrgc:
(rdz.title)
#a(rdz.url)
if 'http://i.imgur.com' in rdz.url:
#print rdz.url
print (rdz.url)
url = rdz.url
response = requests.get(url, stream=True)
with open(str(rdz.author) + '-reference.png', 'wb') as out_file:
shutil.copyfileobj(response.raw, out_file)
del response
apsize = []
aptype = []
basewidth = 600
imgdict = dict()
for rmglis in os.listdir('/home/wcmckee/getsdrawndotcom/' + rmgzdays):
#print rmglis
im = Image.open(rmglis)
#print im.size
imgdict.update({rmglis : im.size})
#im.thumbnail(size, Image.ANTIALIAS)
#im.save(file + ".thumbnail", "JPEG")
apsize.append(im.size)
aptype.append(rmglis)
#for imdva in imgdict.values():
#print imdva
#for deva in imdva:
#print deva
# if deva < 1000:
# print 'omg less than 1000'
# else:
# print 'omg more than 1000'
# print deva / 2
#print imgdict.values
# Needs to update imgdict.values with this new number. Must halve height also.
#basewidth = 300
#img = Image.open('somepic.jpg')
#wpercent = (basewidth/float(img.size[0]))
#hsize = int((float(img.size[1])*float(wpercent)))
#img = img.resize((basewidth,hsize), PIL.Image.ANTIALIAS)
#img.save('sompic.jpg')
#os.chdir(metzdays)
#for numz in apsize:
# print numz[0]
# if numz[0] > 800:
# print ('greater than 800')
# else:
# print ('less than 800!')
reliz = []
for refls in os.listdir('/home/wcmckee/getsdrawndotcom/' + rmgzdays):
#print rmgzdays + refls
reliz.append(rmgzdays + '/' + refls)
reliz
aptype
opad = open('/home/wcmckee/ad.html', 'r')
opred = opad.read()
str2 = opred.replace("\n", "")
str2
doc = dominate.document(title='GetsDrawn')
with doc.head:
link(rel='stylesheet', href='style.css')
script(type ='text/javascript', src='script.js')
str(str2)
with div():
attr(cls='header')
h1('GetsDrawn')
p(img('imgs/getsdrawn-bw.png', src='imgs/getsdrawn-bw.png'))
#p(img('imgs/15/01/02/ReptileLover82-reference.png', src= 'imgs/15/01/02/ReptileLover82-reference.png'))
h1('Updated ', strftime("%a, %d %b %Y %H:%M:%S +0000", gmtime()))
p(panz)
p(bodycom)
with doc:
with div(id='body').add(ol()):
for rdz in reliz:
#h1(rdz.title)
#a(rdz.url)
#p(img(rdz, src='%s' % rdz))
#print rdz
p(img(rdz, src = rdz))
p(rdz)
#print rdz.url
#if '.jpg' in rdz.url:
# img(rdz.urlz)
#else:
# a(rdz.urlz)
#h1(str(rdz.author))
#li(img(i.lower(), src='%s' % i))
with div():
attr(cls='body')
p('GetsDrawn is open source')
a('https://github.com/getsdrawn/getsdrawndotcom')
a('https://reddit.com/r/redditgetsdrawn')
#print doc
docre = doc.render()
#s = docre.decode('ascii', 'ignore')
yourstring = docre.encode('ascii', 'ignore').decode('ascii')
indfil = ('/home/wcmckee/getsdrawndotcom/index.html')
mkind = open(indfil, 'w')
mkind.write(yourstring)
mkind.close()
#os.system('scp -r /home/wcmckee/getsdrawndotcom/ [email protected]:/home/wcmckee/getsdrawndotcom')
#rsync -azP source destination
#updatehtm = raw_input('Update index? Y/n')
#updateref = raw_input('Update reference? Y/n')
#if 'y' or '' in updatehtm:
# os.system('scp -r /home/wcmckee/getsdrawndotcom/index.html [email protected]:/home/wcmckee/getsdrawndotcom/index.html')
#elif 'n' in updatehtm:
# print 'not uploading'
#if 'y' or '' in updateref:
# os.system('rsync -azP /home/wcmckee/getsdrawndotcom/ [email protected]:/home/wcmckee/getsdrawndotcom/')
os.system('scp -r /home/wcmckee/getsdrawndotcom/index.html [email protected]:/home/wcmckee/getsdrawndotcom/index.html')
#os.system('scp -r /home/wcmckee/getsdrawndotcom/style.css [email protected]:/home/wcmckee/getsdrawndotcom/style.css')
Explanation: I need to get the image ids from each url. Strip the http://imgur.com/ from the string. The gallery id is the random characters after. If it's an album, an 'a' is added. If there are multiple images, a ',' is used to separate them.
Doesn't currently work.
End of explanation |
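A minimal parser along the lines described above might look like this (an untested sketch of the idea, purely string based, with made-up example ids):
```
# Sketch: pull imgur ids out of a submission URL as described above.
def imgur_ids(url):
    tail = url.replace('http://imgur.com/', '').replace('https://imgur.com/', '')
    is_album = tail.startswith('a/')
    if is_album:
        tail = tail[2:]
    return is_album, tail.split(',')

# imgur_ids('http://imgur.com/abc12,def34') -> (False, ['abc12', 'def34'])
```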
5,511 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below
Step9: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token
Step11: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step13: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step15: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below
Step18: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step21: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
Step24: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
From Skip-gram word2vec
Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with a uniform random numbers between -1 and 1 using tf.random_uniform.
Step27: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
Step30: Build the Neural Network
Apply the functions you implemented above to
Step33: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements
Step35: Neural Network Training
Hyperparameters
Tune the following parameters
Step37: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step41: Save Parameters
Save seq_length and save_dir for generating a new TV script.
Step43: Checkpoint
Step46: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names
Step49: Choose Word
Implement the pick_word() function to select the next word using probabilities.
Step51: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
vocab_to_int = {word:integer for integer,word in enumerate(set(text))}
int_to_vocab = {integer:word for integer,word in enumerate(set(text))}
return vocab_to_int, int_to_vocab
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
dictionary = {
'.' : '||Period||',
',' : '||Comma||',
'"' : '||Quotation_Mark||',
';' : '||Semicolon||',
'!' : '||Exclamation_Mark||',
'?' : '||Question_Mark||',
'(' : '||Left_Parentheses||',
')' : '||Right_Parentheses||',
'--': '||Dash||',
'\n': '||Return||',
}
#print(dictionary)
return dictionary
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to token the symbols and add the delimiter (space) around it. This separates the symbols as it's own word, making it easier for the neural network to predict on the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
inputs = tf.placeholder(dtype=tf.int32, shape=[None, None], name='input')
targets = tf.placeholder(dtype=tf.int32, shape=[None, None], name='targets')
learningRate = tf.placeholder(dtype=tf.float32, shape=None, name='learning_rate')
return inputs, targets, learningRate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
cell = tf.contrib.rnn.BasicLSTMCell(rnn_size)
cell = tf.contrib.rnn.MultiRNNCell([cell])
initial_state = tf.identity(cell.zero_state(batch_size, tf.float32), name='initial_state')
return cell, initial_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
From Skip-gram word2vec
Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with a uniform random numbers between -1 and 1 using tf.random_uniform.
End of explanation
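If the lookup itself is unclear, the same idea can be shown with plain numpy (a small illustration added here, independent of the TensorFlow code above): the embedding matrix is simply indexed by the word ids.
```
# Concept illustration: embedding lookup is row selection.
demo_embed = np.random.uniform(-1, 1, (5, 3))  # 5 words, 3 embedding dims
demo_ids = np.array([0, 2, 2])
print(demo_embed[demo_ids])  # analogous to tf.nn.embedding_lookup
```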
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name='final_state')
return outputs, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
embed = get_embed(input_data, vocab_size, embed_dim)
outputs, final_state = build_rnn(cell, embed)
logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None)
return logits, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
n_batch = len(int_text) // (batch_size * seq_length)
int_text_x = np.array(int_text[:batch_size * seq_length * n_batch])
int_text_y = np.roll(int_text_x, -1)
x_batches = np.split(int_text_x.reshape(batch_size, -1), n_batch, 1)
y_batches = np.split(int_text_y.reshape(batch_size, -1), n_batch, 1)
return np.array(list(zip(x_batches, y_batches)))
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2], [ 7 8], [13 14]]
# Batch of targets
[[ 2 3], [ 8 9], [14 15]]
]
# Second Batch
[
# Batch of Input
[[ 3 4], [ 9 10], [15 16]]
# Batch of targets
[[ 4 5], [10 11], [16 17]]
]
# Third Batch
[
# Batch of Input
[[ 5 6], [11 12], [17 18]]
# Batch of targets
[[ 6 7], [12 13], [18 1]]
]
]
```
Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
End of explanation
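A quick way to compare the implementation against the documented example (an optional sanity check, not required by the project) is to call it on the same toy input:
```
# Optional sanity check against the example above.
test_batches = get_batches(list(range(1, 21)), 3, 2)
print(test_batches.shape)    # expected: (3, 2, 3, 2)
print(test_batches[0][0])    # expected: [[ 1  2] [ 7  8] [13 14]]
```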
# Number of Epochs
num_epochs = 200
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 256
# Embedding Dimension Size
embed_dim = 300
# Sequence Length
seq_length = 20
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 13
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Explanation: Checkpoint
End of explanation
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
input_tensor=loaded_graph.get_tensor_by_name("input:0")
initial_state_tensor=loaded_graph.get_tensor_by_name("initial_state:0")
final_state_tensor=loaded_graph.get_tensor_by_name("final_state:0")
probs_tensor=loaded_graph.get_tensor_by_name("probs:0")
return input_tensor, initial_state_tensor, final_state_tensor, probs_tensor
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
def pick_word(probabilities, int_to_vocab):
"""
Pick the next word in the generated text
:param probabilities: Probabilities of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
"""
return int_to_vocab[np.argmax(probabilities)]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
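The argmax choice above always returns the single most likely word, which tends to make the generated script repetitive. A common alternative — shown here as a hedged sketch, not part of the original project — is to sample the next word from the probability distribution instead:
def pick_word_sampled(probabilities, int_to_vocab):
    """Sample the next word id from `probabilities` instead of taking the argmax."""
    word_id = np.random.choice(len(probabilities), p=probabilities)
    return int_to_vocab[word_id]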
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation |
5,512 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
controlling jobs locally
This is a set of convenient commands used to control simulations locally.
🏄 running scripts 🏄
Step1: managing lock files
counting
Step2: removing older files
⚠️ THIS WILL DELETE (cached) FILES
version control | Python Code:
!ipython3 experiment_fle.py
!ipython3 experiment_speed.py
!ipython3 experiment_contrast.py
!ipython3 experiment_MotionReversal.py
!ipython3 experiment_SI_controls.py
Explanation: controlling jobs locally
This is a set of convenient commands used to control simulations locally.
🏄 running scripts 🏄
End of explanation
!find . -name *lock* -exec ls -l {} \; |wc -l
!find . -name *lock* -exec ls -l {} \;
Explanation: managing lock files
counting:
End of explanation
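The "removing older files" step mentioned below is not shown in this excerpt; a possible sketch, where the pattern, path and age threshold are all assumptions — review the listed files before deleting anything:
# ⚠️ assumption-laden sketch: list lock/cache files older than 7 days under the current tree
!find . -name "*lock*" -mtime +7 -exec ls -l {} \;
# !find . -name "*lock*" -mtime +7 -delete   # uncomment only once the list above looks right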
!git commit -m' finished new run ' ../notebooks/control_jobs.ipynb
Explanation: removing older files
⚠️ THIS WILL DELETE (cached) FILES
version control
End of explanation |
5,513 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Best practices
Let's start with pep8 (https
Step1: Look at Pandas Dataframes
this is italicized
Step2: Pivot Tables w/ pandas
http
Step3: Enhanced Pandas Dataframe Display
Step4: Tab
Step5: shift-tab
Step6: shift-tab-tab
(equivalent in Lab to shift-tab)
Step7: shift-tab-tab-tab-tab
(doesn't work in lab)
Step8: ?
Step9: ??
(Lab can scroll if you click)
Step11: Inspect everything
Step12: Keyboard shortcuts
For help, ESC + h
h doesn't work in Lab
l / shift L for line numbers
Step13: Headings and LaTeX
With text and $\LaTeX$ support.
$$\begin{align}
B'&=-\nabla \times E,\\
E'&=\nabla \times B - 4\pi j
\end{align}$$
Step14: More markdown
Step15: You can also get monospaced fonts by indenting 4 spaces
Step16: ```sql
SELECT first_name,
last_name,
year_of_birth
FROM presidents
WHERE year_of_birth > 1800;
```
Step17: Other cell-magics
Step18: Autoreload is cool -- don't have time to give it the attention that it deserves.
https
Step19: Multicursor magic
Hold down option, click and drag.
Step20: Find and replace -- regex notebook (or cell) wide.
R
pyRserve
rpy2 | Python Code:
%matplotlib inline
%config InlineBackend.figure_format='retina'
# Add this to python2 code to make life easier
from __future__ import absolute_import, division, print_function
import numpy as np
# don't do:
# from numpy import *
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import ipywidgets
import os
import sys
import warnings
sns.set()
plt.rcParams['figure.figsize'] = (12, 8)
sns.set_style("darkgrid")
sns.set_context("poster", font_scale=1.3)
warnings.filterwarnings('ignore')
Explanation: Best practices
Let's start with pep8 (https://www.python.org/dev/peps/pep-0008/)
Imports should be grouped in the following order:
standard library imports
related third party imports
local application/library specific imports
You should put a blank line between each group of imports.
Put any relevant __all__ specification after the imports.
End of explanation
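A minimal illustration of that grouping (the module and names below are placeholders, not part of this notebook):
# standard library imports
import os
import sys

# related third party imports
import numpy as np
import pandas as pd

# local application / library specific imports
# from mypackage import myhelpers   # hypothetical local import

__all__ = ["main_function", "OtherThing"]   # hypothetical public names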
df = pd.read_csv("../data/coal_prod_cleaned.csv")
df.head()
df.shape
# import qgrid # Put imports at the top
# qgrid.nbinstall(overwrite=True)
# qgrid.show_grid(df[['MSHA_ID',
# 'Year',
# 'Mine_Name',
# 'Mine_State',
# 'Mine_County']], remote_js=True)
# Check out http://nbviewer.ipython.org/github/quantopian/qgrid/blob/master/qgrid_demo.ipynb for more (including demo)
Explanation: Look at Pandas Dataframes
this is italicized
End of explanation
!conda install pivottablejs -y
df = pd.read_csv("../data/mps.csv", encoding="ISO-8859-1")
df.head(10)
Explanation: Pivot Tables w/ pandas
http://nicolas.kruchten.com/content/2015/09/jupyter_pivottablejs/
End of explanation
# Province, Party, Average, Age, Heatmap
from pivottablejs import pivot_ui
pivot_ui(df)
Explanation: Enhanced Pandas Dataframe Display
End of explanation
import numpy as np
np.random.
Explanation: Tab
End of explanation
np.linspace(start=, )
Explanation: shift-tab
End of explanation
np.linspace(50, 150, num=100,)
Explanation: shift-tab-tab
(equivalent in Lab to shift-tab)
End of explanation
np.linspace(start=, )
Explanation: shift-tab-tab-tab-tab
(doesn't work in lab)
End of explanation
np.linspace?
Explanation: ?
End of explanation
np.linspace??
Explanation: ??
(Lab can scroll if you click)
End of explanation
def silly_absolute_value_function(xval):
"""Takes a value and returns the value."""
xval_sq = xval ** 2.0
1 + 4
xval_abs = np.sqrt(xval_sq)
return xval_abs
silly_absolute_value_function(2)
silly_absolute_value_function?
silly_absolute_value_function??
Explanation: Inspect everything
End of explanation
# in select mode, shift j/k (to select multiple cells at once)
# split cell with ctrl shift -
first = 1
second = 2
third = 3
first = 1
second = 2
third = 3
# a new cell above
# b new cell below
Explanation: Keyboard shortcuts
For help, ESC + h
h doesn't work in Lab
l / shift L for line numbers
End of explanation
%%latex
If you want to get crazier...
\begin{equation}
\oint_S {E_n dA = \frac{1}{{\varepsilon _0 }}} Q_\textrm{inside}
\end{equation}
Explanation: Headings and LaTeX
With text and $\LaTeX$ support.
$$\begin{align}
B'&=-\nabla \times E,\\
E'&=\nabla \times B - 4\pi j
\end{align}$$
End of explanation
# Indent
# Cmd + [
# Cmd + ]
# Comment
# Cmd + /
Explanation: More markdown
End of explanation
# note difference w/ lab
Explanation: You can also get monospaced fonts by indenting 4 spaces:
mkdir toc
cd toc
Wrap with triple-backticks and language:
bash
mkdir toc
cd toc
wget https://repo.continuum.io/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
SQL
SELECT *
FROM tablename
End of explanation
%%bash
pwd
for i in *.ipynb
do
echo ${i} | awk -F . '{print $1}'
done
echo
echo "break"
echo
for i in *.ipynb
do
echo $i | awk -F - '{print $2}'
done
Explanation: ```sql
SELECT first_name,
last_name,
year_of_birth
FROM presidents
WHERE year_of_birth > 1800;
```
End of explanation
%%writefile ../scripts/temp.py
from __future__ import absolute_import, division, print_function
"""I promise that I'm not cheating!"""
!cat ../scripts/temp.py
Explanation: Other cell-magics
End of explanation
%load_ext autoreload
%autoreload 2
example_dict = {}
# Indent/dedent/comment
for _ in range(5):
example_dict["one"] = 1
example_dict["two"] = 2
example_dict["three"] = 3
example_dict["four"] = 4
Explanation: Autoreload is cool -- don't have time to give it the attention that it deserves.
https://gist.github.com/jbwhit/38c1035c48cdb1714fc8d47fa163bfae
End of explanation
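For reference, the extension's modes (a short summary of IPython's documented behaviour, not original notebook content):
#   %autoreload 0  -> disable automatic reloading
#   %autoreload 1  -> reload only modules imported with %aimport
#   %autoreload 2  -> reload all modules (except those excluded) before executing code
%autoreload?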
example_dict["one_better_name"] = 1
example_dict["two_better_name"] = 2
example_dict["three_better_name"] = 3
example_dict["four_better_name"] = 4
Explanation: Multicursor magic
Hold down option, click and drag.
End of explanation
import numpy as np
!conda install -c r rpy2 -y
import rpy2
%load_ext rpy2.ipython
X = np.array([0,1,2,3,4])
Y = np.array([3,5,4,6,7])
%%R?
%%R -i X,Y -o XYcoef
XYlm = lm(Y~X)
XYcoef = coef(XYlm)
print(summary(XYlm))
par(mfrow=c(2,2))
plot(XYlm)
type(XYcoef)
XYcoef**2
thing()
Explanation: Find and replace -- regex notebook (or cell) wide.
R
pyRserve
rpy2
End of explanation |
5,514 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Topic Modelling
Author
Step1: 1. Corpus acquisition.
In this notebook we will explore some tools for text processing and analysis and two topic modeling algorithms available from Python toolboxes.
To do so, we will explore and analyze collections of Wikipedia articles from a given category, using wikitools, that makes easy the capture of content from wikimedia sites.
(As a side note, there are many other available text collections to test topic modelling algorithm. In particular, the NLTK library has many examples, that can explore them using the nltk.download() tool.
import nltk
nltk.download()
for instance, you can take the gutenberg dataset
Mycorpus = nltk.corpus.gutenberg
text_name = Mycorpus.fileids()[0]
raw = Mycorpus.raw(text_name)
Words = Mycorpus.words(text_name)
Also, tools like Gensim or Sci-kit learn include text databases to work with).
In order to use Wikipedia data, we will select a single category of articles
Step2: You can try with any other categories. Take into account that the behavior of topic modelling algorithms may depend on the amount of documents available for the analysis. Select a category with at least 100 articles. You can browse the wikipedia category tree here, https
Step3: Now, we have stored the whole text collection in two lists
Step4: 2. Corpus Processing
Topic modelling algorithms process vectorized data. In order to apply them, we need to transform the raw text input data into a vector representation. To do so, we will remove irrelevant information from the text data and preserve as much relevant information as possible to capture the semantic content in the document collection.
Thus, we will proceed with the following steps
Step5: 2.2. Stemming vs Lemmatization
At this point, we can choose between applying a simple stemming or using lemmatization. We will try both to test their differences.
Task
Step6: Alternatively, we can apply lemmatization. For english texts, we can use the lemmatizer from NLTK, which is based on WordNet. If you have not used wordnet before, you will likely need to download it from nltk
Step7: Task
Step8: One of the advantages of the lemmatizer method is that the result of lemmatization is still a true word, which is more advisable for the presentation of text processing results and lemmatization.
However, without using contextual information, lemmatize() does not remove grammatical differences. This is the reason why "is" or "are" are preserved and not replaced by infinitive "be".
As an alternative, we can apply .lemmatize(word, pos), where 'pos' is a string code specifying the part-of-speech (pos), i.e. the grammatical role of the words in its sentence. For instance, you can check the difference between wnl.lemmatize('is') and wnl.lemmatize('is, pos='v').
2.3. Vectorization
Up to this point, we have transformed the raw text collection of articles in a list of articles, where each article is a collection of the word roots that are most relevant for semantic analysis. Now, we need to convert these data (a list of token lists) into a numerical representation (a list of vectors, or a matrix). To do so, we will start using the tools provided by the gensim library.
As a first step, we create a dictionary containing all tokens in our text corpus, and assigning an integer identifier to each one of them.
Step9: In the second step, let us create a numerical version of our corpus using the doc2bow method. In general, D.doc2bow(token_list) transform any list of tokens into a list of tuples (token_id, n), one per each token in token_list, where token_id is the token identifier (according to dictionary D) and n is the number of occurrences of such token in token_list.
Task
Step10: At this point, it is good to make sure to understand what has happened. In corpus_clean we had a list of token lists. With it, we have constructed a Dictionary, D, which assign an integer identifier to each token in the corpus.
After that, we have transformed each article (in corpus_clean) into a list of tuples (id, n).
Step11: Note that we can interpret each element of corpus_bow as a sparse_vector. For example, a list of tuples
[(0, 1), (3, 3), (5,2)]
for a dictionary of 10 elements can be represented as a vector, where any tuple (id, n) states that position id must take value n. The rest of positions must be zero.
[1, 0, 0, 3, 0, 2, 0, 0, 0, 0]
These sparse vectors will be the inputs to the topic modeling algorithms.
Note that, at this point, we have built a Dictionary containing
Step12: and a bow representation of a corpus with
Step13: Before starting with the semantic analysis, it is interesting to observe the token distribution for the given corpus.
Step14: ids_sorted is a list of all token ids, sorted by decreasing number of occurrences in the whole corpus. For instance, the most frequent term is
Step15: which appears
Step16: In the following we plot the most frequent terms in the corpus.
Step17: Exercise
Step18: Exercise
Step19: 3. Semantic Analysis
The dictionary D and the Bag of Words in corpus_bow are the key inputs to the topic model algorithms. In this section we will explore two algorithms
Step20: From now on, tfidf can be used to convert any vector from the old representation (bow integer counts) to the new one (TfIdf real-valued weights)
Step21: Or to apply a transformation to a whole corpus
Step22: 3.1. Latent Semantic Indexing (LSI)
Now we are ready to apply a topic modeling algorithm. Latent Semantic Indexing is provided by LsiModel.
Task
Step23: From LSI, we can check both the topic-tokens matrix and the document-topics matrix.
Now we can check the topics generated by LSI. An intuitive visualization is provided by the show_topics method.
Step24: However, a more useful representation of topics is as a list of tuples (token, value). This is provided by the show_topic method.
Task
Step25: LSI approximates any document as a linear combination of the topic vectors. We can compute the topic weights for any input corpus entered as input to the lsi model.
Step26: Task
Step27: 3.2. Latent Dirichlet Allocation (LDA)
There are several implementations of the LDA topic model in python
Step28: 3.2.2. LDA using python lda library
An alternative to gensim for LDA is the lda library from python. It requires a doc-frequency matrix as input
Step29: Document-topic distribution
Step30: It allows incremental updates
3.2.3. LDA using Sci-kit Learn
The input matrix to the sklearn implementation of LDA contains the token-counts for all documents in the corpus.
sklearn contains a powerful CountVectorizer method that can be used to construct the input matrix from the corpus_bow.
First, we will define an auxiliary function to print the top tokens in the model, that has been taken from the sklearn documentation.
Step31: Now, we need a dataset to feed the Count_Vectorizer object, by joining all tokens in corpus_clean in a single string, using a space ' ' as separator.
Step32: Now we are ready to compute the token counts.
Step33: Now we can apply the LDA algorithm.
Task
Step34: Task
Step35: Exercise | Python Code:
%matplotlib inline
# Required imports
from wikitools import wiki
from wikitools import category
import nltk
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
import gensim
import numpy as np
import lda
import lda.datasets
from time import time
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
import matplotlib.pyplot as plt
import pylab
from test_helper import Test
Explanation: Topic Modelling
Author: Jesús Cid Sueiro
Date: 2016/11/27
In this notebook we will explore some tools for text analysis in python. To do so, first we will import the requested python libraries.
End of explanation
site = wiki.Wiki("https://en.wikipedia.org/w/api.php")
# Select a category with a reasonable number of articles (>100)
# cat = "Economics"
cat = "Pseudoscience"
print cat
Explanation: 1. Corpus acquisition.
In this notebook we will explore some tools for text processing and analysis and two topic modeling algorithms available from Python toolboxes.
To do so, we will explore and analyze collections of Wikipedia articles from a given category, using wikitools, that makes easy the capture of content from wikimedia sites.
(As a side note, there are many other available text collections to test topic modelling algorithm. In particular, the NLTK library has many examples, that can explore them using the nltk.download() tool.
import nltk
nltk.download()
for instance, you can take the gutenberg dataset
Mycorpus = nltk.corpus.gutenberg
text_name = Mycorpus.fileids()[0]
raw = Mycorpus.raw(text_name)
Words = Mycorpus.words(text_name)
Also, tools like Gensim or Sci-kit learn include text databases to work with).
In order to use Wikipedia data, we will select a single category of articles:
End of explanation
# Loading category data. This may take a while
print "Loading category data. This may take a while..."
cat_data = category.Category(site, cat)
corpus_titles = []
corpus_text = []
for n, page in enumerate(cat_data.getAllMembersGen()):
print "\r Loading article {0}".format(n + 1),
corpus_titles.append(page.title)
corpus_text.append(page.getWikiText())
n_art = len(corpus_titles)
print "\nLoaded " + str(n_art) + " articles from category " + cat
Explanation: You can try with any other categories. Take into account that the behavior of topic modelling algorithms may depend on the amount of documents available for the analysis. Select a category with at least 100 articles. You can browse the wikipedia category tree here, https://en.wikipedia.org/wiki/Category:Contents, for instance.
We start downloading the text collection.
End of explanation
# n = 5
# print corpus_titles[n]
# print corpus_text[n]
Explanation: Now, we have stored the whole text collection in two lists:
corpus_titles, which contains the titles of the selected articles
corpus_text, with the text content of the selected wikipedia articles
You can browse the content of the wikipedia articles to get some intuition about the kind of documents that will be processed.
End of explanation
# You can comment this if the package is already available.
# Select option "d) Download", and identifier "punkt"
# Select option "d) Download", and identifier "stopwords"
# nltk.download()
stopwords_en = stopwords.words('english')
corpus_clean = []
for n, art in enumerate(corpus_text):
print "\rProcessing article {0} out of {1}".format(n + 1, n_art),
# This is to make sure that all characters have the appropriate encoding.
art = art.decode('utf-8')
# Tokenize each text entry.
# scode: tokens = <FILL IN>
token_list = word_tokenize(art)
# Convert all tokens in token_list to lowercase, remove non alfanumeric tokens and stem.
# Store the result in a new token list, clean_tokens.
# scode: filtered_tokens = <FILL IN>
filtered_tokens = [token.lower() for token in token_list if token.isalnum()]
# Remove all tokens in the stopwords list and append the result to corpus_clean
# scode: clean_tokens = <FILL IN>
clean_tokens = [token for token in filtered_tokens if token not in stopwords_en]
# scode: <FILL IN>
corpus_clean.append(clean_tokens)
print "\nLet's check the first tokens from document 0 after processing:"
print corpus_clean[0][0:30]
Test.assertTrue(len(corpus_clean) == n_art, 'List corpus_clean does not contain the expected number of articles')
Test.assertTrue(len([c for c in corpus_clean[0] if c in stopwords_en])==0, 'Stopwords have not been removed')
Explanation: 2. Corpus Processing
Topic modelling algorithms process vectorized data. In order to apply them, we need to transform the raw text input data into a vector representation. To do so, we will remove irrelevant information from the text data and preserve as much relevant information as possible to capture the semantic content in the document collection.
Thus, we will proceed with the following steps:
Tokenization, filtering and cleaning
Homogeneization (stemming or lemmatization)
Vectorization
2.1. Tokenization, filtering and cleaning.
The first step consists of the following:
Tokenization: convert text string into lists of tokens.
Filtering:
Removing capitalization: capital alphabetic characters will be transformed to their corresponding lowercase characters.
Removing non alphanumeric tokens (e.g. punctuation signs)
Cleaning: Removing stopwords, i.e., those words that are very common in the language and do not carry useful semantic content (articles, pronouns, etc.).
To do so, we will need some packages from the Natural Language Toolkit.
End of explanation
# Select stemmer.
stemmer = nltk.stem.SnowballStemmer('english')
corpus_stemmed = []
for n, token_list in enumerate(corpus_clean):
print "\rStemming article {0} out of {1}".format(n + 1, n_art),
# Convert all tokens in token_list to lowercase, remove non alfanumeric tokens and stem.
# Store the result in a new token list, clean_tokens.
# scode: stemmed_tokens = <FILL IN>
stemmed_tokens = [stemmer.stem(token) for token in token_list]
# Add art to the stemmed corpus
# scode: <FILL IN>
corpus_stemmed.append(stemmed_tokens)
print "\nLet's check the first tokens from document 0 after stemming:"
print corpus_stemmed[0][0:30]
Test.assertTrue((len([c for c in corpus_stemmed[0] if c!=stemmer.stem(c)]) < 0.1*len(corpus_stemmed[0])),
'It seems that stemming has not been applied properly')
Explanation: 2.2. Stemming vs Lemmatization
At this point, we can choose between applying a simple stemming or using lemmatization. We will try both to test their differences.
Task: Apply the .stem() method, from the stemmer object created in the first line, to corpus_filtered.
End of explanation
# You can comment this if the package is already available.
# Select option "d) Download", and identifier "wordnet"
# nltk.download()
Explanation: Alternatively, we can apply lemmatization. For english texts, we can use the lemmatizer from NLTK, which is based on WordNet. If you have not used wordnet before, you will likely need to download it from nltk
End of explanation
wnl = WordNetLemmatizer()
# Select stemmer.
corpus_lemmat = []
for n, token_list in enumerate(corpus_clean):
print "\rLemmatizing article {0} out of {1}".format(n + 1, n_art),
# scode: lemmat_tokens = <FILL IN>
lemmat_tokens = [wnl.lemmatize(token) for token in token_list]
# Add art to the stemmed corpus
# scode: <FILL IN>
corpus_lemmat.append(lemmat_tokens)
print "\nLet's check the first tokens from document 0 after stemming:"
print corpus_lemmat[0][0:30]
Explanation: Task: Apply the .lemmatize() method, from the WordNetLemmatizer object created in the first line, to corpus_filtered.
End of explanation
# Create dictionary of tokens
D = gensim.corpora.Dictionary(corpus_clean)
n_tokens = len(D)
print "The dictionary contains {0} tokens".format(n_tokens)
print "First tokens in the dictionary: "
for n in range(10):
print str(n) + ": " + D[n]
Explanation: One of the advantages of the lemmatizer method is that the result of lemmatization is still a true word, which is more advisable for the presentation of text processing results and lemmatization.
However, without using contextual information, lemmatize() does not remove grammatical differences. This is the reason why "is" or "are" are preserved and not replaced by infinitive "be".
As an alternative, we can apply .lemmatize(word, pos), where 'pos' is a string code specifying the part-of-speech (pos), i.e. the grammatical role of the words in its sentence. For instance, you can check the difference between wnl.lemmatize('is') and wnl.lemmatize('is, pos='v').
2.3. Vectorization
Up to this point, we have transformed the raw text collection of articles in a list of articles, where each article is a collection of the word roots that are most relevant for semantic analysis. Now, we need to convert these data (a list of token lists) into a numerical representation (a list of vectors, or a matrix). To do so, we will start using the tools provided by the gensim library.
As a first step, we create a dictionary containing all tokens in our text corpus, and assigning an integer identifier to each one of them.
End of explanation
# Transform token lists into sparse vectors on the D-space
corpus_bow = [D.doc2bow(doc) for doc in corpus_clean]
Test.assertTrue(len(corpus_bow)==n_art, 'corpus_bow has not the appropriate size')
Explanation: In the second step, let us create a numerical version of our corpus using the doc2bow method. In general, D.doc2bow(token_list) transform any list of tokens into a list of tuples (token_id, n), one per each token in token_list, where token_id is the token identifier (according to dictionary D) and n is the number of occurrences of such token in token_list.
Task: Apply the doc2bow method from gensim dictionary D, to all tokens in every article in corpus_clean. The result must be a new list named corpus_bow where each element is a list of tuples (token_id, number_of_occurrences).
End of explanation
print "Original article (after cleaning): "
print corpus_clean[0][0:30]
print "Sparse vector representation (first 30 components):"
print corpus_bow[0][0:30]
print "The first component, {0} from document 0, states that token 0 ({1}) appears {2} times".format(
corpus_bow[0][0], D[0], corpus_bow[0][0][1])
Explanation: At this point, it is good to make sure to understand what has happened. In corpus_clean we had a list of token lists. With it, we have constructed a Dictionary, D, which assign an integer identifier to each token in the corpus.
After that, we have transformed each article (in corpus_clean) into a list of tuples (id, n).
End of explanation
print "{0} tokens".format(len(D))
Explanation: Note that we can interpret each element of corpus_bow as a sparse_vector. For example, a list of tuples
[(0, 1), (3, 3), (5,2)]
for a dictionary of 10 elements can be represented as a vector, where any tuple (id, n) states that position id must take value n. The rest of positions must be zero.
[1, 0, 0, 3, 0, 2, 0, 0, 0, 0]
These sparse vectors will be the inputs to the topic modeling algorithms.
Note that, at this point, we have built a Dictionary containing
End of explanation
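To make the sparse representation concrete, the toy example above can be expanded to its dense form with gensim's matutils helper (an illustrative snippet, not part of the original notebook):
from gensim import matutils
sparse_vec = [(0, 1), (3, 3), (5, 2)]
print matutils.sparse2full(sparse_vec, length=10)   # -> [ 1.  0.  0.  3.  0.  2.  0.  0.  0.  0.]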
print "{0} Wikipedia articles".format(len(corpus_bow))
Explanation: and a bow representation of a corpus with
End of explanation
# SORTED TOKEN FREQUENCIES (I):
# Create a "flat" corpus with all tuples in a single list
corpus_bow_flat = [item for sublist in corpus_bow for item in sublist]
# Initialize a numpy array that we will use to count tokens.
# token_count[n] should store the number of ocurrences of the n-th token, D[n]
token_count = np.zeros(n_tokens)
# Count the number of occurrences of each token.
for x in corpus_bow_flat:
# Update the proper element in token_count
# scode: <FILL IN>
token_count[x[0]] += x[1]
# Sort by decreasing number of occurences
ids_sorted = np.argsort(- token_count)
tf_sorted = token_count[ids_sorted]
Explanation: Before starting with the semantic analyisis, it is interesting to observe the token distribution for the given corpus.
End of explanation
print D[ids_sorted[0]]
Explanation: ids_sorted is a list of all token ids, sorted by decreasing number of occurrences in the whole corpus. For instance, the most frequent term is
End of explanation
print "{0} times in the whole corpus".format(tf_sorted[0])
Explanation: which appears
End of explanation
# SORTED TOKEN FREQUENCIES (II):
plt.rcdefaults()
# Example data
n_bins = 25
hot_tokens = [D[i] for i in ids_sorted[n_bins-1::-1]]
y_pos = np.arange(len(hot_tokens))
z = tf_sorted[n_bins-1::-1]/n_art
plt.barh(y_pos, z, align='center', alpha=0.4)
plt.yticks(y_pos, hot_tokens)
plt.xlabel('Average number of occurrences per article')
plt.title('Token distribution')
plt.show()
# SORTED TOKEN FREQUENCIES:
# Example data
plt.semilogy(tf_sorted)
plt.xlabel('Average number of occurrences per article')
plt.title('Token distribution')
plt.show()
Explanation: In the following we plot the most frequent terms in the corpus.
End of explanation
# scode: <WRITE YOUR CODE HERE>
# Example data
cold_tokens = [D[i] for i in ids_sorted if token_count[i] == 1]  # tokens that occur exactly once in the corpus
print "There are {0} cold tokens, which represent {1}% of the total number of tokens in the dictionary".format(
len(cold_tokens), float(len(cold_tokens))/n_tokens*100)
Explanation: Exercise: There are usually many tokens that appear with very low frequency in the corpus. Count the number of tokens appearing only once, and what is the proportion of them in the token list.
End of explanation
# scode: <WRITE YOUR CODE HERE>
# SORTED TOKEN FREQUENCIES (I):
# Count the number of occurrences of each token.
token_count2 = np.zeros(n_tokens)
for x in corpus_bow_flat:
token_count2[x[0]] += (x[1]>0)
# Sort by decreasing number of occurences
ids_sorted2 = np.argsort(- token_count2)
tf_sorted2 = token_count2[ids_sorted2]
# SORTED TOKEN FREQUENCIES (II):
# Example data
n_bins = 25
hot_tokens2 = [D[i] for i in ids_sorted2[n_bins-1::-1]]
y_pos2 = np.arange(len(hot_tokens2))
z2 = tf_sorted2[n_bins-1::-1]/n_art
plt.barh(y_pos2, z2, align='center', alpha=0.4)
plt.yticks(y_pos2, hot_tokens2)
plt.xlabel('Average number of occurrences per article')
plt.title('Token distribution')
plt.show()
Explanation: Exercise: Represent graphically those 20 tokens that appear in the highest number of articles. Note that you can use the code above (headed by # SORTED TOKEN FREQUENCIES) with a very minor modification.
End of explanation
tfidf = gensim.models.TfidfModel(corpus_bow)
Explanation: 3. Semantic Analysis
The dictionary D and the Bag of Words in corpus_bow are the key inputs to the topic model algorithms. In this section we will explore two algorithms:
Latent Semantic Indexing (LSI)
Latent Dirichlet Allocation (LDA)
The topic model algorithms in gensim assume that input documents are parameterized using the tf-idf model. This can be done using
End of explanation
doc_bow = [(0, 1), (1, 1)]
tfidf[doc_bow]
Explanation: From now on, tfidf can be used to convert any vector from the old representation (bow integer counts) to the new one (TfIdf real-valued weights):
End of explanation
corpus_tfidf = tfidf[corpus_bow]
print corpus_tfidf[0][0:5]
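As a small illustrative check (not in the original notebook), we can sort one document's tf-idf vector to see its most heavily weighted tokens:
doc0_tfidf = tfidf[corpus_bow[0]]
top5 = sorted(doc0_tfidf, key=lambda x: -x[1])[:5]
print [(D[token_id], round(weight, 3)) for token_id, weight in top5]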
Explanation: Or to apply a transformation to a whole corpus
End of explanation
# Initialize an LSI transformation
n_topics = 5
# scode: lsi = <FILL IN>
lsi = gensim.models.LsiModel(corpus_tfidf, id2word=D, num_topics=n_topics)
Explanation: 3.1. Latent Semantic Indexing (LSI)
Now we are ready to apply a topic modeling algorithm. Latent Semantic Indexing is provided by LsiModel.
Task: Generate an LSI model with 5 topics for corpus_tfidf and dictionary D. You can check the syntax for gensim.models.LsiModel.
End of explanation
lsi.show_topics(num_topics=-1, num_words=10, log=False, formatted=True)
Explanation: From LSI, we can check both the topic-tokens matrix and the document-topics matrix.
Now we can check the topics generated by LSI. An intuitive visualization is provided by the show_topics method.
End of explanation
# SORTED TOKEN FREQUENCIES (II):
plt.rcdefaults()
n_bins = 25
# Example data
y_pos = range(n_bins-1, -1, -1)
pylab.rcParams['figure.figsize'] = 16, 8 # Set figure size
for i in range(n_topics):
### Plot top 25 tokens for topic i
# Read i-thtopic
# scode: <FILL IN>
topic_i = lsi.show_topic(i, topn=n_bins)
tokens = [t[0] for t in topic_i]
weights = [t[1] for t in topic_i]
# Plot
# scode: <FILL IN>
plt.subplot(1, n_topics, i+1)
plt.barh(y_pos, weights, align='center', alpha=0.4)
plt.yticks(y_pos, tokens)
plt.xlabel('Top {0} topic weights'.format(n_bins))
plt.title('Topic {0}'.format(i))
plt.show()
Explanation: However, a more useful representation of topics is as a list of tuples (token, value). This is provided by the show_topic method.
Task: Represent the columns of the topic-token matrix as a series of bar diagrams (one per topic) with the top 25 tokens of each topic.
End of explanation
# On real corpora, target dimensionality of
# 200–500 is recommended as a “golden standard”
# Create a double wrapper over the original
# corpus bow tfidf fold-in-lsi
corpus_lsi = lsi[corpus_tfidf]
print corpus_lsi[0]
Explanation: LSI approximates any document as a linear combination of the topic vectors. We can compute the topic weights for any input corpus entered as input to the lsi model.
End of explanation
# Extract weights from corpus_lsi
# scode weight0 = <FILL IN>
weight0 = [doc[0][1] if doc != [] else -np.inf for doc in corpus_lsi]
# Locate the maximum positive weight
nmax = np.argmax(weight0)
print nmax
print weight0[nmax]
print corpus_lsi[nmax]
# Get topic 0
# scode: topic_0 = <FILL IN>
topic_0 = lsi.show_topic(0, topn=n_bins)
# Compute a list of tuples (token, wordcount) for all tokens in topic_0, where wordcount is the number of
# occurences of the token in the article.
# scode: token_counts = <FILL IN>
token_counts = [(t[0], corpus_clean[nmax].count(t[0])) for t in topic_0]
print "Topic 0 is:"
print topic_0
print "Token counts:"
print token_counts
Explanation: Task: Find the document with the largest positive weight for topic 0. Compare the document and the topic.
End of explanation
ldag = gensim.models.ldamodel.LdaModel(
corpus=corpus_tfidf, id2word=D, num_topics=10, update_every=1, passes=10)
ldag.print_topics()
Explanation: 3.2. Latent Dirichlet Allocation (LDA)
There are several implementations of the LDA topic model in python:
Python library lda.
Gensim module: gensim.models.ldamodel.LdaModel
Sci-kit Learn module: sklearn.decomposition
3.2.1. LDA using Gensim
The use of the LDA module in gensim is similar to LSI. Furthermore, it assumes that a tf-idf parametrization is used as an input, which is not in complete agreement with the theoretical model, which assumes documents represented as vectors of token-counts.
To use LDA in gensim, we must first create a lda model object.
End of explanation
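Since the theoretical LDA model works on raw token counts rather than tf-idf weights, a hedged variation is to train on corpus_bow directly and then inspect a document's topic mixture (sketch only; the parameters are illustrative):
ldag_bow = gensim.models.ldamodel.LdaModel(
    corpus=corpus_bow, id2word=D, num_topics=10, passes=10)
print ldag_bow[corpus_bow[0]]   # list of (topic, weight) pairs for document 0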
# For testing LDA, you can use the reuters dataset
# X = lda.datasets.load_reuters()
# vocab = lda.datasets.load_reuters_vocab()
# titles = lda.datasets.load_reuters_titles()
X = np.int32(np.zeros((n_art, n_tokens)))
for n, art in enumerate(corpus_bow):
for t in art:
X[n, t[0]] = t[1]
print X.shape
print X.sum()
vocab = D.values()
titles = corpus_titles
# Default parameters:
# model = lda.LDA(n_topics, n_iter=2000, alpha=0.1, eta=0.01, random_state=None, refresh=10)
model = lda.LDA(n_topics=10, n_iter=1500, random_state=1)
model.fit(X) # model.fit_transform(X) is also available
topic_word = model.topic_word_ # model.components_ also works
# Show topics...
n_top_words = 8
for i, topic_dist in enumerate(topic_word):
topic_words = np.array(vocab)[np.argsort(topic_dist)][:-(n_top_words+1):-1]
print('Topic {}: {}'.format(i, ' '.join(topic_words)))
Explanation: 3.2.2. LDA using python lda library
An alternative to gensim for LDA is the lda library from python. It requires a doc-frequency matrix as input
End of explanation
doc_topic = model.doc_topic_
for i in range(10):
print("{} (top topic: {})".format(titles[i], doc_topic[i].argmax()))
# This is to apply the model to a new doc(s)
# doc_topic_test = model.transform(X_test)
# for title, topics in zip(titles_test, doc_topic_test):
# print("{} (top topic: {})".format(title, topics.argmax()))
Explanation: Document-topic distribution
End of explanation
# Adapted from an example in sklearn site
# http://scikit-learn.org/dev/auto_examples/applications/topics_extraction_with_nmf_lda.html
# You can try also with the dataset provided by sklearn in
# from sklearn.datasets import fetch_20newsgroups
# dataset = fetch_20newsgroups(shuffle=True, random_state=1,
# remove=('headers', 'footers', 'quotes'))
def print_top_words(model, feature_names, n_top_words):
for topic_idx, topic in enumerate(model.components_):
print("Topic #%d:" % topic_idx)
print(" ".join([feature_names[i]
for i in topic.argsort()[:-n_top_words - 1:-1]]))
print()
Explanation: It allows incremental updates
3.2.3. LDA using Sci-kit Learn
The input matrix to the sklearn implementation of LDA contains the token-counts for all documents in the corpus.
sklearn contains a powerful CountVectorizer method that can be used to construct the input matrix from the corpus_bow.
First, we will define an auxiliary function to print the top tokens in the model, that has been taken from the sklearn documentation.
End of explanation
print("Loading dataset...")
# scode: data_samples = <FILL IN>
print "*".join(['Esto', 'es', 'un', 'ejemplo'])
data_samples = [" ".join(c) for c in corpus_clean]
print 'Document 0:'
print data_samples[0][0:200], '...'
Explanation: Now, we need a dataset to feed the Count_Vectorizer object, by joining all tokens in corpus_clean in a single string, using a space ' ' as separator.
End of explanation
# Use tf (raw term count) features for LDA.
print("Extracting tf features for LDA...")
n_features = 1000
n_samples = 2000
tf_vectorizer = CountVectorizer(max_df=0.95, min_df=2,
max_features=n_features,
stop_words='english')
t0 = time()
tf = tf_vectorizer.fit_transform(data_samples)
print("done in %0.3fs." % (time() - t0))
print tf[0][0][0]
Explanation: Now we are ready to compute the token counts.
End of explanation
print("Fitting LDA models with tf features, "
"n_samples=%d and n_features=%d..."
% (n_samples, n_features))
# scode: lda = <FILL IN>
lda = LatentDirichletAllocation(n_topics=n_topics, max_iter=10,
learning_method='online', learning_offset=50., random_state=0)
# doc_topic_prior= 1.0/n_topics, topic_word_prior= 1.0/n_topics)
Explanation: Now we can apply the LDA algorithm.
Task: Create an LDA object with the following parameters:
n_topics=n_topics, max_iter=5,
learning_method='online',
learning_offset=50.,
random_state=0
End of explanation
t0 = time()
corpus_lda = lda.fit_transform(tf)
print corpus_lda[10]/np.sum(corpus_lda[10])
print("done in %0.3fs." % (time() - t0))
print corpus_titles[10]
# print corpus_text[10]
print("\nTopics in LDA model:")
tf_feature_names = tf_vectorizer.get_feature_names()
print_top_words(lda, tf_feature_names, 20)
topics = lda.components_
topic_probs = [t/np.sum(t) for t in topics]
# print topic_probs[0]
print -np.sort(-topic_probs[0])
Explanation: Task: Fit model lda with the token frequencies computed by tf_vectorizer.
End of explanation
# SORTED TOKEN FREQUENCIES (II):
plt.rcdefaults()
n_bins = 50
# Example data
y_pos = range(n_bins-1, -1, -1)
pylab.rcParams['figure.figsize'] = 16, 8 # Set figure size
for i in range(n_topics):
### Plot top 25 tokens for topic i
# Read i-thtopic
# scode: <FILL IN>
topic_i = topic_probs[i]
rank = np.argsort(- topic_i)[0:n_bins]
tokens = [tf_feature_names[r] for r in rank]
weights = [topic_i[r] for r in rank]
# Plot
# scode: <FILL IN>
plt.subplot(1, n_topics, i+1)
plt.barh(y_pos, weights, align='center', alpha=0.4)
plt.yticks(y_pos, tokens)
plt.xlabel('Top {0} topic weights'.format(n_bins))
plt.title('Topic {0}'.format(i))
plt.show()
Explanation: Exercise: Represent graphically the topic distributions for the top 25 tokens with highest probability for each topic.
End of explanation |
5,515 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Welcome
This notebook accompanies the Sunoikisis Digital Classics common session on Named Entity Extraction, see https
Step1: And more precisely, we are using the following versions
Step2: Let's grab some text
To start with, we need some text from which we'll try to extract named entities using various methods and libraries.
There are several ways of doing this e.g.
Step3: With this information, we can query a CTS API and get some information about this text.
For example, we can "discover" its canonical text structure, an essential information to be able to cite this text.
Step4: But we can also query the same API and get back the text of a specific text section, for example the entire book 1.
To do so, we need to append the indication of the reference scope (i.e. book 1) to the URN.
Step5: So we retrieve the first book of the De Bello Gallico by passing its CTS URN (that we just stored in the variable my_passage) to the CTS API, via the resolver provided by MyCapytains
Step6: At this point the passage is available in various formats
Step7: Let's check that the text is there by printing the content of the variable de_bello_gallico_book1 where we stored it
Step8: The text that we have just fetched by using a programming interface (API) can also be viewed in the browser.
Or even imported as an iframe into this notebook!
Step9: Let's see how many words (tokens, more properly) there are in Caesar's De Bello Gallico I
Step10: Very simple baseline
Now let's write what in NLP jargon is called a baseline, that is a method for extracting named entities that can serve as a term of comparison to evaluate the accuracy of other methods.
Baseline method
Step11: Let's have a look at the first 50 tokens that we just tagged
Step13: For convenience we can also wrap our baseline code into a function that we call extract_baseline. Let's define it
Step14: And now we can call it like this
Step16: We can modify slightly our function so that it prints the snippet of text where an entity is found
Step17: NER with CLTK
The CLTK library has some basic support for the extraction of named entities from Latin and Greek texts (see CLTK's documentation).
The current implementation (as of version 0.1.47) uses a lookup-based method.
For each token in a text, the tagger checks whether that token is contained within a predefined list of possible named entities
Step18: Let's have a look at the output, only the first 10 tokens (by using the list slicing notation)
Step19: The output looks slightly different from the one of our baseline function (the size of the tuples in the list varies).
But we can write a function to fix this, we call it reshape_cltk_output
Step20: We apply this function to CLTK's output
Step21: And the resulting output looks now ok
Step22: Now let's compare the two list of tagged tokens by using a python function called zip, which allows us to read multiple lists simultaneously
Step23: But, as you can see, the two lists are not aligned.
This is due to how the CLTK function tokenises the text. The comma after "tres" becomes a token on its own, whereas when we tokenise by white space the comma is attached to "tres" (i.e. "tres,").
A solution to this is to pass to the tag_ner function the text already tokenised by text.
Step24: NER with NLTK
Step25: Let's have a look at the output
Step26: Wrap up
At this point we can "compare" the output of the three different methods we used, again by using the zip function.
Step27: Excercise
Extract the named entities from the English translation of the De Bello Gallico book 1.
The CTS URN for this translation is urn | Python Code:
########
# NLTK #
########
import nltk
from nltk.tag import StanfordNERTagger
########
# CLTK #
########
import cltk
from cltk.tag.ner import tag_ner
##############
# MyCapytain #
##############
import MyCapytain
from MyCapytain.resolvers.cts.api import HttpCTSResolver
from MyCapytain.retrievers.cts5 import CTS
from MyCapytain.common.constants import Mimetypes
#################
# other imports #
#################
import sys
sys.path.append("/opt/nlp/pymodules/")
from idai_journals.nlp import sub_leaves
Explanation: Welcome
This notebook accompanies the Sunoikisis Digital Classics common session on Named Entity Extraction, see https://github.com/SunoikisisDC/SunoikisisDC-2016-2017/wiki/Named-Entity-Extraction-I.
In this notebook we are going to experiment with three different methods for extracting named entities from a Latin text.
Library imports
External modules and libraries can be imported using import statements.
Let's import the Natural Language ToolKit (NLTK), the Classical Language ToolKit (CLTK), MyCapytain, and some local libraries that are used in this notebook.
End of explanation
print(nltk.__version__)
print(cltk.__version__)
print(MyCapytain.__version__)
Explanation: And more precisely, we are using the following versions:
End of explanation
my_passage = "urn:cts:latinLit:phi0448.phi001.perseus-lat2:1.1.1"
Explanation: Let's grab some text
To start with, we need some text from which we'll try to extract named entities using various methods and libraries.
There are several ways of doing this e.g.:
1. copy and paste the text from Perseus or the Latin Library into a text document, and read it into a variable
2. load a text from one of the Latin corpora available via cltk (cfr. this blog post)
3. or load it from Perseus by leveraging its Canonical Text Services API
Let's go for #3 :)
What's CTS?
CTS URNs stand for Canonical Text Service Uniform Resource Names.
You can think of a CTS URN like a social security number for texts (or parts of texts).
Here are some examples of CTS URNs with different levels of granularity:
- urn:cts:latinLit:phi0448 (Caesar)
- urn:cts:latinLit:phi0448.phi001 (Caesar's De Bello Gallico)
- urn:cts:latinLit:phi0448.phi001.perseus-lat2 DBG Latin edition
- urn:cts:latinLit:phi0448.phi001.perseus-lat2:1 DBG Latin edition, book 1
- urn:cts:latinLit:phi0448.phi001.perseus-lat2:1.1.1 DBG Latin edition, book 1, chapter 1, section 1
How do I find out the CTS URN of a given author or text? The Perseus Catalog is your friend! (crf. e.g. http://catalog.perseus.org/catalog/urn:cts:latinLit:phi0448)
Querying a CTS API
The URN of the Latin edition of Caesar's De Bello Gallico is urn:cts:latinLit:phi0448.phi001.perseus-lat2.
End of explanation
# We set up a resolver which communicates with an API available in Leipzig
resolver = HttpCTSResolver(CTS("http://cts.dh.uni-leipzig.de/api/cts/"))
# We require some metadata information
textMetadata = resolver.getMetadata("urn:cts:latinLit:phi0448.phi001.perseus-lat2")
# Texts in CTS Metadata have one interesting property : its citation scheme.
# Citation are embedded objects that carries information about how a text can be quoted, what depth it has
print([citation.name for citation in textMetadata.citation])
Explanation: With this information, we can query a CTS API and get some information about this text.
For example, we can "discover" its canonical text structure, an essential information to be able to cite this text.
End of explanation
my_passage = "urn:cts:latinLit:phi0448.phi001.perseus-lat2:1"
my_passage_en = "urn:cts:latinLit:phi0448.phi001.perseus-eng2:1"
Explanation: But we can also query the same API and get back the text of a specific text section, for example the entire book 1.
To do so, we need to append the indication of the reference scope (i.e. book 1) to the URN.
End of explanation
passage = resolver.getTextualNode(my_passage)
passage_en = resolver.getTextualNode(my_passage_en)
Explanation: So we retrieve the first book of the De Bello Gallico by passing its CTS URN (that we just stored in the variable my_passage) to the CTS API, via the resolver provided by MyCapytains:
End of explanation
de_bello_gallico_book1 = passage.export(Mimetypes.PLAINTEXT)
de_bello_gallico_en_book1 = passage_en.export(Mimetypes.PLAINTEXT)
Explanation: At this point the passage is available in various formats: text, but also TEI XML, etc.
Thus, we need to specify that we are interested in getting the text only:
End of explanation
print(de_bello_gallico_en_book1)
Explanation: Let's check that the text is there by printing the content of the variable de_bello_gallico_book1 where we stored it:
End of explanation
from IPython.display import IFrame
IFrame('http://cts.dh.uni-leipzig.de/read/latinLit/phi0448/phi001/perseus-lat2/1', width=1000, height=350)
Explanation: The text that we have just fetched by using a programming interface (API) can also be viewed in the browser.
Or even imported as an iframe into this notebook!
End of explanation
len(de_bello_gallico_en_book1.split(" "))
Explanation: Let's see how many words (tokens, more properly) there are in Caesar's De Bello Gallico I:
End of explanation
"T".istitle()
"t".istitle()
# we need a list to store the tagged tokens
tagged_tokens = []
# tokenisation is done by using the string method `split(" ")`
# that splits a string upon white spaces
for n, token in enumerate(de_bello_gallico_en_book1.split(" ")):
if(token.istitle()):
tagged_tokens.append((token, "Entity"))
else:
tagged_tokens.append((token, "O"))
Explanation: Very simple baseline
Now let's write what in NLP jargon is called a baseline, that is a method for extracting named entities that can serve as a term of comparison to evaluate the accuracy of other methods.
Baseline method:
- cycle through each token of the text
- if the token starts with a capital letter it's a named entity (only one type, i.e. Entity)
End of explanation
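A quick way to summarise what the baseline found (an illustrative snippet, not part of the original notebook):
n_entities = sum(1 for token, tag in tagged_tokens if tag == "Entity")
print("{0} of {1} tokens were tagged as Entity".format(n_entities, len(tagged_tokens)))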
tagged_tokens[:50]
Explanation: Let's have a look at the first 50 tokens that we just tagged:
End of explanation
def extract_baseline(input_text):
"""
:param input_text: the text to tag (string)
:return: a list of tuples, where tuple[0] is the token and tuple[1] is the named entity tag
"""
# we need a list to store the tagged tokens
tagged_tokens = []
# tokenisation is done by using the string method `split(" ")`
# that splits a string upon white spaces
for n, token in enumerate(input_text.split(" ")):
if(token.istitle()):
tagged_tokens.append((token, "Entity"))
else:
tagged_tokens.append((token, "O"))
return tagged_tokens
Explanation: For convenience we can also wrap our baseline code into a function that we call extract_baseline. Let's define it:
End of explanation
tagged_tokens_baseline = extract_baseline(de_bello_gallico_book1)
tagged_tokens_baseline[-50:]
Explanation: And now we can call it like this:
End of explanation
def extract_baseline(input_text):
"""
:param input_text: the text to tag (string)
:return: a list of tuples, where tuple[0] is the token and tuple[1] is the named entity tag
"""
# we need a list to store the tagged tokens
tagged_tokens = []
# tokenisation is done by using the string method `split(" ")`
# that splits a string upon white spaces
for n, token in enumerate(input_text.split(" ")):
if(token.istitle()):
tagged_tokens.append((token, "Entity"))
context = input_text.split(" ")[n-5:n+5]
print("Found entity \"%s\" in context \"%s\""%(token, " ".join(context)))
else:
tagged_tokens.append((token, "O"))
return tagged_tokens
tagged_text_baseline = extract_baseline(de_bello_gallico_book1)
tagged_text_baseline[:50]
Explanation: We can modify slightly our function so that it prints the snippet of text where an entity is found:
End of explanation
%%time
tagged_text_cltk = tag_ner('latin', input_text=de_bello_gallico_book1)
Explanation: NER with CLTK
The CLTK library has some basic support for the extraction of named entities from Latin and Greek texts (see CLTK's documentation).
The current implementation (as of version 0.1.47) uses a lookup-based method.
For each token in a text, the tagger checks whether that token is contained within a predefined list of possible named entities:
- list of Latin proper nouns: https://github.com/cltk/latin_proper_names_cltk
- list of Greek proper nouns: https://github.com/cltk/greek_proper_names_cltk
Let's run CLTK's tagger (it takes a moment):
End of explanation
tagged_text_cltk[:10]
Explanation: Let's have a look at the output, only the first 10 tokens (by using the list slicing notation):
End of explanation
def reshape_cltk_output(tagged_tokens):
reshaped_output = []
for tagged_token in tagged_tokens:
if(len(tagged_token)==1):
reshaped_output.append((tagged_token[0], "O"))
else:
reshaped_output.append((tagged_token[0], tagged_token[1]))
return reshaped_output
Explanation: The output looks slightly different from the one of our baseline function (the size of the tuples in the list varies).
But we can write a function to fix this, we call it reshape_cltk_output:
End of explanation
tagged_text_cltk_reshaped = reshape_cltk_output(tagged_text_cltk)
Explanation: We apply this function to CLTK's output:
End of explanation
tagged_text_cltk[:20]
Explanation: And the resulting output looks now ok:
End of explanation
list(zip(tagged_text_baseline[:20], tagged_text_cltk_reshaped[:20]))
Explanation: Now let's compare the two list of tagged tokens by using a python function called zip, which allows us to read multiple lists simultaneously:
End of explanation
tagged_text_cltk = reshape_cltk_output(tag_ner('latin', input_text=de_bello_gallico_book1.split(" ")))
list(zip(tagged_text_baseline[:20], tagged_text_cltk[:20]))
Explanation: But, as you can see, the two lists are not aligned.
This is due to how the CLTK function tokenises the text. The comma after "tres" becomes a token on its own, whereas when we tokenise by white space the comma is attached to "tres" (i.e. "tres,").
A solution to this is to pass the text to the tag_ner function already tokenised (here, split on white space).
End of explanation
stanford_model_italian = "/opt/nlp/stanford-tools/stanford-ner-2015-12-09/classifiers/ner-ita-nogpe-noiob_gaz_wikipedia_sloppy.ser.gz"
stanford_model_english = "/opt/nlp/stanford-tools/stanford-ner-2015-12-09/classifiers/english.muc.7class.distsim.crf.ser.gz"
ner_tagger = StanfordNERTagger(stanford_model_italian)
ner_tagger = StanfordNERTagger(stanford_model_english)
tagged_text_nltk = ner_tagger.tag(de_bello_gallico_en_book1.split(" "))
Explanation: NER with NLTK
End of explanation
tagged_text_nltk[:100]
Explanation: Let's have a look at the output
End of explanation
list(zip(tagged_text_baseline[:20], tagged_text_cltk[:20], tagged_text_nltk[:20]))
for baseline_out, cltk_out, nltk_out in zip(tagged_text_baseline[:20], tagged_text_cltk[:20], tagged_text_nltk[:20]):
print("Baseline: %s\nCLTK: %s\nNLTK: %s\n"%(baseline_out, cltk_out, nltk_out))
Explanation: Wrap up
At this point we can "compare" the output of the three different methods we used, again by using the zip function.
End of explanation
stanford_model_english = "/opt/nlp/stanford-tools/stanford-ner-2015-12-09/classifiers/english.muc.7class.distsim.crf.ser.gz"
Explanation: Excercise
Extract the named entities from the English translation of the De Bello Gallico book 1.
The CTS URN for this translation is urn:cts:latinLit:phi0448.phi001.perseus-eng2:1.
Modify the code above to use the English model of the Stanford tagger instead of the italian one.
Hint:
End of explanation |
5,516 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Calculating Transit Timing Variations (TTV) with REBOUND
The following code finds the transit times in a two planet system. The transit times of the inner planet are not exactly periodic, due to planet-planet interactions.
First, let's import the REBOUND and numpy packages.
Step1: Let's set up a coplanar two planet system.
Step2: We're now going to integrate the system forward in time. We assume the observer of the system is in the direction of the positive x-axis. We want to measure the time when the inner planet transits. In this geometry, this happens when the y coordinate of the planet changes sign. Whenever we detect a change in sign between two steps, we try to find the transit time, which must lie somewhere within the last step, by bisection.
Step3: Next, we do a linear least square fit to remove the linear trend from the transit times, thus leaving us with the transit time variations.
Step4: Finally, let us plot the TTVs. | Python Code:
import rebound
import numpy as np
Explanation: Calculating Transit Timing Variations (TTV) with REBOUND
The following code finds the transit times in a two planet system. The transit times of the inner planet are not exactly periodic, due to planet-planet interactions.
First, let's import the REBOUND and numpy packages.
End of explanation
sim = rebound.Simulation()
sim.add(m=1)
sim.add(m=1e-5, a=1,e=0.1,omega=0.25)
sim.add(m=1e-5, a=1.757)
sim.move_to_com()
Explanation: Let's set up a coplanar two planet system.
End of explanation
N=174
transittimes = np.zeros(N)
p = sim.particles
i = 0
while i<N:
y_old = p[1].y - p[0].y # (Thanks to David Martin for pointing out a bug in this line!)
t_old = sim.t
sim.integrate(sim.t+0.5) # check for transits every 0.5 time units. Note that 0.5 is shorter than one orbit
t_new = sim.t
if y_old*(p[1].y-p[0].y)<0. and p[1].x-p[0].x>0.: # sign changed (y_old*y<0), planet in front of star (x>0)
while t_new-t_old>1e-7: # bisect until a precision of 1e-7 is reached
if y_old*(p[1].y-p[0].y)<0.:
t_new = sim.t
else:
t_old = sim.t
sim.integrate( (t_new+t_old)/2.)
transittimes[i] = sim.t
i += 1
sim.integrate(sim.t+0.05) # integrate 0.05 to be past the transit
Explanation: We're now going to integrate the system forward in time. We assume the observer of the system is in the direction of the positive x-axis. We want to measure the time when the inner planet transits. In this geometry, this happens when the y coordinate of the planet changes sign. Whenever we detect a change in sign between two steps, we try to find the transit time, which must lie somewhere within the last step, by bisection.
End of explanation
A = np.vstack([np.ones(N), range(N)]).T
c, m = np.linalg.lstsq(A, transittimes)[0]
Explanation: Next, we do a linear least square fit to remove the linear trend from the transit times, thus leaving us with the transit time variations.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(10,5))
ax = plt.subplot(111)
ax.set_xlim([0,N])
ax.set_xlabel("Transit number")
ax.set_ylabel("TTV [hours]")
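# Added note on units: with G = 1, M = 1 and a = 1 the inner orbit's period is 2*pi code
# time units; treating that orbit as one year, one code time unit = 365*24/(2*pi) hours,
# which is the conversion factor applied below (an interpretive note, not from the original).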
plt.scatter(range(N), (transittimes-m*np.array(range(N))-c)*(24.*365./2./np.pi));
Explanation: Finally, let us plot the TTVs.
End of explanation |
5,517 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Integration Exercise 1
Imports
Step2: Trapezoidal rule
The trapezoidal rule generates a numerical approximation to the 1d integral
Step3: Now use scipy.integrate.quad to integrate the f and g functions and see how the result compares with your trapz function. Print the results and errors. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy import integrate
Explanation: Integration Exercise 1
Imports
End of explanation
def trapz(f, a, b, N):
"""Integrate the function f(x) over the range [a,b] with N points."""
pts = np.linspace(a, b, N + 1)
vals = f(pts)
h = (b - a) / (1.0 * N)
area = .5 * h * sum(vals[0:N] + vals[1:(N+1)])
return area
#raise NotImplementedError()
f = lambda x: x**2
g = lambda x: np.sin(x)
I = trapz(f, 0, 1, 1000)
assert np.allclose(I, 0.33333349999999995)
J = trapz(g, 0, np.pi, 1000)
assert np.allclose(J, 1.9999983550656628)
Explanation: Trapezoidal rule
The trapezoidal rule generates a numerical approximation to the 1d integral:
$$ I(a,b) = \int_a^b f(x) dx $$
by dividing the interval $[a,b]$ into $N$ subdivisions of length $h$:
$$ h = (b-a)/N $$
Note that this means the function will be evaluated at $N+1$ points on $[a,b]$. The main idea of the trapezoidal rule is that the function is approximated by a straight line between each of these points.
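Summing the areas of the $N$ trapezoids gives the composite rule that the trapz function above computes (writing $x_k = a + kh$):
$$ I(a,b) \approx h\left[\tfrac{1}{2}f(x_0) + f(x_1) + \cdots + f(x_{N-1}) + \tfrac{1}{2}f(x_N)\right] $$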
Write a function trapz(f, a, b, N) that performs trapezoidal rule on the function f over the interval $[a,b]$ with N subdivisions (N+1 points).
End of explanation
def compare(f, a, b, N):
trapint = trapz(f, a, b, N)
quadint = integrate.quad(f, a, b)[0]
print("Trapezoid Rule: %f" % trapint)
print("Scipy: %f" % quadint)
print("Error: %f" % (quadint - trapint))
compare(f, 0, 1, 1000)
compare(g, 0, 1, 1000)
#raise NotImplementedError()
assert True # leave this cell to grade the previous one
Explanation: Now use scipy.integrate.quad to integrate the f and g functions and see how the result compares with your trapz function. Print the results and errors.
End of explanation |
5,518 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
Step1: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
Step2: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all this with your model.
Step3: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
Step4: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
Step5: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters
Step8: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
Step9: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
Step10: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly. | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
rides[:24*10].plot(x='dteday', y='cnt')
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all this with your model.
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
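# Added side note: to convert a scaled prediction back into real ride counts later,
# reverse the transformation with the stored factors, e.g.
#   mean, std = scaled_features['cnt']
#   counts = scaled_prediction * std + mean
# (the final plotting cell of this notebook does exactly this)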
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
Explanation: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
#
# Note: in Python, you can define a function with a lambda expression,
# as shown below.
self.activation_function = lambda x : 1.0 / (1.0 + np.exp(-x)) # Replace 0 with your sigmoid calculation.
### If the lambda code above is not something you're familiar with,
# You can uncomment out the following three lines and put your
# implementation there instead.
#
#def sigmoid(x):
# return 0 # Replace 0 with your sigmoid calculation here
#self.activation_function = sigmoid
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer - Replace these values with your calculations.
hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with your calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error - Replace this value with your calculations.
error = y - final_outputs # Output layer error is the difference between desired target and actual output.
output_error_term = error
# TODO: Calculate the hidden layer's contribution to the error
hidden_error = np.dot(self.weights_hidden_to_output, output_error_term)
# TODO: Backpropagated error terms - Replace these values with your calculations.
hidden_error_term = hidden_error * hidden_outputs * (np.ones(self.hidden_nodes) - hidden_outputs)
# Weight step (input to hidden)
delta_weights_i_h += hidden_error_term * X[:, None]
# Weight step (hidden to output)
delta_weights_h_o += output_error_term * hidden_outputs[:, None]
# TODO: Update the weights - Replace these values with your calculations.
self.weights_hidden_to_output += self.lr * (delta_weights_h_o / n_records) # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += self.lr * (delta_weights_i_h / n_records) # update input-to-hidden weights with gradient descent step
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
#### Implement the forward pass here ####
# TODO: Hidden layer - replace these values with the appropriate calculations.
hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with the appropriate calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
#print("final_outputs: ", final_outputs)
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
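For reference, since $f(x) = x$ the derivative is simply $f'(x) = 1$, which is why the output error term in the completed code above is just the raw error with no extra multiplicative factor.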
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
Explanation: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
End of explanation
import sys
### Set the hyperparameters here ###
iterations = 3500
learning_rate = 0.5
hidden_nodes = 25
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
End of explanation
fig, ax = plt.subplots(figsize=(20,10))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.plot(predictions[0], '--',label='Prediction')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation |
5,519 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
- Iterators are easy to understand
Step1: Python Generators
Step2: Generators are a simple and powerful tool for creating iterators.
Each iteration is computed on demand
In general terms they are more efficient than list comprehension or loops
If not the whole sequence is traversed
When looking for a certain element
When an exception is raised
So they save computing power and memory
Used to operate with I/O, with big amounts of data (e.g. DB queries)...
yield
Step3: yield makes a function a generator
the function only executes on next (easier than implementing iteration yourself)
it produces a value and suspends the execution of the function
Step4: using generators to build a pipeline as unix (tail + grep)
Step5: Coroutines
Using yield this way we get a coroutine
the function not only returns values, it can also consume values that we send
Step6: Sent values are returned in (yield)
Execution as a generator function
coroutines respond to next and send
Step7: generators produce values and coroutines mostly consume them
DO NOT mix the two concepts, to avoid blowing your mind
Coroutines are not for iterating
Step8: What has happened here?
chain coroutines together and push data through the pipe using send()
you need a source that normally is not a coroutine
you will also need pipeline sinks (end-points) that consume and process the data
don't mix the concepts too much
let's go back to the tail -f and grep example
our source is tail -f
Step9: coroutines add routing
complex arrrangment of pipes, branches, merging... | Python Code:
spam = [0, 1, 2, 3, 4]
for item in spam:
print item
else:
print "Looped whole list"
# What is really happening here?
it = iter(spam) # Obtain an iterator
try:
item = it.next() # Retrieve first item through the iterator
while True:
# Body of the for loop goes here
print item
item = it.next() # Retrieve next item through the iterator
except StopIteration: # Capture iterator exception
# Body of the else clause goes here
print "Looped whole list"
# Another example
spam = "spam"
it = iter(spam)
print it.next()
print it.next()
print it.next()
print it.next()
# Once the StopIteration is raised an iterator is useless, there is no 'restart'
print it.next()
Explanation: - Iterators are easy to understand
End of explanation
# expression generator
spam = [0, 1, 2, 3, 4]
fooo = (2 ** s for s in spam) # Syntax similar to list comprehension but between parentheses
print fooo
print fooo.next()
print fooo.next()
print fooo.next()
print fooo.next()
print fooo.next()
# Generator is exhausted
print fooo.next()
Explanation: Python Generators
End of explanation
def countdown(n):
while n > 0:
yield n
n -= 1
gen_5 = countdown(5)
gen_5
# where is the sequence?
print gen_5.next()
print gen_5.next()
print gen_5.next()
print gen_5.next()
print gen_5.next()
gen_5.next()
for i in countdown(5):
print i,
Explanation: Generators are a simple and powerful tool for creating iterators.
Each iteration is computed on demand
In general terms they are more efficient than list comprehension or loops
If the whole sequence is not traversed
When looking for a certain element
When an exception is raised
So they save computing power and memory
Used to operate with I/O, with big amounts of data (e.g. DB queries)...
yield
End of explanation
# Let's see another example with yield tail -f and grep
import time
def follow(thefile):
thefile.seek(0, 2) # Go to the end of the file
while True:
line = thefile.readline()
if not line:
time.sleep(0.1) # Sleep briefly
continue
yield line
logfile = open("fichero.txt")
for line in follow(logfile):
print line,
# Ensure f is closed
if logfile and not logfile.closed:
logfile.close()
Explanation: yield makes a function a generator
the function only executes on next (easier than implementing iteration yourself)
it produces a value and suspends the execution of the function
End of explanation
def grep(pattern, lines):
for line in lines:
if pattern in line:
yield line
# TODO: use a generator expression
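# A generator-expression version of the same filter (an equivalent sketch addressing the
# TODO above; it would replace the grep() call in the pipeline below):
#   pylines = (line for line in loglines if "python" in line)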
# Set up a processing pipe : tail -f | grep "python"
logfile = open("fichero.txt")
loglines = follow(logfile)
pylines = grep("python", loglines)
# nothing happens until now
# Pull results out of the processing pipeline
for line in pylines:
print line,
# Ensure f is closed
if logfile and not logfile.closed:
logfile.close()
# Yield can be used as an expression too
def g_grep(pattern):
print "Looking for %s" % pattern
while True:
line = (yield)
if pattern in line:
print line,
Explanation: using generators to build a pipeline as unix (tail + grep)
End of explanation
g = g_grep("python")
g.next()
g.send("Prueba a ver si encontramos algo")
g.send("Hemos recibido python")
Explanation: Coroutines
Using yield this way we get a coroutine
the function not only returns values, it can also consume values that we send
End of explanation
# avoid the first next call -> decorator
import functools
def coroutine(func):
def wrapper(*args, **kwargs):
cr = func(*args, **kwargs)
cr.next()
return cr
return wrapper
@coroutine
def cool_grep(pattern):
print "Looking for %s" % pattern
while True:
line = (yield)
if pattern in line:
print line,
g = cool_grep("python")
# no need to call next
g.send("Let's see if we find anything")
g.send("Let's see if python is cool")
# use close to shutdown a coroutine (can run forever)
@coroutine
def last_grep(pattern):
print "Looking for %s" % pattern
try:
while True:
line = (yield)
if pattern in line:
print line,
except GeneratorExit:
print "Going away. Goodbye"
# Exceptions can be thrown inside a coroutine
g = last_grep("python")
g.send("Prueba a ver si encontramos algo")
g.send("Prueba a ver si python es cool")
g.close()
g.send("prueba a ver si python es cool")
# can send exceptions
g.throw(RuntimeError, "Throwing an exception")
Explanation: Sent values are returned in (yield)
Execution proceeds the same way as for a generator function
coroutines respond to next and send
End of explanation
def countdown_bug(n):
print "Counting down from", n
while n >= 0:
newvalue = (yield n)
# If a new value got sent in, reset n with it
if newvalue is not None:
n = newvalue
else:
n -= 1
c = countdown_bug(5)
for n in c:
print n
if n == 5:
c.send(3)
Explanation: generators produce values and coroutines mostly consume them
DO NOT mix the two concepts, to avoid blowing your mind
Coroutines are not for iterating
End of explanation
import time
def c_follow(thefile, target):
thefile.seek(0,2) # Go to the end of the file
while True:
line = thefile.readline()
if not line:
time.sleep(0.1) # Sleep briefly
else:
target.send(line)
# a sink: just print
@coroutine
def printer(name):
while True:
line = (yield)
print name + " : " + line,
# example
f = open("fichero.txt")
c_follow(f, printer("uno"))
# Ensure f is closed
if f and not f.closed:
f.close()
# Pipeline filters: grep
@coroutine
def c_grep(pattern,target):
while True:
line = (yield) # Receive a line
if pattern in line:
target.send(line)
# Send to next stage
# Exercise: tail -f "fichero.txt" | grep "python"
# do not forget the last print as sink
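# A possible solution sketch for the exercise above, built only from the coroutines
# defined in this notebook (c_follow as source, c_grep as filter, printer as sink):
#   f = open("fichero.txt")
#   c_follow(f, c_grep("python", printer("grep")))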
# We have the same, with iterators we pull data with iteration
# With coroutines we push data with send
# BROADCAST
@coroutine
def broadcast(targets):
while True:
item = (yield)
for target in targets:
target.send(item)
f = open("fichero.txt")
c_follow(f,
broadcast([c_grep('python', printer("uno")),
c_grep('hodor', printer("dos")),
c_grep('hold', printer("tres"))])
)
Explanation: What has happened here?
chain coroutines together and push data through the pipe using send()
you need a source that normally is not a coroutine
you will also need pipeline sinks (end-points) that consume and process the data
don't mix the concepts too much
let's go back to the tail -f and grep example
our source is tail -f
End of explanation
if f and not f.closed:
f.close()
f = open("fichero.txt")
p = printer("uno")
c_follow(f,
broadcast([c_grep('python', p),
c_grep('hodor', p),
c_grep('hold', p)])
)
if f and not f.closed:
f.close()
Explanation: coroutines add routing
complex arrangements of pipes, branches, merging...
End of explanation |
5,520 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 3
Step1: if ... else statement
python
if <condition>
Step2: if ...elif ... else statement
python
if <condition>
Step3: Imagine that in the above program, 31 is the temperature which was read by some sensor or manually entered by the user and Hot is the response of the program.
Step4: One line if
There are instances where we might need to have the whole if statement on a single line.
If the code block is composed of only one line, it can be written after the colon | Python Code:
password = input("Please enter the password:")
if password == "Simsim":
print("\t> Welcome to the cave")
x = "Mayank"
y = "TEST"
if y == "TEST":
print(x)
if y:
print("Hello World")
z = None
if z:
print("TEST")
x = 11
if x > 10:
print("Hello")
if x > 10.999999999999:
print("Hello again")
if x % 2 == 0:
print("Bye bye bye ...")
x = 10
y = None
z = "111"
print(id(y))
if x:
print("Hello in x")
if y:
print("Hello in Y")
if z:
print("Hello in Z")
Explanation: Chapter 3: Compound statements
Compound statements contain one or more groups of other statements; they affect or control the execution of those other statements in some way.
In general, they span multiple lines, but they can also be written on a single line.
The if, while and for statements implement traditional control flow constructs, whereas try specifies exception handlers and/or cleanup code for a group of statements, while the with statement allows the execution of initialization and finalization code around a block of code. Function and class definitions are also syntactically compound statements.
They consist of one or more 'clauses'. A clause consists of a 'header' and a 'suite'.
The clause headers of a particular compound statement are all at the same indentation level. Each header begins with a uniquely identifying keyword and ends with a colon.
A 'suite' is the group of statements controlled by a clause. It can be one or more semicolon-separated simple statements on the same line as the header, following the header's colon (a one-liner), or it can be one or more indented statements on subsequent lines. Only the latter form of a suite can contain nested compound statements; the following is illegal, mostly because it wouldn't be clear to which if clause a following else clause would belong:
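if test1: if test2: print(x)
(This snippet is added here for illustration; test1, test2 and x are placeholder names.)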
Traditional Control Flow Constructs
if Statement
The if statement is used for conditional execution, similar to that in most common languages. An if statement can be constructed in three formats depending on our need.
if: when we have an "if something, do something" condition
if .. else: when we have a condition like "if something, do something; otherwise do something else"
if .. elif .. else: when we have several conditions or nested conditions
if
This format is used when specific operation needs to be performed if a specific condition is met.
Syntax:
python
if <condition>:
<code block>
Where:
<condition>: sentence that can be evaluated as true or false.
<code block>: sequence of command lines.
The clauses elif and else are optional and several elifs for the if may be used but only one else at the end.
Parentheses are only required to avoid ambiguity.
Example:
End of explanation
x = "Anuja"
if x == "mayank":
print("Name is mayank")
else:
print("Name is not mayank and its", x)
Explanation: if ... else statement
python
if <condition>:
<code block>
else:
<code block>
Where:
<condition>: an expression that can be evaluated as true or false.
<code block>: sequence of command lines.
The clauses elif and else are optional and several elifs for the if may be used but only one else at the end.
Parentheses are only required to avoid ambiguity.
End of explanation
# temperature value used to test
temp = 31
if temp < 0:
print ('Freezing...')
elif 0 <= temp <= 20:
print ('Cold')
elif 21 <= temp <= 25:
print ('Room Temperature')
elif 26 <= temp <= 35:
print ('Hot')
else:
print ('Its very HOT!, lets stay at home... \nand drink lemonade.')
# temperature value used to test
temp = 60
if temp < 0:
print ('Freezing...')
elif 0 <= temp <= 20:
print ('Cold')
elif 21 <= temp <= 25:
print ('Room Temperature')
elif 26 <= temp <= 35:
print ('Hot')
else:
print ('Its very HOT!, lets stay at home... \nand drink lemonade.')
Explanation: if ...elif ... else statement
python
if <condition>:
<code block>
elif <condition>:
<code block>
elif <condition>:
<code block>
else:
<code block>
Where:
<condition>: sentence that can be evaluated as true or false.
<code block>: sequence of command lines.
The clauses elif and else are optional and several elifs for the if may be used but only one else at the end.
Parentheses are only required to avoid ambiguity.
End of explanation
a = "apple"
b = "banana"
c = "Mango"
if a == "apple":
print("apple")
elif b == "Mango":
print("mango")
elif c == "Mango":
print("My Mango farm")
x = list(range(10))
a = 0
x.reverse()
print(x)
while x:
a = x.pop()
print(a)
print(a)
x = list(range(10))
print(x)
a = 0
x.reverse()
while x:
a = x.pop(0)
print(a)
print(a)
items = "This is a test"
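# Note on the construct below: the for loop's else block runs only when the loop
# finishes without hitting break, and enumerate(..., start=11) simply offsets the index.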
for count, item in enumerate(items[1:], start=11):
print(item, count, end=" ")
if 'i' in item: break
else:
print("\nFinished")
print(count)
Explanation: Imagine that in the above program, 31 is the temperature which was read by some sensor or manually entered by the user and Hot is the response of the program.
End of explanation
x = 20
if x > 10: print ("Hello ")
print("-"*30)
val = 1 if x < 10 else 24
print(val)
Explanation: One line if
There are instances where we might need to have the whole if statement on a single line.
If the code block is composed of only one line, it can be written after the colon:
if temp < 0: print('Freezing...')
Since version 2.5, Python supports the expression:
<variable> = <value 1> if <condition> else <value 2>
Where <variable> receives <value 1> if <condition> is true and <value 2> otherwise.
End of explanation |
5,521 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2012 Presidential Campaign Finance Analysis
Chisheng Li
Introduction
This project analyzes the 2012 Presidential Campaign Contributions data to compare Barack Obama's and Mitt Romney's campaign finance between March 2011 and December 2012. The dataset is obtained from the Federal Election Commission (FEC).
Dataset preparation
The dataset is downloaded and renamed to donations.txt with the following commands
Step1: Pandas's .describe() function generates summary statistics of the entire dataset. There are 6,036,458 observations in the campaign contributions data. Although there are extreme outliers in this dataset, with the highest donation amount at \$16,387,179.20 and the lowest donation amount at -\$60,800.00, the interquartile range is moderate. The 25th percentile donation amount is \$25.00, the 75th percentile donation amount is \$150.00, and the median donation amount is \$50.00. The average donation to all presidential candidates is \$212.99, suggesting a right skewed distribution.
Step2: Sorting the campaign contributions by decreasing amount indicates that the Obama Victory Fund contributed the highest donations to Barack Obama during the 2012 presidential election cycle.
Step3: Basic campaign finance statistics
Subset the positive campaign donations to President Barack Obama
Step4: Subset the positive campaign donations to Mitt Romney
Step5: Barack Obama received 4,078,945 non-refund/non-zero donations for the 2012 presidential election, totaling \$558,359,124.90. The minimum donation was \$0.01, and the maximum amount was \$16,387,179.20. His average donation amount was \$136.89, and the median amount was \$50.00.
Step6: On the other hand, Mitt Romney received 1,576,478 non-refund/non-zero donations for the election, totaling \$679,994,941.51. The minimum amount was \$0.01, and the maximum amount was \$30,000. Despite having fewer donors than Obama, Romney received higher average donation at \$431.34 and higher median amount at \$100.50.
Step7: Comparing Obama's and Romney's campaign donations by states
Across 50 states and Washington D.C., Obama out raised Romney in only 14 states that are Democratic stronghold
Step8: Comparing Obama's and Romney's monthly campaign donations between March 2011 and December 2012
Mitt Romney received donations early beginning in May 2011 as he prepared to enter the Republican presidential primaries (February 1st 2012 - April 1st 2012). His donations increased significantly in April 2012 after he clinched the Republican nomination. Obama, on the other hand, began his re-election campaign in January 2012 and was formally nominated for President at the Democratic National Convention in Charlotte, North Carolina, on September 6th 2012. Obama finalized his campaign on November 6th 2012 as he was re-elected with 51% of the popular vote and 332 electoral votes.
Step9: Comparing Obama's and Romney's cumulative campaign donations between March 2011 and December 2012
Mitt Romney and Barack Obama had similar cumulative donation amounts until September 2012, after Obama and Biden were formally nominated for President and Vice President at the 2012 Democratic National Convention. The increased donations to Mitt Romney likely signaled efforts by his contributors to reach out to potential voters through ground work and television commercials.
Step10: Comparing Obama's and Romney's cumulative campaign reattributions between March 2011 and December 2012
In the campaign finance dataset, we see some of the data where the donation amount is negative. A quick modification of the code to output only the negative donation amounts
Step11: Comparing Obama's and Romney's cumulative campaign refunds between March 2011 and December 2012
Step12: Comparing distribution of Obama's and Romney's campaign donations
Because of the large number of outliers, I display the campaign donation distribution for both candidates within \$0 and \$1,500. Obama is on the right and Romney is on the left of the violin plot, and it indicates that Obama's donations were more concentrated in the smaller amounts between \$25 and \$150 while Romney's donations spanned a larger range.
Step13: Performing ttests on Obama's and Romney's campaign donations data
I perform a Welch's T-test on the data to determine if the difference between Romney's and Obama's average campaign contribution is significant. The reported p-value is within rounding error of 0, which is statistically significant.
Step14: Although Welch's T-test makes no assumption about the sample sizes or equal variances, it does assume that the data are normally distributed, and the campaign finance data likely violates this assumption. I conduct the Shapiro-Wilk test to check whether the data is actually normal for both presidential candidates. The test calculates a p-value and tells us that the data is not normally distributed if the p-value is <0.05.
Obama's donations have a Shapiro-Wilk p-value of 0.000577, which indicates that I have violated the normality assumption of the Welch's T-test.
Step15: Because T-tests are fairly resilient to violations of the normality assumption and there are nonparametric equivalents that don't make normality assumptions, I also run the nonparametric Mann-Whitney U test on the campaign dataset. The reported p-value is about 0, so the result is still statistically significant.
Step16: Which occupations were more generous to either candidate?
Among the campaign contributors that donated to either Barack Obama or Mitt Romney, only 13 contributors from 8 different occupations personally contributed at least \$5,000 to Obama's re-election campaign. On the other hand, 167 individuals from 36 different occupations (primarily chief executive officers, presidents and business owners) donated at least \$5,000 to Romney's re-election campaign | Python Code:
import pandas as pd
import numpy as np
donations = pd.read_csv('donations.txt', dtype={'contbr_zip': 'str', 'file_num': 'str'}, index_col=False)
# How many rows and columns does the dataframe have?
donations.shape
# The first 5 lines of the dataset
donations.head()
# Now, look at the last 5 lines
donations.tail()
Explanation: 2012 Presidential Campaign Finance Analysis
Chisheng Li
Introduction
This project analyzes the 2012 Presidential Campaign Contributions data to compare Barack Obama's and Mitt Romney's campaign finance between March 2011 and December 2012. The dataset is obtained from the Federal Election Commission (FEC).
Dataset preparation
The dataset is downloaded and renamed to donations.txt with the following commands:
unzip P00000001-ALL.zip
mv P00000001-ALL.csv donations.txt
Exploring the data
The FEC decided that it would be cool to insert commas at the end of each line, fooling CSV readers into thinking that there is an empty field at the end of every line. For example, the first row of the dataset is as follows:
C00410118,"P20002978","Bachmann, Michele","HARVEY, WILLIAM","MOBILE","AL","366010290","RETIRED","RETIRED",250,20-JUN-11,"","","","SA17A","736166","A1FDABC23D2D545A1B83","P2012",
pandas's file parsers treat the first column as the data frame's row index by default if the data set has one column too many, hence we pass index_col=False so that the data frame is parsed and displayed properly. The dataset has 18 columns and 6,036,458 rows.
End of explanation
donations.describe()
Explanation: Pandas's .describe() function generates summary statistics of the entire dataset. There are 6,036,458 observations in the campaign contributions data. Although there are extreme outliers in this dataset, with the highest donation amount at \$16,387,179.20 and the lowest donation amount at -\$60,800.00, the interquartile range is moderate. The 25th percentile donation amount is \$25.00, the 75th percentile donation amount is \$150.00, and the median donation amount is \$50.00. The average donation to all presidential candidates is \$212.99, suggesting a right skewed distribution.
End of explanation
donations.sort('contb_receipt_amt', ascending=False)
Explanation: Sorting the campaign contributions by decreasing amount indicates that the Obama Victory Fund contributed the highest donations to Barack Obama during the 2012 presidential election cycle.
End of explanation
obama = donations[(donations['cand_nm'] == 'Obama, Barack') &
(donations['contb_receipt_amt'] > 0)]
obama
Explanation: Basic campaign finance statistics
Subset the positive campaign donations to President Barack Obama:
End of explanation
romney = donations[(donations['cand_nm'] == 'Romney, Mitt') &
(donations['contb_receipt_amt'] > 0)]
romney
# Calculate the total positive donations to Obama and Romney
totalOb = 0
totalRom = 0
for row in obama['contb_receipt_amt']:
amountOb = float(row)
totalOb += amountOb
for row in romney['contb_receipt_amt']:
amountRom = float(row)
totalRom += amountRom
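# Equivalent, and much faster, with pandas' built-in aggregation (an alternative sketch):
#   totalOb = obama['contb_receipt_amt'].sum()
#   totalRom = romney['contb_receipt_amt'].sum()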
Explanation: Subset the positive campaign donations to Mitt Romney:
End of explanation
# Obama's campaign finance stats:
print "Total positive donations: $%s" % totalOb
obama.describe()
Explanation: Barack Obama received 4,078,945 non-refund/non-zero donations for the 2012 presidential election, totaling \$558,359,124.90. The minimum donation was \$0.01, and the maximum amount was \$16,387,179.20. His average donation amount was \$136.89, and the median amount was \$50.00.
End of explanation
# Romney's campaign finance stats:
print "Total positive donations: $%s" % totalRom
romney.describe()
Explanation: On the other hand, Mitt Romney received 1,576,478 non-refund/non-zero donations for the election, totaling \$679,994,941.51. The minimum amount was \$0.01, and the maximum amount was \$30,000. Despite having fewer donors than Obama, Romney received higher average donation at \$431.34 and higher median amount at \$100.50.
End of explanation
# Aggregate Obama's non-refund/non-zero donations by States
obaState = obama[obama['contbr_st'].isin(['AK', 'AL', 'AR', 'AZ', 'CA',
'CO', 'CT', 'DC', 'DE', 'FL',
'GA', 'HI', 'IA', 'ID', 'IL',
'IN', 'KS', 'KY', 'LA', 'MA',
'MD', 'ME', 'MI', 'MN', 'MO',
'MS', 'MT', 'NC', 'ND', 'NE',
'NH', 'NJ', 'NM', 'NV', 'NY',
'OH', 'OK', 'OR', 'PA', 'RI',
'SC', 'SD', 'TN', 'TX', 'UT',
'VA', 'VT', 'WA', 'WI', 'WV', 'WY'])]
obst = obaState.groupby('contbr_st')
# obaState.groupby('contbr_st').agg(['mean', 'count', 'std'])
obst_sum = obst['contb_receipt_amt'].agg([np.sum, np.mean, len])
obst_sum.columns = ["Obama's total donations ($)", "Obama's average donation ($)",
"Obama's number of donations by state"]
# Aggregate Romney's non-refund/non-zero donations by States
romState = romney[romney['contbr_st'].isin(['AK', 'AL', 'AR', 'AZ', 'CA',
'CO', 'CT', 'DC', 'DE', 'FL',
'GA', 'HI', 'IA', 'ID', 'IL',
'IN', 'KS', 'KY', 'LA', 'MA',
'MD', 'ME', 'MI', 'MN', 'MO',
'MS', 'MT', 'NC', 'ND', 'NE',
'NH', 'NJ', 'NM', 'NV', 'NY',
'OH', 'OK', 'OR', 'PA', 'RI',
'SC', 'SD', 'TN', 'TX', 'UT',
'VA', 'VT', 'WA', 'WI', 'WV', 'WY'])]
rmst = romState.groupby('contbr_st')
rmst_sum = rmst['contb_receipt_amt'].agg([np.sum, np.mean, len])
rmst_sum.columns = ["Romney's total donations ($)", "Romney's average donation ($)",
"Romney's number of donations by state"]
camst = pd.concat([obst_sum, rmst_sum], axis=1)
camst
# Plot Obama's campaign donations to Plotly
import plotly.plotly as py
ob = pd.read_csv('obama donations', delimiter='\t')
for col in ob.columns:
ob[col] = ob[col].astype(str)
scl = [[0.0, 'rgb(242,240,247)'],[0.2, 'rgb(218,218,235)'],[0.4, 'rgb(188,189,220)'],\
[0.6, 'rgb(158,154,200)'],[0.8, 'rgb(117,107,177)'],[1.0, 'rgb(84,39,143)']]
data = [ dict(
type='choropleth',
colorscale = scl,
autocolorscale = False,
locations = ob['State'],
z = ob["Obama's total donations ($)"].astype(float),
locationmode = 'USA-states',
marker = dict(
line = dict (
color = 'rgb(255,255,255)',
width = 2
)
),
colorbar = dict(
title = "Millions USD"
)
) ]
layout = dict(
title = '2012 Obama Campaign Donations by State',
geo = dict(
scope='usa',
projection=dict( type='albers usa' ),
showlakes = True,
lakecolor = 'rgb(255, 255, 255)',
),
)
fig = dict(data=data, layout=layout)
url = py.plot(fig, validate=False, filename='d3-obama-map')
from IPython.display import Image
Image(filename='obama.png')
# Plot Romney's campaign donations to Plotly
import plotly.plotly as py
rm = pd.read_csv('romney donations', delimiter='\t')
for col in rm.columns:
rm[col] = rm[col].astype(str)
scl = [[0.0, 'rgb(242,240,247)'],[0.2, 'rgb(218,218,235)'],[0.4, 'rgb(188,189,220)'],\
[0.6, 'rgb(158,154,200)'],[0.8, 'rgb(117,107,177)'],[1.0, 'rgb(84,39,143)']]
data = [ dict(
type='choropleth',
colorscale = scl,
autocolorscale = False,
locations = rm['State'],
z = rm["Romney's total donations ($)"].astype(float),
locationmode = 'USA-states',
marker = dict(
line = dict (
color = 'rgb(255,255,255)',
width = 2
)
),
colorbar = dict(
title = "Millions USD"
)
) ]
layout = dict(
title = '2012 Romney Campaign Donations by State',
geo = dict(
scope='usa',
projection=dict( type='albers usa' ),
showlakes = True,
lakecolor = 'rgb(255, 255, 255)',
),
)
fig = dict(data=data, layout=layout)
url = py.plot(fig, validate=False, filename='d3-romney-map')
Image(filename='romney.png')
Explanation: Comparing Obama's and Romney's campaign donations by states
Across 50 states and Washington D.C., Obama outraised Romney in only 14 states that are Democratic strongholds: California, District of Columbia, Delaware, Hawaii, Illinois, Massachusetts, Maryland, Maine, New Mexico, New York, Oregon, Rhode Island, Vermont, Washington.
The 3 states where Obama received the highest amount of donations were Illinois (\$96.25 million), California (\$93.64 million) and New York (\$51.73 million). The 3 states where Romney received the highest amount of donations were California (\$79.06 million), Texas (\$70.05 million) and Florida (\$58.76 million).
End of explanation
%matplotlib inline
from collections import defaultdict
import matplotlib.pyplot as plt
import csv, sys, datetime
reader = csv.DictReader(open("donations.txt", 'r'))
obamadonations = defaultdict(lambda:0)
romneydonations = defaultdict(lambda:0)
for row in reader:
name = row['cand_nm']
datestr = row['contb_receipt_dt']
amount = float(row['contb_receipt_amt'])
date = datetime.datetime.strptime(datestr, '%d-%b-%y')
if 'Obama' in name:
obamadonations[date] += amount
if 'Romney' in name:
romneydonations[date] += amount
fig = plt.figure(figsize=(12,6)) # create a 12-inch x 6-inch figure
sorted_by_dateob = sorted(obamadonations.items(), key=lambda (key,val): key)
sorted_by_datemc = sorted(romneydonations.items(), key=lambda (key,val): key)
xs1,ys1 = zip(*sorted_by_dateob)
xs2,ys2 = zip(*sorted_by_datemc)
plt.plot(xs1, ys1, label="Obama's Donations")
plt.plot(xs2, ys2, label="Romney's Donations")
plt.legend(loc='upper center', ncol = 4)
Explanation: Comparing Obama's and Romney's monthly campaign donations between March 2011 and December 2012
Mitt Romney received donations early beginning in May 2011 as he prepared to enter the Republican presidential primaries (February 1st 2012 - April 1st 2012). His donations increased significantly in April 2012 after he clinched the Republican nomination. Obama, on the other hand, began his re-election campaign in January 2012 and was formally nominated for President at the Democratic National Convention in Charlotte, North Carolina, on September 6th 2012. Obama finalized his campaign on November 6th 2012 as he was re-elected with 51% of the popular vote and 332 electoral votes.
End of explanation
import csv,sys,datetime,collections
import itertools
import matplotlib.pyplot as plt
reader = csv.DictReader(open("donations.txt", 'r'))
totaldonations = collections.defaultdict(list)
for row in reader:
name = row['cand_nm']
datestr = row['contb_receipt_dt']
amount = float(row['contb_receipt_amt'])
date = datetime.datetime.strptime(datestr, '%d-%b-%y')
if 'Obama' in name or 'Romney' in name:
totaldonations[name].append((date, amount))
campaigntotals = dict([(name, sum(map(lambda p:p[1], val))) for name, val
in totaldonations.iteritems()])
fig = plt.figure(figsize=(12,6))
idx = 0
# Obama's and Romney's cumulative donations
for name, monies in totaldonations.iteritems():
monies.sort(key=lambda pair: pair[0])
i = itertools.groupby(monies, key=lambda p: p[0])
monies = map(lambda (key, pairs): (key, sum([float(pair[1]) for pair in pairs])), i)
total = 0
newmonies = []
for pair in monies:
total += pair[1]
newmonies.append((pair[0], total ))
monies = newmonies
xs,ys = zip(*monies)
plt.plot(xs, ys, label = name + "'s donations")
idx += 1
plt.legend(loc='upper center', ncol = 4)
Explanation: Comparing Obama's and Romney's cumulative campaign donations between March 2011 and December 2012
Mitt Romney and Barack Obama had similar cumulative donation amounts until September 2012, after Obama and Biden were formally nominated for President and Vice President at the 2012 Democratic National Convention. The increased donations to Mitt Romney likely signaled efforts by his contributors to reach out to potential voters through ground work and television commercials.
End of explanation
import csv,sys,datetime,collections
import itertools
import matplotlib.pyplot as plt
reader = csv.DictReader(open("donations.txt", 'r'))
totalreattributions = collections.defaultdict(list)
for row in reader:
name = row['cand_nm']
datestr = row['contb_receipt_dt']
amount = float(row['contb_receipt_amt'])
date = datetime.datetime.strptime(datestr, '%d-%b-%y')
reason = row['receipt_desc']
if amount < 0 and 'REATTRIBUTION' in reason:
if 'Obama' in name or 'Romney' in name:
totalreattributions[name].append((date, -amount))
candreattributions = dict([(name, sum(map(lambda p:p[1], val))) for name, val
in totalreattributions.iteritems()])
fig = plt.figure(figsize=(12,6))
idx = 0
# Obama's and Romney's cumulative reattributions
for name, monies in totalreattributions.iteritems():
monies.sort(key=lambda pair: pair[0])
i = itertools.groupby(monies, key=lambda p: p[0])
monies = map(lambda (key, pairs): (key, sum([float(pair[1]) for pair in pairs])), i)
total = 0
newmonies = []
for pair in monies:
total += pair[1]
newmonies.append((pair[0], total ))
monies = newmonies
xs,ys = zip(*monies)
plt.plot(xs, ys, label = name + "'s reattributions")
idx += 1
plt.legend(loc='upper center', ncol = 4)
Explanation: Comparing Obama's and Romney's cumulative campaign reattributions between March 2011 and December 2012
In the campaign finance dataset, we see some of the data where the donation amount is negative. A quick modification of the code to output only the negative donation amounts:
import csv, sys, datetime
reader = csv.DictReader(open("donations.txt", 'r'))
for row in reader:
name = row['cand_nm']
datestr = row['contb_receipt_dt']
amount = float(row['contb_receipt_amt'])
if amount < 0:
line = '\t'.join(row.values())
print line
As it turns out, "redesignations" and "reattributions" of donations are normal. For instance, if a donation by person A is excessive, the part that exceeds the limits can be "reattributed" to person B, meaning that person B donated the rest to the campaign. Alternatively, the excess amount can be redesignated to another campaign in the same party, so a donation to Romney could be redesignated to an underfunded Republican candidate in Nebraska. However, "Reattribution to spouse" can be suspicious. A potential theory is that CEOs and wealthy contributors use it as a tactic to obscure campaign contributions: a CEO donates money, the excess is reattributed to the CEO's spouse, and the spouse then donates that amount to the candidate. In this way, a casual browser will find it difficult to notice that the candidate is supported by a company's CEOs.
The campaign data indicates that Obama did not receive any reattributed donations, but Romney received about \$4.1 million in reattributions, especially between June 2012 and October 2012, when the presidential election race heated up.
End of explanation
import csv,sys,datetime,collections
import itertools
import matplotlib.pyplot as plt
reader = csv.DictReader(open("donations.txt", 'r'))
totalrefunds = collections.defaultdict(list)
for row in reader:
name = row['cand_nm']
datestr = row['contb_receipt_dt']
amount = float(row['contb_receipt_amt'])
date = datetime.datetime.strptime(datestr, '%d-%b-%y')
reason = row['receipt_desc']
if amount < 0 and 'Refund' in reason:
if 'Obama' in name or 'Romney' in name:
totalrefunds[name].append((date, -amount))
candrefunds = dict([(name, sum(map(lambda p:p[1], val))) for name, val
in totalrefunds.iteritems()])
fig = plt.figure(figsize=(12,6))
idx = 0
# Obama's and Romney's cumulative refunds
for name, monies in totalrefunds.iteritems():
monies.sort(key=lambda pair: pair[0])
i = itertools.groupby(monies, key=lambda p: p[0])
monies = map(lambda (key, pairs): (key, sum([float(pair[1]) for pair in pairs])), i)
total = 0
newmonies = []
for pair in monies:
total += pair[1]
newmonies.append((pair[0], total ))
monies = newmonies
xs,ys = zip(*monies)
plt.plot(xs, ys, label = name + "'s refunds")
idx += 1
plt.legend(loc='upper center', ncol = 4)
Explanation: Comparing Obama's and Romney's cumulative campaign refunds between March 2011 and December 2012
End of explanation
import seaborn as sns
sns.set_style("whitegrid")
obrom = donations[(donations['cand_nm'].isin(['Obama, Barack', 'Romney, Mitt'])) &
(donations['contb_receipt_amt'] < 1200)]
obrom2 = obrom[obrom['contb_receipt_amt'] > 0]
fig = plt.figure(figsize=(8,12))
ax = sns.violinplot(x="cand_nm", y="contb_receipt_amt", data=obrom2)
Explanation: Comparing distribution of Obama's and Romney's campaign donations
Because of the large number of outliers, I display the campaign donation distribution for both candidates only between \$0 and \$1,200 (the range kept by the filter in the code above). Obama is on the right and Romney is on the left of the violin plot, and it indicates that Obama's donations were more concentrated in the smaller amounts between \$25 and \$150 while Romney's donations spanned a larger range.
End of explanation
# Part I
# Run Welch's T-test on donations.txt. Is the difference between Romney and Obama's average
# campaign contribution significant?
import csv,sys,datetime,collections
import numpy
import scipy.stats
import welchttest
reader = csv.DictReader(open("donations.txt", 'r'))
idx = 0
candtomoney = collections.defaultdict(list)
for row in reader:
name = row['cand_nm']
amount = float(row['contb_receipt_amt'])
candtomoney[name].append(amount)
obama = candtomoney["Obama, Barack"]
romney = candtomoney["Romney, Mitt"]
print "Welch's T-Test p-value:", welchttest.ttest(obama, romney)
Explanation: Performing ttests on Obama's and Romney's campaign donations data
I perform a Welch's T-test on the data to determine if the difference between Romney's and Obama's average campaign contribution is significant. The reported p-value is within rounding error of 0, which is statistically significant.
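For reference, the same test can be run without the custom welchttest helper: scipy.stats.ttest_ind with equal_var=False performs Welch's T-test directly (a sketch, assuming the obama and romney donation lists built above):
import scipy.stats
# equal_var=False makes ttest_ind perform Welch's T-test (no equal-variance assumption)
t_stat, p_value = scipy.stats.ttest_ind(obama, romney, equal_var=False)
print "Welch's T-Test p-value (scipy):", p_value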
End of explanation
print "Obama's Shapiro-Wilks p-value", scipy.stats.shapiro(obama)
print "Romney's Shapiro-Wilks p-value", scipy.stats.shapiro(romney)
Explanation: Although Welch's T-test makes no assumption about equal sample sizes or equal variances, it does assume that the underlying data are normally distributed, and the campaign finance data likely violates that assumption. I conduct the Shapiro-Wilk test to check whether the data are actually normal for both presidential candidates. The test calculates a p-value; if the p-value is < 0.05, we conclude the data are not normally distributed.
Obama has a Shapiro-Wilk p-value of 0.000577, which indicates that I have violated the normality assumption of the Welch's T-test.
End of explanation
print "mann-whitney U", scipy.stats.mannwhitneyu(obama, romney)
Explanation: Because T-tests are fairly resilient to violations of the normality assumption, and because there are nonparametric equivalents that do not assume normality, I run the Mann-Whitney U nonparametric test on the campaign dataset. The reported p-value is about 0, so the result is still statistically significant.
End of explanation
obama = donations[(donations['cand_nm'] == 'Obama, Barack') &
(donations['contb_receipt_amt'] > 5000)]
obOcc = obama.groupby('contbr_occupation')
obOcc_sum = obOcc['contb_receipt_amt'].agg([np.sum, np.mean, len])
obOcc_sum.columns = ["Obama's total donations ($)", "Obama's average donation ($)",
"Number of donors"]
obOcc_sum
romney = donations[(donations['cand_nm'] == 'Romney, Mitt') &
(donations['contb_receipt_amt'] > 5000)]
rmOcc = romney.groupby('contbr_occupation')
rmOcc_sum = rmOcc['contb_receipt_amt'].agg([np.sum, np.mean, len])
rmOcc_sum.columns = ["Romney's total donations ($)", "Romney's average donation ($)",
"Number of donors"]
rmOcc_sum
Explanation: Which occupations were more generous to either candidate?
Among the campaign contributors that donated to either Barack Obama or Mitt Romney, only 13 contributors from 8 different occupations personally contributed at least \$5,000 to Obama's re-election campaign. On the other hand, 167 individuals from 36 different occupations (primarily chief executive officers, presidents and business owners) donated at least \$5,000 to Romney's presidential campaign.
End of explanation |
5,522 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sierpinski Cube
Start with a cube, on iteration
Step1: Save the data
Step2: Get a mesh representation of the cube | Python Code:
def sierp_cube_iter(x0, x1, y0, y1, z0, z1, cur_depth, max_depth=3, n_pts=10, cur_index=0):
if cur_depth >= max_depth:
x = np.linspace(x0, x1, n_pts)
y = np.linspace(y0, y1, n_pts)
z = np.linspace(z0, z1, n_pts)
xx, yy, zz = np.meshgrid(x, y, z)
rr = np.ones(shape=xx.shape) * np.cos(xx * 5) * np.cos(yy * 5) * np.cos(zz * 5)
ii = np.ones(shape=xx.shape) * cur_index
df_res = pd.DataFrame({'x': xx.flatten(),
'y': yy.flatten(),
'z': zz.flatten(),
'r': rr.flatten(),
'i': ii.flatten()})
else:
dx = (x1 - x0) / 3
dy = (y1 - y0) / 3
dz = (z1 - z0) / 3
i_sub = 0
df_res = None
for ix in range(3):
for iy in range(3):
for iz in range(3):
if int(ix == 1) + int(iy == 1) + int(iz == 1) >= 2:
continue
print('\t' * cur_depth, ': #', i_sub + 1, '-', ix, iy, iz)
df_sub = sierp_cube_iter(x0 + ix * dx,
x0 + (ix + 1) * dx,
y0 + iy * dy,
y0 + (iy + 1) * dy,
z0 + iz * dz,
z0 + (iz + 1) * dz,
cur_depth + 1,
max_depth=max_depth,
n_pts=n_pts,
cur_index=cur_index * 20 + i_sub)
i_sub += 1
if df_res is None:
df_res = df_sub
else:
df_res = pd.concat([df_res, df_sub], axis=0)
return df_res
df_sierp = sierp_cube_iter(0, 1, 0, 1, 0, 1, 0, max_depth=2, n_pts=10)
len(df_sierp)
df_sierp.describe()
df_cut = df_sierp[df_sierp.z == 0.0]
df_cut.describe()
figure(figsize=(8, 8))
scatter(x=df_cut.x, y=df_cut.y, c=df_cut.r, marker='o',
cmap='viridis_r', alpha=0.8)
# colorscales
# 'pairs' | 'Greys' | 'Greens' | 'Bluered' | 'Hot' | 'Picnic' | 'Portland' | 'Jet' | 'RdBu' |
# 'Blackbody' | 'Earth' | 'Electric' | 'YIOrRd' | 'YIGnBu'
data = []
i_side = 0
for cond in [(df_sierp.x == df_sierp.x.min()), (df_sierp.x == df_sierp.x.max()),
(df_sierp.y == df_sierp.y.min()), (df_sierp.y == df_sierp.y.max()),
(df_sierp.z == df_sierp.z.min()), (df_sierp.z == df_sierp.z.max())]:
df_sierp_plot = df_sierp[cond]
print(df_sierp_plot.shape)
trace = go.Scatter3d(
x=df_sierp_plot.x,
y=df_sierp_plot.y,
z=df_sierp_plot.z,
mode='markers',
marker=dict(
size=2,
color=df_sierp_plot.r,
colorscale='Hot',
opacity=1.0
),
name='Side %d' % (i_side + 1),
visible=True
)
data.append(trace)
i_side += 1
layout = go.Layout(
autosize=True,
width=800,
height=600,
title='Sierpinski Cube Surface (d=2) - Point Cloud Plot',
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig, filename='3d-sierpinski-cube-pt-cloud')
Explanation: Sierpinski Cube
Start with a cube, on iteration:
- divide into 3 x 3 x 3 sub-cubes
- recurse only on the sub-cubes where fewer than two of the three indices equal 1 (the middle position); this keeps 20 of the 27 sub-cubes, as the sketch below confirms
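A quick sanity check of this rule, independent of the functions above (a small sketch):
# count how many of the 27 sub-cubes survive the recursion rule used in sierp_cube_iter
kept = sum(1 for ix in range(3) for iy in range(3) for iz in range(3)
           if int(ix == 1) + int(iy == 1) + int(iz == 1) < 2)
print(kept)  # prints 20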
End of explanation
df_sierp.to_csv('sierp_cube.csv')
Explanation: Save the data
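If the point cloud is needed again later, it can be reloaded with pandas (a one-line sketch):
df_sierp = pd.read_csv('sierp_cube.csv', index_col=0)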
End of explanation
def sierp_cube_mesh_iter(x0, x1, y0, y1, z0, z1, cur_depth, max_depth=3, cur_index=0):
if cur_depth >= max_depth:
x = [x0, x0, x1, x1, x0, x0, x1, x1]
y = [y0, y1, y1, y0, y0, y1, y1, y0]
z = [z0, z0, z0, z0, z1, z1, z1, z1]
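# i, j, k hold the vertex indices of the 12 triangles (two per cube face) passed to plotly's Mesh3d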
i = [7, 0, 0, 0, 4, 4, 6, 6, 4, 0, 3, 2]
j = [3, 4, 1, 2, 5, 6, 5, 2, 0, 1, 6, 3]
k = [0, 7, 2, 3, 6, 7, 1, 1, 5, 5, 7, 6]
r = [2 * cur_index * cos(x[i] + y[i] + z[i]) for i in range(len(x))]
return (x, y, z, i, j, k, r, len(x))
else:
x = []
y = []
z = []
i = []
j = []
k = []
r = []
n = 0
dx = (x1 - x0) / 3
dy = (y1 - y0) / 3
dz = (z1 - z0) / 3
i_sub = 0
df_res = None
for ix in range(3):
for iy in range(3):
for iz in range(3):
if int(ix == 1) + int(iy == 1) + int(iz == 1) >= 2:
continue
print('\t' * cur_depth, ': #', i_sub + 1, '-', ix, iy, iz)
(sub_x, sub_y, sub_z, sub_i, sub_j, sub_k, sub_r, sub_n) = \
sierp_cube_mesh_iter(x0 + ix * dx,
x0 + (ix + 1) * dx,
y0 + iy * dy,
y0 + (iy + 1) * dy,
z0 + iz * dz,
z0 + (iz + 1) * dz,
cur_depth + 1,
max_depth=max_depth,
cur_index=cur_index * 20 + i_sub)
i_sub += 1
i += [n + _i for _i in sub_i]
j += [n + _j for _j in sub_j]
k += [n + _k for _k in sub_k]
x += sub_x
y += sub_y
z += sub_z
r += sub_r
n += sub_n
return (x, y, z, i, j, k, r, n)
def sierp_cube_mesh(x0, x1, y0, y1, z0, z1, max_depth=3):
(x, y, z, i, j, k, r, n) = sierp_cube_mesh_iter(x0, x1, y0, y1, z0, z1, 0, max_depth=max_depth)
mesh = go.Mesh3d(
x = x,
y = y,
z = z,
colorscale = 'Greens',
intensity = r,
i = i,
j = j,
k = k,
name='Sierpinski Cube (d=%d)' % max_depth,
showscale=True,
lighting=dict(ambient=0.99, roughness=0.99, diffuse=0.99)
)
data = [mesh]
layout = go.Layout(
autosize=True,
width=800,
height=600,
title='Sierpinski Cube (d=%d) - Mesh Plot' % max_depth,
)
fig = go.Figure(data=data, layout=layout)
return fig
fig = sierp_cube_mesh(0, 1, 0, 1, 0, 1, max_depth=3)
py.iplot(fig, filename='3d-sierpinski-cube-mesh')
Explanation: Get a mesh representation of the cube
End of explanation |
5,523 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In Depth - Decision Trees and Forests
Step1: Here we'll explore a class of algorithms based on decision trees.
Decision trees at their root are extremely intuitive. They
encode a series of "if" and "else" choices, similar to how a person might make a decision.
However, which questions to ask, and how to proceed for each answer is entirely learned from the data.
For example, if you wanted to create a guide to identifying an animal found in nature, you
might ask the following series of questions
Step2: A single decision tree allows us to estimate the signal in a non-parametric way,
but clearly has some issues. In some regions, the model shows high bias and
under-fits the data.
(seen in the long flat lines which don't follow the contours of the data),
while in other regions the model shows high variance and over-fits the data
(reflected in the narrow spikes which are influenced by noise in single points).
Decision Tree Classification
Decision tree classification work very similarly, by assigning all points within a leaf the majority class in that leaf
Step3: There are many parameter that control the complexity of a tree, but the one that might be easiest to understand is the maximum depth. This limits how finely the tree can partition the input space, or how many "if-else" questions can be asked before deciding which class a sample lies in.
This parameter is important to tune for trees and tree-based models. The interactive plot below shows how underfit and overfit looks like for this model. Having a max_depth of 1 is clearly an underfit model, while a depth of 7 or 8 clearly overfits. The maximum depth a tree can be grown at for this dataset is 8, at which point each leave only contains samples from a single class. This is known as all leaves being "pure."
In the interactive plot below, the regions are assigned blue and red colors to indicate the predicted class for that region. The shade of the color indicates the predicted probability for that class (darker = higher probability), while yellow regions indicate an equal predicted probability for either class.
Step4: Decision trees are fast to train, easy to understand, and often lead to interpretable models. However, single trees often tend to overfit the training data. Playing with the slider above you might notice that the model starts to overfit even before it has a good separation between the classes.
Therefore, in practice it is more common to combine multiple trees to produce models that generalize better. The most common methods for combining trees are random forests and gradient boosted trees.
Random Forests
Random forests are simply many trees, built on different random subsets (drawn with replacement) of the data, and using different random subsets (drawn without replacement) of the features for each split.
This makes the trees different from each other, and makes them overfit to different aspects. Then, their predictions are averaged, leading to a smoother estimate that overfits less.
Step5: Selecting the Optimal Estimator via Cross-Validation
Step6: Another option
Step7: <div class="alert alert-success">
<b>EXERCISE
Step8: Feature importance
Both RandomForest and GradientBoosting objects expose a feature_importances_ attribute when fitted. This attribute is one of the most powerful feature of these models. They basically quantify how much each feature contributes to gain in performance in the nodes of the different trees. | Python Code:
%matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
Explanation: In Depth - Decision Trees and Forests
End of explanation
from figures import make_dataset
x, y = make_dataset()
X = x.reshape(-1, 1)
plt.figure()
plt.xlabel('Feature X')
plt.ylabel('Target y')
plt.scatter(X, y);
from sklearn.tree import DecisionTreeRegressor
reg = DecisionTreeRegressor(max_depth=5)
reg.fit(X, y)
X_fit = np.linspace(-3, 3, 1000).reshape((-1, 1))
y_fit_1 = reg.predict(X_fit)
plt.figure()
plt.plot(X_fit.ravel(), y_fit_1, color='blue', label="prediction")
plt.plot(X.ravel(), y, '.k', label="training data")
plt.legend(loc="best");
Explanation: Here we'll explore a class of algorithms based on decision trees.
Decision trees at their root are extremely intuitive. They
encode a series of "if" and "else" choices, similar to how a person might make a decision.
However, which questions to ask, and how to proceed for each answer is entirely learned from the data.
For example, if you wanted to create a guide to identifying an animal found in nature, you
might ask the following series of questions:
Is the animal bigger or smaller than a meter long?
bigger: does the animal have horns?
yes: are the horns longer than ten centimeters?
no: is the animal wearing a collar?
smaller: does the animal have two or four legs?
two: does the animal have wings?
four: does the animal have a bushy tail?
and so on. This binary splitting of questions is the essence of a decision tree.
One of the main benefits of tree-based models is that they require little preprocessing of the data.
They can work with variables of different types (continuous and discrete) and are invariant to scaling of the features.
Another benefit is that tree-based models are what is called "nonparametric", which means they don't have a fixed set of parameters to learn. Instead, a tree model can become more and more flexible if given more data.
In other words, the number of free parameters grows with the number of samples and is not fixed, as for example in linear models.
Decision Tree Regression
A decision tree is a simple binary classification tree that is
similar to nearest neighbor classification. It can be used as follows:
End of explanation
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from figures import plot_2d_separator
X, y = make_blobs(centers=[[0, 0], [1, 1]], random_state=61526, n_samples=100)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
clf = DecisionTreeClassifier(max_depth=5)
clf.fit(X_train, y_train)
plt.figure()
plot_2d_separator(clf, X, fill=True)
plt.scatter(X_train[:, 0], X_train[:, 1], c=np.array(['b', 'r'])[y_train], s=60, alpha=.7, edgecolor='k')
plt.scatter(X_test[:, 0], X_test[:, 1], c=np.array(['b', 'r'])[y_test], s=60, edgecolor='k');
Explanation: A single decision tree allows us to estimate the signal in a non-parametric way,
but clearly has some issues. In some regions, the model shows high bias and under-fits the data (seen in the long flat lines which don't follow the contours of the data), while in other regions the model shows high variance and over-fits the data (reflected in the narrow spikes which are influenced by noise in single points).
Decision Tree Classification
Decision tree classification works very similarly, by assigning all points within a leaf the majority class in that leaf:
End of explanation
# %matplotlib inline
from figures import plot_tree_interactive
plot_tree_interactive()
Explanation: There are many parameters that control the complexity of a tree, but the one that might be easiest to understand is the maximum depth. This limits how finely the tree can partition the input space, or how many "if-else" questions can be asked before deciding which class a sample lies in.
This parameter is important to tune for trees and tree-based models. The interactive plot below shows what underfitting and overfitting look like for this model. Having a max_depth of 1 is clearly an underfit model, while a depth of 7 or 8 clearly overfits. The maximum depth a tree can be grown at for this dataset is 8, at which point each leaf only contains samples from a single class. This is known as all leaves being "pure."
In the interactive plot below, the regions are assigned blue and red colors to indicate the predicted class for that region. The shade of the color indicates the predicted probability for that class (darker = higher probability), while yellow regions indicate an equal predicted probability for either class.
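To see the same trade-off non-interactively, a small sketch like the following (reusing the X_train/X_test blob split from above) compares train and test accuracy across depths; a growing gap between the two signals overfitting:
for depth in [1, 2, 3, 5, 8]:
    tree = DecisionTreeClassifier(max_depth=depth).fit(X_train, y_train)
    print(depth, tree.score(X_train, y_train), tree.score(X_test, y_test))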
End of explanation
from figures import plot_forest_interactive
plot_forest_interactive()
Explanation: Decision trees are fast to train, easy to understand, and often lead to interpretable models. However, single trees often tend to overfit the training data. Playing with the slider above you might notice that the model starts to overfit even before it has a good separation between the classes.
Therefore, in practice it is more common to combine multiple trees to produce models that generalize better. The most common methods for combining trees are random forests and gradient boosted trees.
Random Forests
Random forests are simply many trees, built on different random subsets (drawn with replacement) of the data, and using different random subsets (drawn without replacement) of the features for each split.
This makes the trees different from each other, and makes them overfit to different aspects. Then, their predictions are averaged, leading to a smoother estimate that overfits less.
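A minimal non-interactive sketch of the same idea, assuming the blob X_train/y_train split from the classification example is still in scope; bootstrap controls the sampling with replacement and max_features the per-split feature subsets:
from sklearn.ensemble import RandomForestClassifier
forest = RandomForestClassifier(n_estimators=100, max_features='sqrt',
                                bootstrap=True, random_state=42)
forest.fit(X_train, y_train)
print(forest.score(X_test, y_test))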
End of explanation
from sklearn.model_selection import GridSearchCV
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
digits = load_digits()
X, y = digits.data, digits.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
rf = RandomForestClassifier(n_estimators=200)
parameters = {'max_features':['sqrt', 'log2', 10],
'max_depth':[5, 7, 9]}
clf_grid = GridSearchCV(rf, parameters, n_jobs=-1)
clf_grid.fit(X_train, y_train)
clf_grid.score(X_train, y_train)
clf_grid.score(X_test, y_test)
Explanation: Selecting the Optimal Estimator via Cross-Validation
End of explanation
from sklearn.ensemble import GradientBoostingRegressor
clf = GradientBoostingRegressor(n_estimators=100, max_depth=5, learning_rate=.2)
clf.fit(X_train, y_train)
print(clf.score(X_train, y_train))
print(clf.score(X_test, y_test))
Explanation: Another option: Gradient Boosting
Another Ensemble method that can be useful is Boosting: here, rather than
looking at 200 (say) parallel estimators, we construct a chain of 200 estimators
which iteratively refine the results of the previous estimator.
The idea is that by sequentially applying very fast, simple models, we can get a
total model error which is better than any of the individual pieces.
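One way to watch the chain refine its predictions (a sketch, reusing the clf fitted above): staged_predict yields the ensemble's prediction after each additional stage, so the error typically shrinks as stages are added:
for n_stages, y_stage in enumerate(clf.staged_predict(X_test), start=1):
    if n_stages % 25 == 0:
        # mean squared error of the ensemble truncated after n_stages trees
        print(n_stages, np.mean((y_stage - y_test) ** 2))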
End of explanation
from sklearn.datasets import load_digits
from sklearn.ensemble import GradientBoostingClassifier
digits = load_digits()
X_digits, y_digits = digits.data, digits.target
# split the dataset, apply grid-search
# %load solutions/18_gbc_grid.py
Explanation: <div class="alert alert-success">
<b>EXERCISE: Cross-validating Gradient Boosting</b>:
<ul>
<li>
Use a grid search to optimize the `learning_rate` and `max_depth` for a Gradient Boosted
Decision tree on the digits data set.
</li>
</ul>
</div>
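One possible sketch of such a grid search (not necessarily the reference solution loaded from solutions/18_gbc_grid.py):
X_train, X_test, y_train, y_test = train_test_split(X_digits, y_digits, random_state=42)
param_grid = {'learning_rate': [0.01, 0.1, 0.2, 1.0],
              'max_depth': [1, 3, 5]}
gbc_grid = GridSearchCV(GradientBoostingClassifier(), param_grid, n_jobs=-1)
gbc_grid.fit(X_train, y_train)
print(gbc_grid.best_params_, gbc_grid.score(X_test, y_test))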
End of explanation
X, y = X_digits[y_digits < 2], y_digits[y_digits < 2]
rf = RandomForestClassifier(n_estimators=300, n_jobs=1)
rf.fit(X, y)
print(rf.feature_importances_) # one value per feature
plt.figure()
plt.imshow(rf.feature_importances_.reshape(8, 8), cmap=plt.cm.viridis, interpolation='nearest')
Explanation: Feature importance
Both RandomForest and GradientBoosting objects expose a feature_importances_ attribute when fitted. This attribute is one of the most powerful features of these models. It quantifies how much each feature contributes to the gain in performance across the nodes of the different trees.
End of explanation |
5,524 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Document retrieval from wikipedia data
Fire up GraphLab Create
Step1: Load some text data - from wikipedia, pages on people
Step2: Data contains
Step3: Explore the dataset and check out the text it contains
Exploring the entry for president Obama
Step4: Exploring the entry for actor George Clooney
Step5: Get the word counts for Obama article
Step6: Sort the word counts for the Obama article
Turning dictionary of word counts into a table
Step7: Sorting the word counts to show most common words at the top
Step8: Most common words include uninformative words like "the", "in", "and",...
Compute TF-IDF for the corpus
To give more weight to informative words, we weigh them by their TF-IDF scores.
Step9: Examine the TF-IDF for the Obama article
Step10: Words with highest TF-IDF are much more informative.
Manually compute distances between a few people
Let's manually compare the distances between the articles for a few famous people.
Step11: Is Obama closer to Clinton than to Beckham?
We will use cosine distance, which is given by
(1-cosine_similarity)
and find that the article about president Obama is closer to the one about former president Clinton than that of footballer David Beckham.
Step12: Build a nearest neighbor model for document retrieval
We now create a nearest-neighbors model and apply it to document retrieval.
Step13: Applying the nearest-neighbors model for retrieval
Who is closest to Obama?
Step14: As we can see, president Obama's article is closest to the one about his vice-president Biden, and those of other politicians.
Other examples of document retrieval | Python Code:
import graphlab
Explanation: Document retrieval from wikipedia data
Fire up GraphLab Create
End of explanation
people = graphlab.SFrame('people_wiki.gl/')
Explanation: Load some text data - from wikipedia, pages on people
End of explanation
people.head()
len(people)
Explanation: Data contains: link to wikipedia article, name of person, text of article.
End of explanation
obama = people[people['name'] == 'Barack Obama']
john = people[people['name'] == 'Elton John']
john
obama['text']
Explanation: Explore the dataset and check out the text it contains
Exploring the entry for president Obama
End of explanation
clooney = people[people['name'] == 'George Clooney']
clooney['text']
Explanation: Exploring the entry for actor George Clooney
End of explanation
obama['word_count'] = graphlab.text_analytics.count_words(obama['text'])
john['word_count'] = graphlab.text_analytics.count_words(john['text'])
print john['word_count']
Explanation: Get the word counts for Obama article
End of explanation
obama_word_count_table = obama[['word_count']].stack('word_count', new_column_name = ['word','count'])
john_word_count_table = john[['word_count']].stack('word_count', new_column_name = ['word','count'])
Explanation: Sort the word counts for the Obama article
Turning dictionary of word counts into a table
End of explanation
obama_word_count_table.head()
john_word_count_table.head()
obama_word_count_table.sort('count',ascending=False)
john_word_count_table.sort('count',ascending=False)
Explanation: Sorting the word counts to show most common words at the top
End of explanation
people['word_count'] = graphlab.text_analytics.count_words(people['text'])
people.head()
tfidf = graphlab.text_analytics.tf_idf(people['word_count'])
tfidf
people['tfidf'] = tfidf['docs']
Explanation: Most common words include uninformative words like "the", "in", "and",...
Compute TF-IDF for the corpus
To give more weight to informative words, we weigh them by their TF-IDF scores.
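For intuition, the quantity GraphLab computes per word follows the usual TF-IDF pattern; the sketch below shows the standard formula with hypothetical inputs (word_counts for one document, doc_freq giving the number of documents containing each word, num_docs the corpus size), not GraphLab's exact implementation:
import math
def tf_idf(word, word_counts, doc_freq, num_docs):
    # term frequency in this document times the log inverse document frequency
    return word_counts[word] * math.log(float(num_docs) / doc_freq[word])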
End of explanation
obama = people[people['name'] == 'Barack Obama']
john = people[people['name'] == 'Elton John']
obama[['tfidf']].stack('tfidf',new_column_name=['word','tfidf']).sort('tfidf',ascending=False)
john[['tfidf']].stack('tfidf',new_column_name=['word','tfidf']).sort('tfidf',ascending=False)
Explanation: Examine the TF-IDF for the Obama article
End of explanation
clinton = people[people['name'] == 'Bill Clinton']
paul = people[people['name'] == 'Paul McCartney']
beckham = people[people['name'] == 'Victoria Beckham']
Explanation: Words with highest TF-IDF are much more informative.
Manually compute distances between a few people
Let's manually compare the distances between the articles for a few famous people.
End of explanation
print graphlab.distances.cosine(obama['tfidf'][0],clinton['tfidf'][0])
print graphlab.distances.cosine(obama['tfidf'][0],beckham['tfidf'][0])
print graphlab.distances.cosine(obama['tfidf'][0],beckham['tfidf'][0])
print graphlab.distances.cosine(john['tfidf'][0],beckham['tfidf'][0])
print graphlab.distances.cosine(john['tfidf'][0],paul['tfidf'][0])
Explanation: Is Obama closer to Clinton than to Beckham?
We will use cosine distance, which is given by
(1-cosine_similarity)
and find that the article about president Obama is closer to the one about former president Clinton than that of footballer David Beckham.
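As a reference for what graphlab.distances.cosine returns, a hand-rolled version over two word->weight dictionaries might look like the sketch below (assumes both dictionaries are non-empty):
import math
def cosine_distance(a, b):
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return 1.0 - dot / (norm_a * norm_b)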
End of explanation
knn_model = graphlab.nearest_neighbors.create(people,features=['tfidf'],label='name', distance='cosine')
wc_model = graphlab.nearest_neighbors.create(people,features=['word_count'],label='name', distance='cosine')
Explanation: Build a nearest neighbor model for document retrieval
We now create a nearest-neighbors model and apply it to document retrieval.
End of explanation
knn_model.query(beckham)
print wc_model.query(john)
print knn_model.query(john)
Explanation: Applying the nearest-neighbors model for retrieval
Who is closest to Obama?
End of explanation
swift = people[people['name'] == 'Taylor Swift']
knn_model.query(swift)
jolie = people[people['name'] == 'Angelina Jolie']
knn_model.query(jolie)
arnold = people[people['name'] == 'Arnold Schwarzenegger']
knn_model.query(arnold)
Explanation: As we can see, president Obama's article is closest to the one about his vice-president Biden, and those of other politicians.
Other examples of document retrieval
End of explanation |
5,525 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat convervation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water convervation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt convervation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum convervation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Troposheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'hadgem3-gc31-hh', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: MOHC
Source ID: HADGEM3-GC31-HH
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:14
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat convervation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water convervation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh_water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt convervation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum convervation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
5,526 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Converting $\LaTeX$ to <span style="font-variant
Step1: Now the variable data contains the text that is stored in this file.
Step2: $$ c = \sqrt{a^{2}+b^{2}} $$
Let us look at the output file example.pdf that is produced if we run $\LaTeX$ on this file.
Depending on your operating system, you might have to exchange the command start for another command
that is able to open the file example.pdf.
Step3: Next, we open the file example.html. The scanner we are going to implement has to write its output into this file.
Step4: <hr style="height
Step5: The function end_html writes the closing </body> and </html> tags.
Step6: The function start_math_block starts a math block. This is useful for formulas enclosed in $$. This type of formulas is displayed in a line by itself.
Step7: The function start_math_inline starts an <em style="color
Step8: The function end_math ends a math block.
Step9: The functions start_sum and end_sum write code to display formulas involving sums. For example, to display the expression
$$ \sum\limits_{i=1}^n i^2 $$
we can use the following MathML
Step10: The functions start_sqrt and end_sqrt write code to display formulas involving square roots. For example, to display the expression
$$ \sqrt{a^2 + b^2} $$
we can use the following MathML
Step11: In order to write exponents we have to use the tag <msup>. For example, the expression $a^2$
is equivalent to the following markup
Step12: In order to write fractions we have to use the tag <mfrac>. For example, the expression $\frac{1}{6}$
is equivalent to the following markup
Step13: Arguments of functions like the square root or exponents have to be enclosed in pairs of <mrow> and </mrow> tags.
Step14: Variable names should be enclosed in pairs of <mi> and </mi> tags. For example, the variable $x$ is displayed by the following MathML
Step15: Numbers should be enclosed in pairs of <mn> and </mn> tags. For example, the number $6$ is displayed by the following MathML
Step16: The symbol $\cdot$ is created by the following MathML
Step17: Mathematical operators should be enclosed in pairs of <mo> and </mo> tags. For example, the operator $+$ is displayed by the following MathML
Step18: The symbol $\pi$ is created by the following MathML
Step19: The symbol $\leq$ is created by the following MathML
Step20: The symbol $\geq$ is created by the following MathML
Step21: The function write_any writes a single character unadorned to the output file.
Step22: We will be use the library ply to translate $\LaTeX$ into
<span style="font-variant
Step23: We have to declare all tokens below. We will need tokens for the following parts of the $\LaTeX$ file
Step24: When we see a closing brace } things get difficult. The reason is that we need to know what type of formula is being closed.
Is it a square root, the subscript of a sum, the superscript of a sum, or some part of a fraction. My idea is to use a stack that is attached to the lexer, i.e. we have a variable lexer.stack that stores this information.
Furthermore, the scanner has two different states. Either we are inside a formula, i.e. inside something that is enclosed in dollar symbols, or we are inside text that needs to be echoed unchanged to the output file.
Step25: $\cdots$ lots of token definitions $\cdots$
Step26: The line below is necessary to trick ply.lex into assuming this program is written in an ordinary python file instead of a Jupyter notebook.
Step27: The line below generates the scanner.
Step28: Next, we feed our input string into the generated scanner.
Step29: In order to scan the data that we provided in the last line, we iterate over all tokens generated by our scanner.
Step30: Now you should be able to see a file with the name example.html in your current durectory. | Python Code:
with open('example.tex') as f:
data = f.read()
Explanation: Converting $\LaTeX$ to <span style="font-variant:small-caps;">Html</span>
The purpose of the following exercise is to implement a translator from $\LaTeX$ to
MathML. $\LaTeX$ is a document markup language
that is especially well suited to present text that contains mathematical formulas. MathML is the part of <span style="font-variant:small-caps;">Html</span> that deals with the representation of mathematical formulas. As $\LaTeX$ provides a very rich
document markup language and we can only afford to spend a few hours on this exercise, we confine
ourselves to a small subset of $\LaTeX$. The file example.tex contains some $\LaTeX$. The goal of this exercise is to implement a translator that is able to transform this file into MathML.
We start with reading the file.
End of explanation
print(data)
Explanation: Now the variable data contains the text that is stored in this file.
End of explanation
!start example.pdf
Explanation: $$ c = \sqrt{a^{2}+b^{2}} $$
Let us look at the output file example.pdf that is produced if we run $\LaTeX$ on this file.
Depending on your operating system, you might have to exchange the command start for another command
that is able to open the file example.pdf.
End of explanation
outfile = open('example.html', 'w')
Explanation: Next, we open the file example.html. The scanner we are going to implement has to write its output into this file.
End of explanation
def start_html():
outfile.write('<!doctype html>\n')
outfile.write('<html>\n')
outfile.write('<head>\n')
outfile.write('<script type="text/javascript" ')
outfile.write('src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS-MML_HTMLorMML">\n')
outfile.write('</script>\n')
outfile.write('</head>\n\n')
outfile.write('<body>\n\n')
Explanation: <hr style="height:4px;background-color:blue">
Below are some predefined functions that you can use to create the <span style="font-variant:small-caps;">Html</span> file.
<hr style="height:4px;background-color:blue">
The function start_html writes the header of the <span style="font-variant:small-caps;">Html</span> file
and the opening <body> tag to the file opened above.
End of explanation
def end_html():
outfile.write('</body>\n')
outfile.write('</html>\n')
Explanation: The function end_html writes the closing </body> and </html> tags.
End of explanation
def start_math_block():
outfile.write('<math xmlns="http://www.w3.org/1998/Math/MathML" display="block">\n')
Explanation: The function start_math_block starts a math block. This is useful for formulas enclosed in $$. This type of formula is displayed on a line by itself.
End of explanation
def start_math_inline():
outfile.write('<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline">\n')
Explanation: The function start_math_inline starts an <em style="color:blue">inline formula</em>, i.e. a formula enclosed in $. Formulas of this type are part of the surrounding text.
End of explanation
def end_math():
outfile.write('</math>\n')
Explanation: The function end_math ends a math block.
End of explanation
def start_sum():
outfile.write('<munderover>\n')
outfile.write('<mo>∑</mo>\n')
def end_sum():
outfile.write('</munderover>\n')
Explanation: The functions start_sum and end_sum write code to display formulas involving sums. For example, to display the expression
$$ \sum\limits_{i=1}^n i^2 $$
we can use the following MathML:
<munderover>
<mo>&sum;</mo>
<mrow>
<mi>i</mi>
<mo>=</mo>
<mn>1</mn>
</mrow>
<mrow>
<mi>n</mi>
</mrow>
</munderover>
<msup>
<mi>i</mi>
<mrow>
<mn>2</mn>
</mrow>
</msup>
End of explanation
def start_sqrt():
outfile.write('<msqrt>\n')
def end_sqrt():
outfile.write('</msqrt>\n')
Explanation: The functions start_sqrt and end_sqrt write code to display formulas involving square roots. For example, to display the expression
$$ \sqrt{a^2 + b^2} $$
we can use the following MathML:
<msqrt>
<mrow>
<msup>
<mi>a</mi>
<mrow>
<mn>2</mn>
</mrow>
</msup>
<mo>+</mo>
<msup>
<mi>b</mi>
<mrow>
<mn>2</mn>
</mrow>
</msup>
</mrow>
</msqrt>
End of explanation
def start_super():
outfile.write('<msup>\n')
def end_super():
outfile.write('</msup>\n')
Explanation: In order to write exponents we have to use the tag <msup>. For example, the expression $a^2$
is equivalent to the following markup:
<msup>
<mi>a</mi>
<mrow>
<mn>2</mn>
</mrow>
</msup>
Note that the exponent is enclosed in <mrow> </mrow> tags.
<b>Note</b> that everything, i.e. both the variable and the exponent is enclosed in <msup> </msup> tags.
End of explanation
def start_fraction():
outfile.write('<mfrac>\n')
def end_fraction():
outfile.write('</mfrac>\n')
Explanation: In order to write fractions we have to use the tag <mfrac>. For example, the expression $\frac{1}{6}$
is equivalent to the following markup:
<mfrac>
<mrow>
<mn>1</mn>
</mrow>
<mrow>
<mn>6</mn>
</mrow>
</mfrac>
Note that both the numerator and the denominator are enclosed in <mrow> </mrow> tags.
End of explanation
def start_row():
outfile.write('<mrow>\n')
def end_row():
outfile.write('</mrow>\n')
Explanation: Arguments of functions like the square root or exponents have to be enclosed in pairs of <mrow> and </mrow> tags.
End of explanation
def write_var(v):
outfile.write('<mi>' + v + '</mi>\n')
Explanation: Variable names should be enclosed in pairs of <mi> and </mi> tags. For example, the variable $x$ is displayed by the following MathML:
<mi>x</mi>
The tag name mi is short for math italics.
End of explanation
def write_number(n):
outfile.write('<mn>' + n + '</mn>\n')
Explanation: Numbers should be enclosed in pairs of <mn> and </mn> tags. For example, the number $6$ is displayed by the following MathML:
<mn>6</mn>
End of explanation
def write_times():
outfile.write('<mo>⋅</mo>\n')
Explanation: The symbol $\cdot$ is created by the following MathML:
<mo>&sdot;</mo>
End of explanation
def write_operator(op):
outfile.write('<mo>' + op + '</mo>\n')
Explanation: Mathematical operators should be enclosed in pairs of <mo> and </mo> tags. For example, the operator $+$ is displayed by the following MathML:
<mo>+</mo>
End of explanation
def write_pi():
outfile.write('<mn>π</mn>\n')
Explanation: The symbol $\pi$ is created by the following MathML:
<mn>&pi;</mn>
End of explanation
def write_leq():
outfile.write('<mo>≤</mo>\n')
Explanation: The symbol $\leq$ is created by the following MathML:
<mn>&le;</mn>
End of explanation
def write_geq():
outfile.write('<mo>≥</mo>\n')
Explanation: The symbol $\geq$ is created by the following MathML:
<mn>&ge;</mn>
End of explanation
def write_any(char):
outfile.write(char)
Explanation: The function write_any writes a single character unadorned to the output file.
End of explanation
import ply.lex as lex
Explanation: We will be using the library ply to translate $\LaTeX$ into
<span style="font-variant:small-caps;">MathML</span>.
We only use the scanner that is provided by the module ply.lex.
Hence we import the module ply.lex that contains the scanner generator from ply.
End of explanation
tokens = [ 'HEAD', # r'\\documentclass\{article\}'
'BEGIN_DOCUMENT', # r'\\begin\{document\}'
'END_DOCUMENT', # r'\\end\{document\}'
'DOLLAR_DOLLAR', # r'\$\$'
'DOLLAR', # r'\$'
'...' # many more token declarations here
'ANY', # r'.|\n'
'WS' # r'[ \t]'
]
Explanation: We have to declare all tokens below. We will need tokens for the following parts of the $\LaTeX$ file:
- The $\LaTeX$ file starts with the string \documentclass{article}.
- Next, there is the string \begin{document} that starts the content.
- The string \end{document} ends the content.
- The string $$ starts and ends a formula that is displayed on a line by itself.
- The string $ starts and ends a formula that is displayed as part of the text.
- The string \sum\limits_{ starts the definition of a sum.
- The string \sqrt{ starts the definition of a square root.
- The string \frac{ starts the definition of a fraction.
- A variable taken to a power starts something like a^{.
- $\vdots$
End of explanation
states = [ ('formula', 'exclusive') ]
def t_HEAD(t):
r'\\documentclass\{article\}'
pass
def t_BEGIN_DOCUMENT(t):
r'\\begin\{document\}'
start_html()
def t_END_DOCUMENT(t):
r'\\end\{document\}'
end_html()
def t_DOLLAR_DOLLAR(t):
r'\$\$'
t.lexer.begin('formula')
t.lexer.stack = []
t.lexer.stack.append('INITIAL')
start_math_block()
def t_DOLLAR(t):
r'\$'
t.lexer.begin('formula')
t.lexer.stack = []
t.lexer.stack.append('INITIAL')
start_math_inline()
def t_ANY(t):
r'.|\n'
write_any(t.value)
def t_formula_DOLLAR_DOLLAR(t):
r'\$\$'
t.lexer.begin('INITIAL')
end_math()
def t_formula_DOLLAR(t):
r'\$'
t.lexer.begin('INITIAL')
end_math()
Explanation: When we see a closing brace } things get difficult. The reason is that we need to know what type of formula is being closed.
Is it a square root, the subscript of a sum, the superscript of a sum, or some part of a fraction? My idea is to use a stack that is attached to the lexer, i.e. we have a variable lexer.stack that stores this information.
Furthermore, the scanner has two different states. Either we are inside a formula, i.e. inside something that is enclosed in dollar symbols, or we are inside text that needs to be echoed unchanged to the output file.
End of explanation
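A minimal sketch of that stack idea, added here for illustration (the token names SQRT and RBRACE, and the convention that every opening construct pushes a tag which the closing brace pops, are my own assumptions standing in for the rules elided from the tokens list above):
def t_formula_SQRT(t):
    r'\\sqrt\{'
    t.lexer.stack.append('SQRT')   # remember which construct this '{' belongs to
    start_sqrt()
    start_row()
def t_formula_RBRACE(t):
    r'\}'
    kind = t.lexer.stack.pop()     # decide what we are closing
    if kind == 'SQRT':
        end_row()
        end_sqrt()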
def t_formula_LEQ(t):
r'\\leq'
write_leq()
def t_formula_GEQ(t):
r'\\geq'
write_geq()
def t_formula_PI(t):
r'\\pi'
write_pi()
def t_formula_OPERATOR(t):
r'[.,()+<>=-]'
write_operator(t.value)
def t_formula_WS(t):
r'[ \t]'
pass
def t_formula_error(t):
print(f"Illegal character in state 'formula': '{t.value[0]}'")
t.lexer.skip(1)
Explanation: $\cdots$ lots of token definitions $\cdots$
End of explanation
__file__ = 'main'
Explanation: The line below is necessary to trick ply.lex into assuming this program is written in an ordinary python file instead of a Jupyter notebook.
End of explanation
lexer = lex.lex(debug=True)
Explanation: The line below generates the scanner.
End of explanation
lexer.input(data)
Explanation: Next, we feed our input string into the generated scanner.
End of explanation
def scan(lexer):
for t in lexer:
pass
scan(lexer)
outfile.close()
Explanation: In order to scan the data that we provided in the last line, we iterate over all tokens generated by our scanner.
End of explanation
!start 'example.html'
Explanation: Now you should be able to see a file with the name example.html in your current directory.
End of explanation |
5,527 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python Crash Course Exercises
This is an optional exercise to test your understanding of Python Basics. If you find this extremely challenging, then you probably are not ready for the rest of this course yet and don't have enough programming experience to continue. I would suggest you take another course more geared towards complete beginners, such as Complete Python Bootcamp
Exercises
Answer the questions or complete the tasks outlined in bold below, use the specific method described if applicable.
What is 7 to the power of 4?
Step1: Split this string
Step2: Given the variables
Step3: Given this nested list, use indexing to grab the word "hello"
Step4: Given this nested dictionary grab the word "hello". Be prepared, this will be annoying/tricky
Step5: What is the main difference between a tuple and a list?
Step6: Create a function that grabs the email website domain from a string in the form
Step7: Create a basic function that returns True if the word 'dog' is contained in the input string. Don't worry about edge cases like a punctuation being attached to the word dog, but do account for capitalization.
Step8: Create a function that counts the number of times the word "dog" occurs in a string. Again ignore edge cases.
Step9: Use lambda expressions and the filter() function to filter out words from a list that don't start with the letter 's'. For example
Step10: Final Problem
You are driving a little too fast, and a police officer stops you. Write a function
to return one of 3 possible results | Python Code:
7**4
Explanation: Python Crash Course Exercises
This is an optional exercise to test your understanding of Python Basics. If you find this extremely challenging, then you probably are not ready for the rest of this course yet and don't have enough programming experience to continue. I would suggest you take another course more geared towards complete beginners, such as Complete Python Bootcamp
Exercises
Answer the questions or complete the tasks outlined in bold below, use the specific method described if applicable.
What is 7 to the power of 4?
End of explanation
s = 'Hi there Sam!'
s.split()
Explanation: Split this string:
s = "Hi there Sam!"
into a list.
End of explanation
planet = "Earth"
diameter = 12742
'The diameter of {} is {} kilometers.'.format(planet,diameter)
Explanation: Given the variables:
planet = "Earth"
diameter = 12742
Use .format() to print the following string:
The diameter of Earth is 12742 kilometers.
End of explanation
lst = [1,2,[3,4],[5,[100,200,['hello']],23,11],1,7]
lst[3][1][2][0]
Explanation: Given this nested list, use indexing to grab the word "hello"
End of explanation
d = {'k1':[1,2,3,{'tricky':['oh','man','inception',{'target':[1,2,3,'hello']}]}]}
d['k1'][3]['tricky'][3]['target'][3]
Explanation: Given this nested dictionary grab the word "hello". Be prepared, this will be annoying/tricky
End of explanation
# Tuple is immutable, list items can be changed
Explanation: What is the main difference between a tuple and a list?
End of explanation
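A short demonstration of that difference, added for illustration only:
t = (1, 2, 3)
nums = [1, 2, 3]
nums[0] = 99       # fine: list items can be reassigned
try:
    t[0] = 99      # tuples do not support item assignment
except TypeError as e:
    print(e)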
def domainGet(inp):
return inp.split('@')[1]
domainGet('[email protected]')
Explanation: Create a function that grabs the email website domain from a string in the form:
[email protected]
So for example, passing "[email protected]" would return: domain.com
End of explanation
def findDog(inp):
return 'dog' in inp.lower().split()
findDog('Is there a dog here?')
Explanation: Create a basic function that returns True if the word 'dog' is contained in the input string. Don't worry about edge cases like a punctuation being attached to the word dog, but do account for capitalization.
End of explanation
def countDog(inp):
dog = 0
for x in inp.lower().split():
if x == 'dog':
dog += 1
return dog
countDog('This dog runs faster than the other dog dude!')
Explanation: Create a function that counts the number of times the word "dog" occurs in a string. Again ignore edge cases.
End of explanation
seq = ['soup','dog','salad','cat','great']
list(filter(lambda item:item[0]=='s',seq))
Explanation: Use lambda expressions and the filter() function to filter out words from a list that don't start with the letter 's'. For example:
seq = ['soup','dog','salad','cat','great']
should be filtered down to:
['soup','salad']
End of explanation
def caught_speeding(speed, is_birthday):
if is_birthday:
speed = speed - 5
if speed > 80:
return 'Big Ticket'
elif speed > 60:
return 'Small Ticket'
else:
return 'No Ticket'
caught_speeding(81,True)
caught_speeding(81,False)
Explanation: Final Problem
You are driving a little too fast, and a police officer stops you. Write a function
to return one of 3 possible results: "No ticket", "Small ticket", or "Big Ticket".
If your speed is 60 or less, the result is "No Ticket". If speed is between 61
and 80 inclusive, the result is "Small Ticket". If speed is 81 or more, the result is "Big Ticket". Unless it is your birthday (encoded as a boolean value in the parameters of the function) -- on your birthday, your speed can be 5 higher in all
cases.
End of explanation |
5,528 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Traffic Signs Classification</h1>
<p>Using German Traffic Sign Dataset (http
Step1: <h1>Loading the Data</h1>
Step2: <h1>Data Info</h1>
<p>Spliting the train data as train and validation set</p>
Step3: <h2>Reshape All the Data</h2>
Step4: <h2>Data Normalization</h2>
<p>Process all the data as close as mean 0.0 and standard deviation 1.0.</p>
Step5: <p>Convert all the classes as one hot encode.</p>
Step6: <h2>Some Random Image</h2>
<p>Before normalization</p>
Step7: <p>After normalization</p>
Step8: <h2>Build the Model with Keras</h2>
Step9: <h2>Model Evaluation</h2>
Step10: <h2>Predicted Classes</h2> | Python Code:
import matplotlib.pyplot as plt
import random as rn
import numpy as np
from sklearn.model_selection import train_test_split
import pickle
from keras.models import Sequential
from keras.layers import Dense, Input, Activation
from keras.utils import np_utils
%matplotlib inline
Explanation: <h1>Traffic Signs Classification</h1>
<p>Using German Traffic Sign Dataset (http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset). You can download the data from <a href="https://d17h27t6h515a5.cloudfront.net/topher/2016/November/581faac4_traffic-signs-data/traffic-signs-data.zip"> here</a></p>
<h1>Imports</h1>
End of explanation
train_data = 'data/train.p'
test_data = 'data/test.p'
with open(train_data, 'rb') as f:
train = pickle.load(f)
with open(test_data, 'rb') as f:
test = pickle.load(f)
Explanation: <h1>Loading the Data</h1>
End of explanation
X_train, X_val, Y_train, Y_val = train_test_split(train['features'], train['labels'], test_size=0.3, random_state=0)
X_test, Y_test = test['features'], test['labels']
n_train = X_train.shape[0]
n_val = X_val.shape[0]
n_test = X_test.shape[0]
image_shape = X_train.shape[1], X_train.shape[2]
n_channels = X_train.shape[3]
n_classes = np.unique(train['labels']).size
print('Train data size:\t\t\t', n_train)
print('Validation data size:\t\t\t', n_val)
print('Test data size:\t\t\t\t', n_test)
print('Image shape:\t\t\t\t', image_shape)
print('Number of color channels in image:\t', n_channels)
print('Number of classes:\t\t\t', n_classes)
Explanation: <h1>Data Info</h1>
<p>Splitting the train data into a train and a validation set</p>
End of explanation
def reshape(arr):
return arr.reshape(-1, image_shape[0]*image_shape[1]*n_channels)
X_train_flat = reshape(X_train)
X_val_flat = reshape(X_val)
X_test_flat = reshape(X_test)
def print_info(st, arr_1, arr_2):
print('{} data shape before reshape: {}, and after reshape: {}'.format(st, arr_1.shape, arr_2.shape))
print_info('Train', X_train, X_train_flat)
print_info('Validation', X_val, X_val_flat)
print_info('Test', X_test, X_test_flat)
Explanation: <h2>Reshape All the Data</h2>
End of explanation
def normalize(arr):
arr = arr.astype('float32')
return (arr - np.mean(arr))/np.std(arr)
X_train_norm = normalize(X_train_flat)
X_val_norm = normalize(X_val_flat)
X_test_norm = normalize(X_test_flat)
def print_info(st, arr_1, arr_2):
print('{} Data: Before normalization : type: {}, mean: {}, std: {}. After processing, type: {}, mean: {}, std: {}'. format(st, arr_1.dtype, round(np.mean(arr_1),2), round(np.std(arr_1),2), arr_2.dtype, round(np.mean(arr_2),2), round(np.std(arr_2),2)))
print_info('Train', X_train_flat, X_train_norm)
print_info('Validation', X_val_flat, X_val_norm)
print_info('Test', X_test_flat, X_test_norm)
Explanation: <h2>Data Normalization</h2>
<p>Process all the data as close as mean 0.0 and standard deviation 1.0.</p>
End of explanation
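Put as a formula (a note added for clarity; $\mu$ and $\sigma$ are the mean and standard deviation of the whole array, exactly as computed in the normalize function above):
$$ x' = \frac{x - \mu}{\sigma} $$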
def make_categorical(arr):
return np_utils.to_categorical(arr, n_classes)
Y_train_cat = make_categorical(Y_train)
Y_val_cat = make_categorical(Y_val)
Y_test_cat = make_categorical(Y_test)
Explanation: <p>Convert all the class labels to one-hot encoding.</p>
End of explanation
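For intuition, and purely as an illustration, one-hot encoding turns a class index into a vector with a single one:
print(np_utils.to_categorical([2], 5))  # [[0. 0. 1. 0. 0.]]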
trc = rn.sample(range(n_test), 16)
def plot_images(arr_1, arr_2, pred=False):
fig, axes = plt.subplots(4, 4, figsize=(10,10))
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
if len(arr_1.shape) == 2:
ax.imshow(arr_1[trc[i]].reshape(32,32,3))
ax.set_xlabel('true:{}'.format(arr_2[trc[i]]))
else:
ax.imshow(arr_1[trc[i]])
ax.set_xlabel('true:{}, pred:{}'.format(arr_2[trc[i]], pred[trc[i]]))
ax.set_xticks([])
ax.set_yticks([])
plot_images(X_train_flat, Y_train)
Explanation: <h2>Some Random Image</h2>
<p>Before normalization</p>
End of explanation
plot_images(X_train_norm, Y_train)
Explanation: <p>After normalization</p>
End of explanation
model = Sequential()
model.add(Dense(256, activation='relu', input_shape=(32*32*3,)))
#model.add(Dense(800, activation='relu',input_shape=(1555,)))
model.add(Dense(43, activation='softmax'))
model.summary()
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(X_train_norm, Y_train_cat, batch_size=64, epochs=20, verbose=1, validation_data=(X_val_norm, Y_val_cat))
history.history['val_acc'][-1]
Explanation: <h2>Build the Model with Keras</h2>
End of explanation
score, acc = model.evaluate(X_test_norm, Y_test_cat, batch_size=64, verbose=0)
print('Score:\t', score)
print('Acc:\t{}%'.format(round(acc*100)))
Explanation: <h2>Model Evaluation</h2>
End of explanation
Y_pred = model.predict_classes(X_test_norm, batch_size=64, verbose=0)
plot_images(X_test, Y_test, Y_pred)
Explanation: <h2>Predicted Classes</h2>
End of explanation |
5,529 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: AST - Abstract Syntax Tree
For reasons, I want to parse a python source code file and extract certain elements. The case in point involves looking for all functions with a given decorator applied and return certain attributes of the function declaration.
When assuming a certain coding style, this could probably be done with a handful of lines or even a regex. This becomes problematic if youu want to be able to properly parse any and all (valid) python code. You'll soon find yourself reinventing the (lexer-)wheel which is already available in Python itsself.
Thanks to others, there is a built-in ast module which parses Python source code into an AST. The AST can then be inspected and modified, and even recompiled into source code. In our case we are only interested in inspection.
Step2: The ast module "helps Python applications to process trees of the Python abstract syntax grammar. The abstract syntax itself might change with each Python release; this module helps to find out programmatically what the current grammar looks like."
The tree of objects all inherit from ast.AST and the actual types and their properties can be found in the so called ASDL. The actual grammar of python as defined in the Zephyr Abstract Syntax Definition Language. The grammar file can be found in the Python sources at Parser/python.asdl.
Step3: The astunparse module helps in pretty printing the tree, which we rely heavy upon during exploration.
Step4: We want to look at function definitions which are aptly named FunctionDef in the ASDL and represented as FunctionDef objects in the tree. Looking at the ASDL we see the following deifnition for FunctionDef (reformatted)
Step5: For our purposes we should be able to use the walk method, I find it simpler to use for now. Let;s see what happens if we grab those FunctionDef objects and inspect them in the same way. Using the unparse() methof of astunparse we can transform it back into source code for extra fun.
Step6: We wanted to only grab functions who have a certain decorator, so we need to inspect the decorator_list attribute of the FunctionDef class.
Step7: So looking more closely there is a different representation in the AST for a single keyword (@function) decorator as there is for a compound (@Class.method).
Compare the decorator in my_function
Step9: The actual sources I want to parse have an additional complication, the decorator functions have arguments passed into them. And I want to know what's in them as well. So let's switch to some actual source code and see how to do that. I have removed the body of the function as we are only interested in the decorator now.
Step10: We find the decorator_list to contain a ast.Call object rather than a Name or Attribute. This corresponds to the signature of the called decorator function. I am interested in the first positional argument as well as the keyword arguments. Let's grab the [0] element of the decorator list to simplify.
Step11: Time to bring it all together and write a function that takes a filename and a decorator as argument and spits out a list of tuples which hold the | Python Code:
import ast
example_module = '''
@my_decorator
def my_function(my_argument):
My Docstring
my_value = 420
return my_value
def foo():
pass
@Some_decorator
@Another_decorator
def bar():
pass
@MyClass.subpackage.my_deco_function
def baz():
pass'''
Explanation: AST - Abstract Syntax Tree
For reasons, I want to parse a Python source code file and extract certain elements. The case in point involves looking for all functions with a given decorator applied and returning certain attributes of the function declaration.
When assuming a certain coding style, this could probably be done with a handful of lines or even a regex. This becomes problematic if you want to be able to properly parse any and all (valid) Python code. You'll soon find yourself reinventing the (lexer-)wheel which is already available in Python itself.
Thanks to others, there is a built-in ast module which parses Python source code into an AST. The AST can then be inspected and modified, and even recompiled into source code. In our case we are only interested in inspection.
End of explanation
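As a small aside of my own (not needed for the rest of this notebook), the "recompiled" part can be shown in two lines, since compile() accepts an AST directly:
small_tree = ast.parse('x = 1 + 2')
exec(compile(small_tree, filename='<ast>', mode='exec'))  # compile the AST to a code object and run it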
tree = ast.parse(example_module)
print(tree) # the object
# Built in dump method shows the actual content of the entire tree
print(ast.dump(ast.parse(example_module)))
Explanation: The ast module "helps Python applications to process trees of the Python abstract syntax grammar. The abstract syntax itself might change with each Python release; this module helps to find out programmatically what the current grammar looks like."
The tree of objects all inherit from ast.AST, and the actual node types and their properties can be found in the so-called ASDL: the actual grammar of Python, defined in the Zephyr Abstract Syntax Definition Language. The grammar file can be found in the Python sources at Parser/python.asdl.
End of explanation
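One quick way to "find out programmatically what the current grammar looks like" (my own aside) is the _fields attribute that every node class carries; it mirrors the ASDL definition:
print(ast.FunctionDef._fields)  # e.g. ('name', 'args', 'body', 'decorator_list', 'returns', ...)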
import astunparse
print(astunparse.dump(tree))
Explanation: The astunparse module helps in pretty printing the tree, which we rely on heavily during exploration.
End of explanation
class MyVisitor(ast.NodeVisitor):
def generic_visit(self, node):
print(f'Nodetype: {type(node).__name__:{16}} {node}')
ast.NodeVisitor.generic_visit(self, node)
v = MyVisitor()
print('Using NodeVisitor (depth first):')
v.visit(tree)
print('\nWalk()ing the tree breadth first:')
for node in ast.walk(tree):
print(f'Nodetype: {type(node).__name__:{16}} {node}')
Explanation: We want to look at function definitions which are aptly named FunctionDef in the ASDL and represented as FunctionDef objects in the tree. Looking at the ASDL we see the following definition for FunctionDef (reformatted):
FunctionDef(identifier name,
arguments args,
stmt* body,
expr* decorator_list,
expr? returns,
string? docstring)
Which seems to correspond to the structure of the object in the AST as shown in the astunparse dump above. There is some documentation at a place called Green Tree Snakes which explains the components of the FunctionDef object.
Traversing and inspecting the tree
There are two ways to work with the tree. The easiest is ast.walk(), which "Recursively yield[s] all descendant nodes in the tree starting at node (including node itself), in no specified order" and apparently does so breadth first. Alternatively you can subclass the ast.NodeVisitor class. This class provides a visit() method which does a depth-first traversal. You can override visit_<Class_Name> methods, which are called whenever the traversal hits a node of that class.
End of explanation
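To make that visit_<Class_Name> hook concrete, here is a minimal sketch of my own (it is not used further below):
class FunctionDefVisitor(ast.NodeVisitor):
    def visit_FunctionDef(self, node):
        print('found function:', node.name)  # called only for FunctionDef nodes
        self.generic_visit(node)             # keep descending into the function body
FunctionDefVisitor().visit(tree)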
for node in ast.walk(tree):
if isinstance(node, ast.FunctionDef):
print(f'Nodetype: {type(node).__name__:{16}} {node}')
print(astunparse.unparse(node))
Explanation: For our purposes we should be able to use the walk method; I find it simpler to use for now. Let's see what happens if we grab those FunctionDef objects and inspect them in the same way. Using the unparse() method of astunparse we can transform them back into source code for extra fun.
End of explanation
for node in ast.walk(tree):
if isinstance(node, ast.FunctionDef):
decorators = [d.id for d in node.decorator_list]
print(node.name, decorators)
Explanation: We wanted to grab only functions that have a certain decorator, so we need to inspect the decorator_list attribute of the FunctionDef class.
End of explanation
def flatten_attr(node):
if isinstance(node, ast.Attribute):
return str(flatten_attr(node.value)) + '.' + node.attr
elif isinstance(node, ast.Name):
return str(node.id)
else:
pass
for node in ast.walk(tree):
if isinstance(node, ast.FunctionDef):
found_decorators = []
for decorator in node.decorator_list:
if isinstance(decorator, ast.Name):
found_decorators.append(decorator.id)
elif isinstance(decorator, ast.Attribute):
found_decorators.append(flatten_attr(decorator))
print(node.name, found_decorators)
Explanation: So looking more closely there is a different representation in the AST for a single keyword (@function) decorator as there is for a compound (@Class.method).
Compare the decorator in my_function:
decorator_list=[Name(
id='my_decorator',
ctx=Load())]
against the compound decorator in baz:
decorator_list=[Attribute(
value=Attribute(
value=Name(
id='MyClass',
ctx=Load()),
attr='subpackage',
ctx=Load()),
attr='my_deco_function',
ctx=Load())]
So we need to modify our tree walk to accommodate this. When the top-level element in the decorator_list is of type Name, we grab the id and are done with it. If it is of type Attribute we need to do some extra work. From the ASDL we can see that Attribute is a nested element:
Attribute(expr value, identifier attr, expr_context ctx)
Assuming it is nested ast.Attribute nodes with an ast.Name at the root, we can define a flattening function.
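A quick check of the flattening function on its own (my addition; the dotted name is parsed in isolation purely for illustration):
expr = ast.parse('MyClass.subpackage.my_deco_function').body[0].value
print(flatten_attr(expr))  # 'MyClass.subpackage.my_deco_function'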
End of explanation
source = """
@Route.get(
    r"/projects/{project_id}/snapshots",
    description="List snapshots of a project",
    parameters={
        "project_id": "Project UUID",
    },
    status_codes={
        200: "Snasphot list returned",
        404: "The project doesn't exist"
    })
def list(request, response):
    pass
"""
print(astunparse.dump(ast.parse(source)))
Explanation: The actual sources I want to parse have an additional complication: the decorator functions have arguments passed into them, and I want to know what's in them as well. So let's switch to some actual source code and see how to do that. I have removed the body of the function as we are only interested in the decorator now.
End of explanation
complex_decorator = ast.parse(source).body[0].decorator_list[0]
print(astunparse.dump(complex_decorator))
decorator_name = flatten_attr(complex_decorator.func)
decorator_path = complex_decorator.args[0].s
for kw in complex_decorator.keywords:
if kw.arg == 'description':
decorator_description = kw.value.s
if kw.arg == 'parameters':
decorator_parameters = ast.literal_eval(astunparse.unparse(kw.value))
if kw.arg == 'status_codes':
decorator_statuscodes = ast.literal_eval(astunparse.unparse(kw.value))
print(decorator_name, decorator_path)
print('Parameters:')
for p in decorator_parameters.keys():
print(' ' + str(p) + ': ' + decorator_parameters[p])
print('Status Codes:')
for sc in decorator_statuscodes.keys():
print(' ' + str(sc) + ': ' + decorator_statuscodes[sc])
Explanation: We find the decorator_list to contain an ast.Call object rather than a Name or Attribute. This corresponds to the signature of the called decorator function. I am interested in the first positional argument as well as the keyword arguments. Let's grab the [0] element of the decorator list to simplify.
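One caveat worth noting (my addition): on Python 3.8 and newer, string literals are parsed as ast.Constant nodes, so reading the argument through .value is the more future-proof spelling. A hedged variant that should cover both old and new parsers:
first_arg = complex_decorator.args[0]
decorator_path = first_arg.value if isinstance(first_arg, ast.Constant) else first_arg.s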
End of explanation
import collections
Route = collections.namedtuple('Route', 'filename function_name path description parameters status_codes')
def extract_routes(file, decorator_name):
routes = []
filename = file
with open(file) as f:
try:
tree = ast.parse(f.read())
except:
return routes
for node in ast.walk(tree):
if isinstance(node, ast.FunctionDef):
funcname = node.name
for d in node.decorator_list:
if isinstance(d, ast.Call):
if flatten_attr(d.func) == decorator_name:
route_path = d.args[0].s
description = None
parameters = None
statuscodes = None
for kw in d.keywords:
if kw.arg == 'description':
description = kw.value.s
if kw.arg == 'parameters':
parameters = ast.literal_eval(astunparse.unparse(kw.value))
if kw.arg == 'status_codes':
statuscodes = ast.literal_eval(astunparse.unparse(kw.value))
r = Route(filename, funcname, route_path, description, parameters, statuscodes)
routes.append(r)
return routes
get_routes = []
from pathlib import Path
pathlist = Path('./controller').glob('*.py')
for path in pathlist:
# because path is object not string
filename = str(path)
get_routes += extract_routes(filename, 'Route.post')
for route in get_routes:
print(f'{route.filename} {route.function_name:{20}} {route.path:{40}}')
Explanation: Time to bring it all together and write a function that takes a filename and a decorator name as arguments and spits out a list of tuples which hold the:
Function name (str)
description for the given decorator (str)
parameters for the decorator (dict)
status codes for the decorator (dict)
for every function in the source file which is decorated with that decorator.
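A hypothetical call on a single file (the path and decorator name below are made up for illustration, not taken from the project):
routes = extract_routes('controller/project_handler.py', 'Route.get')
for r in routes:
    print(r.function_name, r.path, r.description)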
End of explanation |
5,530 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Run the factor test using within-industry ranking, and compare it with the regression-neutralized version and the raw factor values. This part follows QEPM, p. 117.
Set the DB_URI environment variable to point to the database.
Parameter setup
Step1: Sample factor
Below we compare three approaches and examine how well each of them avoids industry concentration:
ranking by the raw factor;
ranking the raw factor within each industry;
ranking the residuals from regressing the raw factor on industry dummy variables.
1. Raw factor ranking
Step2: For the raw factor, if we do not apply any industry adjustment, the stocks with the largest values of our chosen alpha factor CFO2EV are concentrated in banks and the broader financial sector.
2. Within-industry ranked factor
Here we use the adjusted Shenwan (SW) industry classification as the industry label:
Step3: With within-industry ranking, the industry distribution becomes much more even.
3. Industry-neutralizing the factor via regression
Another approach is to run a linear regression with industry dummy variables and use the residuals as a replacement value for the factor, making it industry neutral:
Step4: We find that this approach does not work very well. The adjustment is small, and the concentration in the financial sector remains.
Backtest results
We evaluate the three methods with a simple equal-weight strategy that goes long the top 20% of stocks and shorts the bottom 20%: | Python Code:
%matplotlib inline
import os
import pandas as pd
import numpy as np
from PyFin.api import *
from alphamind.api import *
factor = "EMA5D"
universe = Universe('zz800')
start_date = '2020-01-01'
end_date = '2020-02-21'
freq = '10b'
category = 'sw'
level = 1
horizon = map_freq(freq)
engine = SqlEngine(os.environ['DB_URI'])
ref_dates = makeSchedule(start_date, end_date, freq, 'china.sse')
sample_date = '2018-01-04'
sample_codes = engine.fetch_codes(sample_date, universe)
sample_industry = engine.fetch_industry(sample_date, sample_codes, category=category, level=level)
sample_industry.head()
Explanation: Run the factor test using within-industry ranking, and compare it with the regression-neutralized version and the raw factor values. This part follows QEPM, p. 117.
Set the DB_URI environment variable to point to the database.
Parameter setup
End of explanation
factor1 = {'f1': CSQuantiles(factor)}
sample_factor1 = engine.fetch_factor(sample_date, factor1, sample_codes)
sample_factor1 = pd.merge(sample_factor1, sample_industry[['code', 'industry']], on='code')
sample_factor1.sort_values('f1', ascending=False).head(15)
Explanation: Sample factor
Below we compare three approaches and examine how well each of them avoids industry concentration:
ranking by the raw factor;
ranking the raw factor within each industry;
ranking the residuals from regressing the raw factor on industry dummy variables.
1. Raw factor ranking
End of explanation
factor2 = {'f2': CSQuantiles(factor)}
sample_factor2 = engine.fetch_factor(sample_date, factor2, sample_codes)
sample_factor2 = pd.merge(sample_factor2, sample_industry[['code', 'industry']], on='code')
sample_factor2.sort_values('f2', ascending=False).head(15)
Explanation: For the raw factor, if we do not apply any industry adjustment, the stocks with the largest values of our chosen alpha factor CFO2EV are concentrated in banks and the broader financial sector.
2. Within-industry ranked factor
Here we use the adjusted Shenwan (SW) industry classification as the industry label:
End of explanation
factor3 = {'f3': factor}
sample_factor3 = engine.fetch_factor(sample_date, factor3, sample_codes)
risk_cov, risk_exp = engine.fetch_risk_model(sample_date, sample_codes)
sample_factor3 = pd.merge(sample_factor3, sample_industry[['code', 'industry']], on='code')
sample_factor3 = pd.merge(sample_factor3, risk_exp, on='code')
raw_factors = sample_factor3['f3'].values
industry_exp = sample_factor3[industry_styles + ['COUNTRY']].values.astype(float)
processed_values = factor_processing(raw_factors, pre_process=[], risk_factors=industry_exp, post_process=[percentile])
sample_factor3['f3'] = processed_values
sample_factor3 = sample_factor3[['code', 'f3', 'industry']]
sample_factor3.sort_values('f3', ascending=False).head(15)
Explanation: With within-industry ranking, the industry distribution becomes much more even.
3. Industry-neutralizing the factor via regression
Another approach is to run a linear regression with industry dummy variables and use the residuals as a replacement value for the factor, making it industry neutral:
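A bare-bones sketch of the idea (my illustration, not part of the notebook; industry_dummies and raw_factor_values are hypothetical arrays standing in for the exposure matrix and factor vector):
import numpy as np
X = industry_dummies           # hypothetical (n_stocks, n_industries) 0/1 exposure matrix
y = raw_factor_values          # hypothetical (n_stocks,) raw factor vector
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta       # industry-neutral replacement for the factor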
End of explanation
factors = {
'raw': CSQuantiles(factor),
'peer quantile': CSQuantiles(factor),
'risk neutral': LAST(factor)
}
df_ret = pd.DataFrame(columns=['raw', 'peer quantile', 'risk neutral'])
df_ic = pd.DataFrame(columns=['raw', 'peer quantile', 'risk neutral'])
for date in ref_dates:
ref_date = date.strftime('%Y-%m-%d')
codes = engine.fetch_codes(ref_date, universe)
total_factor = engine.fetch_factor(ref_date, factors, codes)
risk_cov, risk_exp = engine.fetch_risk_model(ref_date, codes)
industry = engine.fetch_industry(ref_date, codes, category=category, level=level)
rets = engine.fetch_dx_return(ref_date, codes, horizon=horizon, offset=1)
total_factor = pd.merge(total_factor, industry[['code', 'industry']], on='code')
total_factor = pd.merge(total_factor, risk_exp, on='code')
total_factor = pd.merge(total_factor, rets, on='code').dropna()
raw_factors = total_factor['risk neutral'].values
industry_exp = total_factor[industry_styles + ['COUNTRY']].values.astype(float)
processed_values = factor_processing(raw_factors, pre_process=[], risk_factors=industry_exp, post_process=[percentile])
total_factor['risk neutral'] = processed_values
total_factor[['f1_d', 'f2_d', 'f3_d']] = (total_factor[['raw', 'peer quantile', 'risk neutral']] >= 0.8) * 1.
total_factor.loc[total_factor['raw'] <= 0.2, 'f1_d'] = -1.
total_factor.loc[total_factor['peer quantile'] <= 0.2, 'f2_d'] = -1.
total_factor.loc[total_factor['risk neutral'] <= 0.2, 'f3_d'] = -1.
total_factor[['f1_d', 'f2_d', 'f3_d']] /= np.abs(total_factor[['f1_d', 'f2_d', 'f3_d']]).sum(axis=0)
ret_values = total_factor.dx.values @ total_factor[['f1_d', 'f2_d', 'f3_d']].values
df_ret.loc[date] = ret_values
ic_values = total_factor[['dx', 'raw', 'peer quantile', 'risk neutral']].corr().values[0, 1:]
df_ic.loc[date] = ic_values
print(f"{date} is finished")
df_ret.cumsum().plot(figsize=(14, 7))
df_ic.cumsum().plot(figsize=(14, 7))
Explanation: We find that this approach does not work very well. The adjustment is small, and the concentration in the financial sector remains.
Backtest results
We evaluate the three methods with a simple equal-weight strategy that goes long the top 20% of stocks and shorts the bottom 20%:
End of explanation |
5,531 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bandpass calibration demonstration
Step1: Construct LOW core configuration
Step2: We create the visibility. This just makes the uvw, time, antenna1, antenna2, weight columns in a table
Step3: Read the venerable test image, constructing an image
Step4: Predict the visibility from this image
Step5: Create a gain table with modest amplitude and phase errors, smoothed over 16 channels
Step6: Plot the gains applied
Step7: Solve for the gains
Step8: Plot the solved relative to the applied. Declare antenna 0 to be the reference. | Python Code:
%matplotlib inline
import os
import sys
sys.path.append(os.path.join('..', '..'))
from data_models.parameters import arl_path
results_dir = arl_path('test_results')
from matplotlib import pylab
import numpy
from astropy.coordinates import SkyCoord
from astropy import units as u
from astropy.wcs.utils import pixel_to_skycoord
from matplotlib import pyplot as plt
from wrappers.serial.visibility.base import create_blockvisibility
from wrappers.serial.calibration.operations import apply_gaintable
from wrappers.serial.visibility.operations import copy_visibility
from wrappers.serial.calibration.calibration import solve_gaintable
from wrappers.serial.visibility.coalesce import convert_blockvisibility_to_visibility, \
convert_visibility_to_blockvisibility
from wrappers.serial.calibration.operations import create_gaintable_from_blockvisibility
from wrappers.serial.image.operations import show_image
from wrappers.serial.simulation.testing_support import create_test_image, simulate_gaintable
from wrappers.serial.simulation.configurations import create_named_configuration
from wrappers.serial.imaging.base import create_image_from_visibility
from workflows.serial.imaging.imaging_serial import predict_list_serial_workflow
from data_models.polarisation import PolarisationFrame
pylab.rcParams['figure.figsize'] = (8.0, 8.0)
pylab.rcParams['image.cmap'] = 'rainbow'
import logging
log = logging.getLogger()
log.setLevel(logging.DEBUG)
log.addHandler(logging.StreamHandler(sys.stdout))
Explanation: Bandpass calibration demonstration
End of explanation
lowcore = create_named_configuration('LOWBD2-CORE')
Explanation: Construct LOW core configuration
End of explanation
times = numpy.zeros([1])
vnchan = 128
frequency = numpy.linspace(0.8e8, 1.2e8, vnchan)
channel_bandwidth = numpy.array(vnchan*[frequency[1]-frequency[0]])
phasecentre = SkyCoord(ra=+15.0 * u.deg, dec=-45.0 * u.deg, frame='icrs', equinox='J2000')
bvt = create_blockvisibility(lowcore, times, frequency, channel_bandwidth=channel_bandwidth,
weight=1.0, phasecentre=phasecentre, polarisation_frame=PolarisationFrame('stokesI'))
Explanation: We create the visibility. This just makes the uvw, time, antenna1, antenna2, weight columns in a table
End of explanation
m31image = create_test_image(frequency=frequency, cellsize=0.0005)
nchan, npol, ny, nx = m31image.data.shape
m31image.wcs.wcs.crval[0] = bvt.phasecentre.ra.deg
m31image.wcs.wcs.crval[1] = bvt.phasecentre.dec.deg
m31image.wcs.wcs.crpix[0] = float(nx // 2)
m31image.wcs.wcs.crpix[1] = float(ny // 2)
fig=show_image(m31image)
Explanation: Read the venerable test image, constructing an image
End of explanation
vt = convert_blockvisibility_to_visibility(bvt)
vt = predict_list_serial_workflow(bvt, m31image, context='timeslice')
bvt = convert_visibility_to_blockvisibility(vt)
Explanation: Predict the visibility from this image
End of explanation
gt = create_gaintable_from_blockvisibility(bvt)
gt = simulate_gaintable(gt, phase_error=1.0, amplitude_error=0.1, smooth_channels=16)
Explanation: Create a gain table with modest amplitude and phase errors, smoothed over 16 channels
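A quick shape check can be useful here (my addition); judging from the indexing used in the plots below, the gain array appears to be laid out as (time, antenna, channel, receptor, receptor):
print(gt.gain.shape)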
End of explanation
plt.clf()
for ant in range(4):
amp = numpy.abs(gt.gain[0,ant,:,0,0])
plt.plot(amp)
plt.title('Amplitude of bandpass')
plt.xlabel('channel')
plt.show()
plt.clf()
for ant in range(4):
phase = numpy.angle(gt.gain[0,ant,:,0,0])
plt.plot(phase)
plt.title('Phase of bandpass')
plt.xlabel('channel')
plt.show()
cbvt = copy_visibility(bvt)
cbvt = apply_gaintable(cbvt, gt)
Explanation: Plot the gains applied
End of explanation
gtsol=solve_gaintable(cbvt, bvt, phase_only=False)
Explanation: Solve for the gains
End of explanation
plt.clf()
for ant in range(4):
amp = numpy.abs(gtsol.gain[0,ant,:,0,0]/gt.gain[0,ant,:,0,0])
plt.plot(amp)
plt.title('Relative amplitude of bandpass')
plt.xlabel('channel')
plt.show()
plt.clf()
for ant in range(4):
refphase = numpy.angle(gtsol.gain[0,0,:,0,0]/gt.gain[0,0,:,0,0])
phase = numpy.angle(gtsol.gain[0,ant,:,0,0]/gt.gain[0,ant,:,0,0])
plt.plot(phase-refphase)
plt.title('Relative phase of bandpass')
plt.xlabel('channel')
plt.show()
Explanation: Plot the solved relative to the applied. Declare antenna 0 to be the reference.
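As an optional numeric check (my addition), the amplitude of the solved-to-applied gain ratio should sit close to 1 for every antenna and channel:
ratio = gtsol.gain[0, :, :, 0, 0] / gt.gain[0, :, :, 0, 0]
print('max |amplitude ratio - 1|:', numpy.max(numpy.abs(numpy.abs(ratio) - 1.0)))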
End of explanation |
5,532 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Cargando datos en Pandas
Objetivo
La finalidad de este capítulo es mostrar como cargar datos desde un archivo tipo csv, pero Pandas soporta más tipo de archivos. Debido a que el módulo tiene estructudas de datos que facilitan la manipulación de los datos al leerse desde un archivo, entonces explico un poco sobre el módulo Numpy y las Series y DataFrame que son estruturas de datos en Pandas.
Algunas comparaciones con R
La primera tarea que yo analizar datos es evidentemente cargar los datos que proviene de alguna fuente( base de datos) o archivo. Considerando la semejanza de Pandas con el uso de R, para mi las primeras tareas después de cargar los datos es revisar propiedades como tamaño o dim(), leer los primeros o los últimos registros head() o tail(), explorar el tipo de variables que tienen la muestra de datos struct() o explorar la existencia de registros nulos o NA, ver el resumen de los estadísticos básicos de las variables. Desde mi punto de vista, conociendo esa primera información uno puede iniciar un análisis exploratorio mucho más robusto.
El módulo Pandas tiene estructuras de datos para procesar la información con nombre similar a las de R, son los DataFrame. Las librerías de R que permiten operar los datos del mismo modo que Pandas son ddply y reshape2, el resto de operaciones para manipular DataFrame tienen un equivalente en las dos tecnologías.
Algunas cosas previas a cargar datos.
Debido a que al cargar los datos desde alguna archivo o fuente, ya sea en formato csv, excel, xml, json o desde algúna base Mysql, etc. Serán cargados en Ipython como Data.Frame o Series, pero estas estructuras en Pandas tienen como base el manejo de matrices con Numpy. Por eso comento algunas cosas sobre Numpy.
En resumen, creo que es bueno antes de cargar datos, saber algo sobre las estructuras de datos en Pandas y a su vez creo importante saber un poco sobre lo que permite la creación de las estructuras en Pandas, que es el módulo Numpy.
Numpy
En breve, este módulo permite el tratamiento de matrices y arrays, cuenta con funciones estándar de matemáticas, algunas funciones estadísticas básicas, permite la generación de números aleatorios, operaciones de álgebra lineal y análisis de Fourier, entre otras operaciones y funciones. Quizás esto no suena muy interesante, pero el módulo tiene la calidad suficiente para hacer cálculos no triviales y a buena velocidad, con respecto a C o C++.
Siempre los ejemplos que se hacen de Numpy explican cosas como
Step3: Para ver un ejemplo de su uso, realiza lo siguiente
Step4: El ejemplo anterior muestra el uso de Numpy como una biblioteca o módulo que permite hacer manipulación de matrices y como la calidad del procesamiento numérico perminte considarar su uso en problemas de métodos numéricos.
Problema del laberito con Numpy
El problema tiene el mismo nombre que un ejemplo del uso del algoritmo Backtracking para solución de problemas por "fuerza bruta", respecto a esa técnica se puede consultar el brog de JC Gómez.
El ejemplo que comparto tienen que ver más con el uso de Cadenas de Markov, el ejemplo es solo para mostrar como funcionan y como se hace uso de ellas para resolver el problema con ciertos supuestos iniciales y como hacerlo con numpy.
Suponemos que se colocará un dispositivo que se puede desplazar y pasar de área como en el cuadro siguiente
Step5: Si se procede ha seguir incrementando los estados o cambios se "estabiliza"; es decir, el cambio en la probabilidad de estar en la caja 1 después de tercer movimiento es 37.53%, lo cual tiende a quedarse en 37.5% al incrementar los movimientos.
Ejemplo de PageRank de manera rupestre, calcuado con Numpy
El algoritmo para generar un Ranking en las páginas web fue desarrollado e implementado por los fundadores de Google, no profundizo en los detalles pero en general la idea es la siguiente
Step6: El ejemplo anterior tiene 3 vectores propios y 3 valores propios, pero no refleja el problema en general de la relación entre las páginas web, ya que podría suceder que una página no tienen referencias a ninguan otra salvo a si misma o tienen solo referencias a ella y ella a ninguna, ni a si misma. Entonces un caso más general sería representado como el gráfo siguiente
Step7: Se tienen como resultado que en 17 iteraciones el algoritmo indica que el PageRank no es totalmente dominado por el nodo E y D, pese a que son las "páginas" que tienen mayor valor, pero las otras 3 resultan muy cercanas en importancia. Se aprecia como se va ajustando el valor conforme avanzan las etapas de los cálculos.
Data.Frame y Series
Los dos elementos principales de Pandas son Data.Frame y Series. El nombre Data.Frame es igual que el que se usa en R project y en escencia tiene la misma finalidad de uso, para la carga y procesamiento de datos.
Los siguientes ejemplos son breves, para conocer con detalle las propiedades, operaciones y caracteristicas que tienen estos dos objetos se puede consultar el libro Python for Data Analysis o el sitio oficial del módulo Pandas.
Primero se carga el módulo y los objetos y se muestran como usarlos de manera sencilla.
Step8: Los ejemplos anteriores muestras que es muy sencillo manipular los datos con Pandas, ya sea con Series o con DataFrame. Para mayor detalle de las funciones lo recomendable es consultar las referencias mencionadas anteriormente.
Cargar datos desde diversos archivos y estadísticas sencillas.
Step9: Viendo los datos que se tienen, es natural preguntarse algo al respecto. Lo sencillo es, ¿cuál es el estado que presenta mayor cantidad de inscripciones de mujeres a ingeniería?, pero también se puede agregar a la pregunta anterior el preguntar en qué año o cíclo ocurrió eso.
Algo sencillo para abordar las anteriores preguntas construir una tabla que permita visualizar la relación entre las variables mencionadas.
Step10: Observación
Step11: Nota
Step12: Como cargar un json y analizarlo.
En la siguiente se da una ejemplo de como cargar datos desde algún servicio web que regresa un arvhivo de tipo JSON. | Python Code:
#
# Región de estabilidad absoluta
# Juan Luis Cano Rodríguez
import numpy as np
def region_estabilidad(p, X, Y):
Región de estabilidad absoluta
Computa la región de estabilidad absoluta de un método numérico, dados
los coeficientes de su polinomio característico de estabilidad.
Argumentos
----------
p : function
Acepta un argumento w y devuelve una lista de coeficientes
X, Y : numpy.ndarray
Rejilla en la que evaluar el polinomio de estabilidad generada por
numpy.meshgrid
Devuelve
--------
Z : numpy.ndarray
Para cada punto de la malla, máximo de los valores absolutos de las
raíces del polinomio de estabilidad
Ejemplos
--------
>>> import numpy as np
>>> x = np.linspace(-3.0, 1.5)
>>> y = np.linspace(-3.0, 3.0)
>>> X, Y = np.meshgrid(x, y)
>>> Z = region_estabilidad(lambda w: [1,
... -1 - w - w ** 2 / 2 - w ** 3 / 6 - w ** 4 / 24], X, Y) # RK4
>>> import matplotlib.pyplot as plt
>>> cs = plt.contour(X, Y, Z, np.linspace(0.05, 1.0, 9))
>>> plt.clabel(cs, inline=1, fontsize=10) # Para etiquetar los contornos
>>> plt.show()
Z = np.zeros_like(X)
w = X + Y * 1j
for j in range(len(X)):
for i in range(len(Y)):
r = np.roots(p(w[i, j]))
Z[i, j] = np.max(abs(r if np.any(r) else 0))
return Z
Explanation: Cargando datos en Pandas
Objetivo
La finalidad de este capítulo es mostrar como cargar datos desde un archivo tipo csv, pero Pandas soporta más tipo de archivos. Debido a que el módulo tiene estructudas de datos que facilitan la manipulación de los datos al leerse desde un archivo, entonces explico un poco sobre el módulo Numpy y las Series y DataFrame que son estruturas de datos en Pandas.
Algunas comparaciones con R
La primera tarea que yo analizar datos es evidentemente cargar los datos que proviene de alguna fuente( base de datos) o archivo. Considerando la semejanza de Pandas con el uso de R, para mi las primeras tareas después de cargar los datos es revisar propiedades como tamaño o dim(), leer los primeros o los últimos registros head() o tail(), explorar el tipo de variables que tienen la muestra de datos struct() o explorar la existencia de registros nulos o NA, ver el resumen de los estadísticos básicos de las variables. Desde mi punto de vista, conociendo esa primera información uno puede iniciar un análisis exploratorio mucho más robusto.
El módulo Pandas tiene estructuras de datos para procesar la información con nombre similar a las de R, son los DataFrame. Las librerías de R que permiten operar los datos del mismo modo que Pandas son ddply y reshape2, el resto de operaciones para manipular DataFrame tienen un equivalente en las dos tecnologías.
Algunas cosas previas a cargar datos.
Debido a que al cargar los datos desde alguna archivo o fuente, ya sea en formato csv, excel, xml, json o desde algúna base Mysql, etc. Serán cargados en Ipython como Data.Frame o Series, pero estas estructuras en Pandas tienen como base el manejo de matrices con Numpy. Por eso comento algunas cosas sobre Numpy.
En resumen, creo que es bueno antes de cargar datos, saber algo sobre las estructuras de datos en Pandas y a su vez creo importante saber un poco sobre lo que permite la creación de las estructuras en Pandas, que es el módulo Numpy.
Numpy
En breve, este módulo permite el tratamiento de matrices y arrays, cuenta con funciones estándar de matemáticas, algunas funciones estadísticas básicas, permite la generación de números aleatorios, operaciones de álgebra lineal y análisis de Fourier, entre otras operaciones y funciones. Quizás esto no suena muy interesante, pero el módulo tiene la calidad suficiente para hacer cálculos no triviales y a buena velocidad, con respecto a C o C++.
Siempre los ejemplos que se hacen de Numpy explican cosas como: la contrucción de matrices y arrays, operaciones entre matrices y arrays, como usar la vectorización de las matrices, selección de filas y columnas, copia y eliminación de entradas, etc. En mi caso pienso que aprendo mejor con ejemplos de como usar algunas de las funciones o herramientas del módulo, que solo leyendo las teoria y operaciones. Una de las mejores fuentes para conocer Numpy es el tutorial de la página oficial de la librería.
* Tutorial Numpy: https://docs.scipy.org/doc/numpy-dev/user/quickstart.html
Los 3 ejemplos son sencillos y muestran como usar Numpy para 3 tipos de problemas, uno tiene caracter de análisis numérico, calculos de una cadena de markov y el últmo es una aplicación de las cadenas de Markov (el algoritmo PageRank de manera rupestre).
Regiones de Estabilidad Absoluta calculados con Numpy
El concepto de región de estabilidad absoluta tiene su origen en análisis numérico, es un método por medio del cual se analiza la estabilidad de las soluciones de una ecuación diferencial ordinaria. No doy detalles mayores del problema, pero se puede leer el artículo de Juan Luis Cano, y sobra decir que el código del ejemplo es original su publicación en su sitio.
Ejemplo
~~~
-- coding: utf-8 --
Región de estabilidad absoluta
Juan Luis Cano Rodríguez
import numpy as np
def region_estabilidad(p, X, Y):
Región de estabilidad absoluta
Computa la región de estabilidad absoluta de un método numérico, dados
los coeficientes de su polinomio característico de estabilidad.
Argumentos
----------
p : function
Acepta un argumento w y devuelve una lista de coeficientes
X, Y : numpy.ndarray
Rejilla en la que evaluar el polinomio de estabilidad generada por
numpy.meshgrid
Devuelve
--------
Z : numpy.ndarray
Para cada punto de la malla, máximo de los valores absolutos de las
raíces del polinomio de estabilidad
Ejemplos
--------
>>> import numpy as np
>>> x = np.linspace(-3.0, 1.5)
>>> y = np.linspace(-3.0, 3.0)
>>> X, Y = np.meshgrid(x, y)
>>> Z = region_estabilidad(lambda w: [1,
... -1 - w - w 2 / 2 - w 3 / 6 - w ** 4 / 24], X, Y) # RK4
>>> import matplotlib.pyplot as plt
>>> cs = plt.contour(X, Y, Z, np.linspace(0.05, 1.0, 9))
>>> plt.clabel(cs, inline=1, fontsize=10) # Para etiquetar los contornos
>>> plt.show()
Z = np.zeros_like(X)
w = X + Y * 1j
for j in range(len(X)):
for i in range(len(Y)):
r = np.roots(p(w[i, j]))
Z[i, j] = np.max(abs(r if np.any(r) else 0))
return Z
~~~
Podemos guardar este código en un script o se puede cargar directamente a la consola, en caso de hacerlo en ipython se puede hacer uso del comando %paste.
End of explanation
#Se define la región
x = np.linspace(-3.0, 1.5)
y = np.linspace(-3.0, 3.0)
X, Y = np.meshgrid(x, y)
#Se define el polinomio
def p(w):
return [1, -1 - w - w ** 2 / 2 - w ** 3 / 6 - w ** 4 / 24]
Z = region_estabilidad(p, X, Y)
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize']=(20,7)
plt.contour(X, Y, Z, np.linspace(0.0, 1.0, 9))
plt.show()
Explanation: Para ver un ejemplo de su uso, realiza lo siguiente:
* Se define las regiones x e y.
* Se define un polinomio para probar la definición de la región.
* Se utiliza la función region_estabilidad y se grafica el resultado.
End of explanation
#Definición de la matriz
import numpy as np
M=np.matrix([[1.0/3.0,1.0/3.0,0,1.0/3.0,0,0],[1.0/3.0,1.0/3.0,1.0/3.0,0,0,0],[0,1.0/3.0,1.0/3.0,0,0,1.0/3.0],
[1.0/2.0,0,0,1.0/2.0,0,0],[0,0,0,0,1.0/2.0,1.0/2.0],[0,0,1.0/3.0,0,1.0/3.0,1.0/3.0]])
M
#Definición del vector de estados iniciales
v=np.array([1.0/3.0,1.0/3.0,1.0/3.0,1.0/3.0,1.0/3.0,1.0/3.0])
#Primer estado o movimiento
v*M
#Segundo movimiento
v*M.dot(M)
#Tercer Movimiento
v.dot(M).dot(M).dot(M).dot(M).dot(M).dot(M).dot(M)
Explanation: El ejemplo anterior muestra el uso de Numpy como una biblioteca o módulo que permite hacer manipulación de matrices y como la calidad del procesamiento numérico perminte considarar su uso en problemas de métodos numéricos.
Problema del laberito con Numpy
El problema tiene el mismo nombre que un ejemplo del uso del algoritmo Backtracking para solución de problemas por "fuerza bruta", respecto a esa técnica se puede consultar el brog de JC Gómez.
El ejemplo que comparto tienen que ver más con el uso de Cadenas de Markov, el ejemplo es solo para mostrar como funcionan y como se hace uso de ellas para resolver el problema con ciertos supuestos iniciales y como hacerlo con numpy.
Suponemos que se colocará un dispositivo que se puede desplazar y pasar de área como en el cuadro siguiente: La idea es que puede pasar del cuadro 1 hacia el 2 o el 4, si empieza en el cuadro 2 puede pasar del 1 al 3, pero si inicia en el 5 solo puede pasar al 6, etc.
Entonces lo que se plantea es que si inicia en el cuadro 1 y pasa al 2, eso solo depende de haber estado en el cuadro 1, si se pasa al cuadro 3 solo depende del estado 2, no de haber estado en el estado 1. Entonces la idea de los procesos de Markov es que para conocer si se pasará al cuadro 3 iniciando en el cuadro 1 solo se requiere conocer los pasos previos.
En lenguaje de probabilidad se expresa así:
\begin{align}
P(X_{n}|X_{n-1},X_{n-2},...,X_{1}) = P(X_{n}|X_{n-1})\\[5pt]
\end{align}
Entonces los supuestos son que se tienen 6 posibles estados iniciales o lugares desde donde iniciar, el cuadro 1 hasta el cuadro 6. Se hace una matriz que concentra la información ordenada de la relación entre los posibles movimientos entre cuadros contiguos. Entonces la relación de estados es:
\begin{align}
p_{ij}= P(X_{n}=j|X_{n-1}=i)\\[5pt]
\end{align}
Donde se refiere a la probabilidad de estar en el cuadro j si se estaba en el estado i, para el cuadro 2 las probabilidades serían:
\begin{align}
p_{21}= P(X_{n}=1|X_{n-1}=2)\\[5pt]
p_{23}= P(X_{n}=3|X_{n-1}=2)\\[5pt]
0= P(X_{n}=4|X_{n-1}=2)\\[5pt]
0= P(X_{n}=5|X_{n-1}=2)\\[5pt]
0= P(X_{n}=6|X_{n-1}=2)\\[5pt]
\end{align}
Visto como matriz se vería como:
\begin{array}{ccc}
p_{11} & p_{12} & p_{13} & p_{14} & p_{15} & p_{16} \
p_{21} & p_{22} & p_{23} & p_{24} & p_{25} & p_{26} \
p_{31} & p_{32} & p_{33} & p_{34} & p_{35} & p_{36}\
p_{41} & p_{42} & p_{43} & p_{44} & p_{45} & p_{46}\
p_{51} & p_{52} & p_{53} & p_{54} & p_{55} & p_{56}\
p_{61} & p_{62} & p_{63} & p_{64} & p_{65} & p_{66}\end{array}
Matriz anterior se llama matriz de transición, para este ejemplo es de la forma siguiente:
\begin{array}{ccc}
\frac{1}{3} & \frac{1}{3} & 0 & \frac{1}{3} & 0 & 0 \
\frac{1}{3} & \frac{1}{3} & \frac{1}{3} & 0 & 0 & 0 \
0 & \frac{1}{3} & \frac{1}{3} & 0 & 0 & \frac{1}{3}\
\frac{1}{2} & 0 & 0 & \frac{1}{2} & 0 & 0\
0 & 0 & 0 & 0 & \frac{1}{2} & \frac{1}{2}\
0 & 0 & \frac{1}{3} & 0 & \frac{1}{3} & \frac{1}{3}\end{array}
Se tienen entonces si la probabilidad de iniciar en cualquier estado es 1/6, entonces se tienen que la probabilidad despues de dos movimientos o cambios será la matriz de transición multiplicada por si misma en dos ocasiones, se vería así:
Vector de estados iniciales:
\begin{align}
v=(\frac{1}{6},\frac{1}{6},\frac{1}{6},\frac{1}{6},\frac{1}{6},\frac{1}{6})\
\end{align}
En el primer estado sería:
\begin{align}
v*M
\end{align}
En el segundo estado sería:
\begin{align}
v*M^2
\end{align}
Asi lo que se tendrá es la probabilidad de estar en cualqueir cuadro para un segundo movimiento, hace esto en Numpy pensando en que es para procesar matrices resulta sencillo. Solo basta definir la matriz, hacer producto de matrices y vectores.
End of explanation
import numpy as np
M=np.matrix([[.33,.5,1],[.33,0,0],[.33,.5,0]])
lambda1,v=np.linalg.eig(M)
lambda1,v
Explanation: Si se procede ha seguir incrementando los estados o cambios se "estabiliza"; es decir, el cambio en la probabilidad de estar en la caja 1 después de tercer movimiento es 37.53%, lo cual tiende a quedarse en 37.5% al incrementar los movimientos.
Ejemplo de PageRank de manera rupestre, calcuado con Numpy
El algoritmo para generar un Ranking en las páginas web fue desarrollado e implementado por los fundadores de Google, no profundizo en los detalles pero en general la idea es la siguiente:
Representar la Web como un gráfo dirigido.
Usar la matriz asociada a un grafo para analizar el comportamiento de la web bajo ciertos supuestos.
Agregar al modelo base lo necesario para que se adapate a la naturaleza de la web de mejor manera.
El primer objetivo es representar cada página como un vértice de un grafo y una arista representa la relación de la página a otra página; es decir, si dentro de la página A se hace referencia a la página B, entonces se agrega una "flecha", por lo tanto un ejemplo sencillo lo representa el siguiente grafo:
La imagen anterior representa un gráfo donde se simula que hay relación entre 3 páginas, la flecha indica el direccionamiento de una página a otra. Entonces para modelar la relación entre las páginas lo que se usa es matriz de adyacencia, esta matriz concentra la información de las relaciones entre nodos. La matriz adyacente de ese gráfo sería como:
\begin{array}{ccc}
.33 & .5 & 1 \
.33 & 0 & 0 \
.33 & .5 & 0 \end{array}
Esta matriz es una matriz de Markov por columnas, cada una suma 1, el objetivo es que se tenga un vector con el ranking del orden de las páginas por prioridad al iniciar la búsqueda en internet y después de hacer uso de la matriz se puede conocer cual es el orden de prioridad.
Así que suponiendo que cualquiera de las 3 páginas tienen la misma probabilidad de ser la página inicial, se tienen que el vector inicial es:
\begin{align}
v=(.33,.33,.33)\
\end{align}
Después de usar la matriz la ecuación que nos permitiría conocer el mejor ranking de las páginas sería:
\begin{align}
v=M*v
\end{align}
Entonces el problema se pasa a resolver un problema de vectores propios y valores propios, por lo tanto el problema sería calcularlos.
\begin{align}
Mv=\lambdav
\end{align}
End of explanation
from __future__ import division
import numpy as np
#Se definen los calores de las constantes de control y las matrices requeridas
beta=0.85
#Matriz adyacente
M=np.matrix([[0.33,0.25,0.5,0,0],[.33,0,0,0,0],[0.33,0.25,0,0,0],[0,.25,.5,1,0],[0,.25,0,0,1]])
#Cantidad de nodos
n=M.shape[1]
#Matriz del modelo de PageRank
A=beta*M+((1-beta)*(1/n)*np.ones((5,5)))
#Se define el vector inicial del ranking
v=np.ones(5)/5
#Proceso de estimación por iteracciones
iterN=1
while True:
v1=v
v=v.dot(M)
print "Interación %d\n" %iterN
print v
if not np.any((0.00001<np.abs(v-v1))):
break
else:
iterN=iterN+1
print "M*v\n"
v.dot(M)
Explanation: El ejemplo anterior tiene 3 vectores propios y 3 valores propios, pero no refleja el problema en general de la relación entre las páginas web, ya que podría suceder que una página no tienen referencias a ninguan otra salvo a si misma o tienen solo referencias a ella y ella a ninguna, ni a si misma. Entonces un caso más general sería representado como el gráfo siguiente:
En el grafo se tiene un nodo D y E, los cuales muestran un comportamiento que no se reflejaba en el grafo anterior. El nodo E y D no tienen salidas a otros nodos o páginas.
La matriz asociada para este gráfo resulta ser la siguiente:
\begin{array}{ccccc}
.33 & .25 & 0.5 & 0 & 0 \
.33 & 0 & 0 & 0 & 0 \
.33 & .25 & 0 & 0 & 0\
0 & .25 & 0.5 & 1 & 0\
0 & .25 & 0 & 0 & 1 \end{array}
Se observa que para esta matriz no cada columna tienen valor 1, ejemplo la correspondiente a la columna D tienen todas sus entradas 0. Entonces el algoritmo de PageRank propone modificar la matriz adyacencia y agregarle una matriz que la compensa.
La ecuación del siguiente modo:
\begin{align}
A=\betaM + (1-\beta)\frac{1}{n}ee^T
\end{align}
Lo que sucede con esta ecuación, es que las columnas como la correspiente el nodo D toman valores que en suma dan 1, en caso como el nodo E se "perturban" de modo que permite que el algoritmo no se quede antorado en ese tipo de nodos. Si se considera la misma hipótesis de que inicialmente se tienen la misma probabilidad de estar en cualquiera de las páginas (nodos) entonces se considera ahora solo un vector de tamaño 5 en lugar de 3, el cual se vería así:
\begin{align}
v=(.33,.33,.33,.33,.33)\
\end{align}
El coeficiente beta toma valores de entre 0.8 y 0.9, suelen considerarse 0.85 su valor o por lo menos es el que se suponía se usaba por parte de Google, en resumen este parámetros es de control. La letra e representa un vector de a forma v=(1,1,1,1,1), el producto con su traspuesta genera una matriz cuadrada la cual al multiplicarse por 1/n tiene una matriz de Markov.
La matriz A ya es objeto de poder calcular el vector y valor propio, sin entrar en detalle esta puede cumple condiciones del teorema de Perron y de Frobenius, lo cual en resumen implica que se busque calcular u obtener el "vector dominando".
Pensando en el calculo de los vectores y valores propios para una matriz como la asociada al grafo de ejemplo resulta trivial el cálculo, pero para el caso de millones de nodos sería computacionalmente muy costoso por lo cual lo que se usa es hacer un proceso de aproximación el cual convege "rápido" y fue parte del secreto para que las busquedas y el ranking de las páginas fuera mucho más rápido.
El código de la estimación en numpy sería el siguente:
End of explanation
#Se carga el módulo
import pandas as pd
from pandas import Series, DataFrame
#Se construye una Serie, se agregan primero la lista de datos y después la lista de índices
datos_series=Series([1,2,3,4,5,6],index=['a','b','c','d','e','f'])
#Se muestra como carga los datos Pandas en la estrutura definida
datos_series
#Se visualizan los valores que se guardan en la estructura de datos
datos_series.values
#Se visualizan los valores registrados como índices
datos_series.index
#Se seleccionan algún valor asociado al índice 'b'
datos_series['b']
#Se revisa si tienen datos nulos o NaN
datos_series.isnull()
#Se calcula la suma acumulada, es decir 1+2,1+2+3,1+2+3+4,1+2+3+4+5,1+2+3+4+5+6
datos_series.cumsum()
#Se define un DataFrame, primero se define un diccionario y luego de genera el DataFrame
datos={'Estado':['Guanajuato','Querétaro','Jalisco','Durango','Colima'],'Población':[5486000,1828000,7351000,1633000,723455],'superficie':[30607,11699,78588,123317,5627]}
Datos_Estados=DataFrame(datos)
Datos_Estados
#Se genrea de nuevo el DataFrame y lo que se hace es asignarle índice para manipular los datos
Datos_Estados=DataFrame(datos,index=[1,2,3,4,5])
Datos_Estados
#Se selecciona una columna
Datos_Estados.Estado
#Otro modo de elegir la columna es del siguiente modo.
Datos_Estados['Estado']
#Se elige una fila, y se hace uso del índice que se definió para los datos
Datos_Estados.ix[2]
#Se selecciona más de una fila
Datos_Estados.ix[[3,4]]
#Descripción estadística en general, la media, la desviación estándar, el máximo, mínimo, etc.
Datos_Estados.describe()
#Se modifica el DataFrame , se agrega una nueva columna
from numpy import nan as NA
Datos_Estados['Índice']=[1.0,4.0,NA,4.0,NA]
Datos_Estados
#Se revisa si hay datos NaN o nulos
Datos_Estados.isnull()
#Pandas cuenta con herramientas para tratar los Missing Values, en esto se pueden explorar como con isnull() o
#eliminar con dropna. En este caso de llena con fillna
Datos_Estados.fillna(0)
Explanation: Se tienen como resultado que en 17 iteraciones el algoritmo indica que el PageRank no es totalmente dominado por el nodo E y D, pese a que son las "páginas" que tienen mayor valor, pero las otras 3 resultan muy cercanas en importancia. Se aprecia como se va ajustando el valor conforme avanzan las etapas de los cálculos.
Data.Frame y Series
Los dos elementos principales de Pandas son Data.Frame y Series. El nombre Data.Frame es igual que el que se usa en R project y en escencia tiene la misma finalidad de uso, para la carga y procesamiento de datos.
Los siguientes ejemplos son breves, para conocer con detalle las propiedades, operaciones y caracteristicas que tienen estos dos objetos se puede consultar el libro Python for Data Analysis o el sitio oficial del módulo Pandas.
Primero se carga el módulo y los objetos y se muestran como usarlos de manera sencilla.
End of explanation
#Se agraga a la consola de ipython la salida de matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize']=(30,8)
#Se cargan los datos desde un directorio, se toma como headers los registros de la fila 0
datos=pd.read_csv('~/Documentos/Tutorial-Python/Datos/Mujeres_ingeniería_y_tecnología.csv', encoding='latin1')
#Se visualizan los primeros 10 registros
datos.head(10)
#Se observa la forma de los datos o las dimensiones, se observa que son 160 filas y 5 columnas.
datos.shape
#Se da una descripción del tipo de variables que se cargan en el dataFrame
datos.dtypes
#Se puede visualizar las información de las colunnas de manera más completa
datos.info()
#Se hace un resumen estadístico global de las variables o columnas
datos.describe()
Explanation: Los ejemplos anteriores muestras que es muy sencillo manipular los datos con Pandas, ya sea con Series o con DataFrame. Para mayor detalle de las funciones lo recomendable es consultar las referencias mencionadas anteriormente.
Cargar datos desde diversos archivos y estadísticas sencillas.
End of explanation
#Se construye una tabla pivot para ordenar los datos y conocer como se comportó el total de mujeres
#inscritas a ingenierías
datos.pivot('ENTIDAD','CICLO','MUJERES_INSC_ING')
#Se revisa cual son los 3 estados con mayor cantidad de inscripciones en el cíclo 2012/2013
datos.pivot('ENTIDAD','CICLO','MUJERES_INSC_ING').sort_values(by='2012/2013')[-3:]
#Se grafican los resultados de la tabla anterior, pero ordenadas por el cíclo 2010/2011
datos.pivot('ENTIDAD','CICLO','MUJERES_INSC_ING').sort_values(by='2010/2011').plot(kind='bar')
Explanation: Viendo los datos que se tienen, es natural preguntarse algo al respecto. Lo sencillo es, ¿cuál es el estado que presenta mayor cantidad de inscripciones de mujeres a ingeniería?, pero también se puede agregar a la pregunta anterior el preguntar en qué año o cíclo ocurrió eso.
Algo sencillo para abordar las anteriores preguntas construir una tabla que permita visualizar la relación entre las variables mencionadas.
End of explanation
#Se grafica el boxplot para cada periodo
datos.pivot('ENTIDAD','CICLO','MUJERES_INSC_ING').plot(kind='box', title='Boxplot')
Explanation: Observación: se vuelve evidente que las entidades federales o Estados donde se inscriben más mujeres a ingenierías son el Distrito Federal(Ciudad de México), Estado de México, Veracruz, Puebla, Guanajuato, Jalisco y Nuevo León. También se observa que en todos los estados en el periódo 2010/2011 la cantidad de mujeres que se inscribieron fue mayor y decayó significativamente en los años siguientes.
Esto responde a la pregunta : ¿cuál es el estado que presenta mayor cantidad de inscripciones de mujeres a ingeniería?
End of explanation
#Se construye la tabla
#Tabla1=datos.pivot('ENTIDAD','CICLO','MUJERES_INSC_ING')
datos.head()
#Se carga la librería seaborn
import seaborn as sns
#sns.set(style="ticks")
#sns.boxplot(x="CICLO",y="MUJERES_INSC_ING",data=datos,palette="PRGn",hue="CICLO")
Explanation: Nota: para estre breve análisis se hace uso de la construcción de tablas pivot en pandas. Esto facilidad analizar como se comportan las variables categóricas de los datos. En este ejemplo se muestra que el periódo 2010/2011 tuvo una media mayor de mujeres inscritas en ingeniarías, pero también se ve que la relación entre los estados fue más dispersa. Pero también se ve que los periódos del 2011/2012, 2012/2013 y 2013/2014 tienen comportamientos "similares".
Otras herramientas gráficas
Conforme evolucionó Pandas y el módulo se volvió más usado, la límitante que tenía a mi parecer, era el nivel de gráficos base. Para usar los DataFrame y Series en matplotlib se necesita definir los array o procesarlos de modo tal que puedan contruirse mejores gráficos. Matplotlib es un módulo muy potente, pero resulta mucho más engorroso hacer un análisis grafico. Si se ha usado R project para hacer una exploración de datos, resulta muy facil constrir los gráficos básicos y con librerías como ggplot2 o lattice se puede hacer un análisis gráfico sencillo y potente.
Ante este problema se diseño una librería para complementar el análisis grafico, que es algo asi como "de alto nivel" al comprarla con matplotlib. El módulo se llama seaborn.
Para los siguientes ejemplos uso los datos que se han analizado anteriormente.
End of explanation
sns.factorplot(x="ENTIDAD",y="MUJERES_INSC_ING",hue="CICLO",data=datos,palette="muted", size=15,kind="bar")
#Otra gráfica, se muestra el cruce entre las mujeres que se inscriben en ingeniería y el total de mujeres
with sns.axes_style('white'):
sns.jointplot('MUJERES_INSC_ING','MAT_TOTAL_SUP',data=datos,kind='hex')
Explanation: Como cargar un json y analizarlo.
En la siguiente se da una ejemplo de como cargar datos desde algún servicio web que regresa un arvhivo de tipo JSON.
End of explanation |
5,533 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: 畳み込み変分オートエンコーダ
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: MNIST データセットを読み込む
それぞれの MNIST 画像は、もともと 784 個の整数値から成るベクトルで、各整数値は、ピクセルの強度を表す 0~255 の値です。各ピクセルをベルヌーイ分布を使ってモデル化し、データセットを統計的にバイナリ化します。
Step3: tf.data を使用して、データをバッチ化・シャッフルする
Step5: tf.keras.Sequential を使ってエンコーダとデコーダネットワークを定義する
この VAE の例では、エンコーダとデコーダのネットワークに 2 つの小さな ConvNets を使用しています。文献では、これらのネットワークはそれぞれ、推論/認識モデルおよび生成モデルとも呼ばれています。実装を簡略化するために tf.keras.Sequential を使用しています。以降の説明では、観測と潜在変数をそれぞれ $x$ と $z$ で表記しています。
エンコーダネットワーク
これは、おおよその事後分布 $q(z|x)$ を定義します。これは、観測を入力として受け取り、潜在表現 $z$ の条件付き分布を指定するための一連のパラメータを出力します。この例では、単純に対角ガウスとして分布をモデル化するため、ネットワークは、素因数分解されたガウスの平均と対数分散を出力します。数値的な安定性を得るために、直接分散を出力する代わりに対数分散を出力します。
デコーダネットワーク
これは、観測 $p(x|z)$ の条件付き分布を定義します。これは入力として潜在サンプル $z$ を取り、観測の条件付き分布のパラメータを出力します。$p(z)$ 前の潜在分布を単位ガウスとしてモデル化します。
パラメータ再設定のコツ
トレーニング中にデコーダのサンプル $z$ を生成するには、エンコーダが出力したパラメータによって定義される潜在分布から入力観測 $x$ を指定してサンプリングできます。ただし、このサンプリング演算では、バックプロパゲーションがランダムなノードを通過できないため、ボトルネックが発生します。
これを解消するために、パラメータ再設定のコツを使用します。この例では、次のように、デコーダパラメータともう 1 つの $\epsilon$ を使用して $z$ を近似化します。
$$z = \mu + \sigma \odot \epsilon$$
上記の $\mu$ と $\sigma$ は、それぞれガウス分布の平均と標準偏差を表します。これらは、デコーダ出力から得ることができます。$\epsilon$ は、$z$ の偶然性を維持するためのランダムノイズとして考えることができます。$\epsilon$ は標準正規分布から生成します。
潜在変数 $z$ は、$\mu$、$\sigma$、および $\epsilon$ の関数によって生成されるようになりました。これらによって、モデルがそれぞれ $\mu$ と $\sigma$ を介してエンコーダの勾配をバックプロパゲートしながら、$\epsilon$ を介して偶然性を維持できるようになります。
ネットワークアーキテクチャ
エンコーダネットワークの場合、2 つの畳み込みレイヤーを使用し、その後に全結合レイヤーが続きます。デコーダネットワークでは、3 つの畳み込み転置レイヤー(一部の文脈ではデコンボリューションレイヤーとも呼ばれる)が続く全結合レイヤーを使用することで、このアーキテクチャをミラーリングします。VAE をトレーニングする場合にはバッチの正規化を回避するのが一般的であることに注意してください。これは、ミニバッチを使用することで追加される偶然性によって、サンプリングの偶然性にさらに不安定性を加える可能性があるためです。
Step7: 損失関数とオプティマイザを定義する
VAE は、限界対数尤度の証拠下限(ELBO)を最大化することでトレーニングします。
$$\log p(x) \ge \text{ELBO} = \mathbb{E}_{q(z|x)}\left[\log \frac{p(x, z)}{q(z|x)}\right].$$
実際には、この期待値の単一サンプルのモンテカルロ推定を最適化します。
$$\log p(x| z) + \log p(z) - \log q(z|x),$$ とし、$z$ は $q(z|x)$ からサンプリングされます。
注意
Step8: トレーニング
データセットのイテレーションから始めます。
各イテレーションで、画像をエンコーダに渡して、おおよその事後分布 $q(z|x)$ の一連の平均値と対数分散パラメータを取得します。
次に、$q(z|x)$ から得たサンプルにパラメータ再設定のコツを適用します。
最後に、パラメータを再設定したサンプルをデコーダに渡して、生成分布 $p(x|z)$ のロジットを取得します。
注意
Step9: 最後のトレーニングエポックから生成された画像を表示する
Step10: 保存したすべての画像のアニメーション GIF を表示する
Step12: 潜在空間から数字の 2D 多様体を表示する
次のコードを実行すると、各数字が 2D 潜在空間で別の数字に変化する、さまざまな数字クラスの連続分布が表示されます。潜在空間の標準正規分布の生成には、TensorFlow Probability を使用します。 | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
!pip install tensorflow-probability
# to generate gifs
!pip install imageio
!pip install git+https://github.com/tensorflow/docs
from IPython import display
import glob
import imageio
import matplotlib.pyplot as plt
import numpy as np
import PIL
import tensorflow as tf
import tensorflow_probability as tfp
import time
Explanation: 畳み込み変分オートエンコーダ
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/tutorials/generative/cvae"> <img src="https://www.tensorflow.org/images/tf_logo_32px.png"> TensorFlow.org で表示</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tutorials/generative/cvae.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png"> Google Colab で実行</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tutorials/generative/cvae.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub でソースを表示</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/tutorials/generative/cvae.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">ノートブックをダウンロード</a></td>
</table>
このノートブックでは、MNIST データセットで変分オートエンコーダ(VAE)(1、2)のトレーニング方法を実演します。VAE はオートエンコードの確率論的見解で、高次元入力データをより小さな表現に圧縮するモデルです。入力を潜在ベクトルにマッピングする従来のオートエンコーダとは異なり、VAE は入力をガウスの平均や分散といった確率分布のパラメータにマッピングします。このアプローチによって、画像生成に役立つ構造化された連続的な潜在空間が生成されます。
MNIST モデルをビルドする
End of explanation
(train_images, _), (test_images, _) = tf.keras.datasets.mnist.load_data()
def preprocess_images(images):
images = images.reshape((images.shape[0], 28, 28, 1)) / 255.
return np.where(images > .5, 1.0, 0.0).astype('float32')
train_images = preprocess_images(train_images)
test_images = preprocess_images(test_images)
train_size = 60000
batch_size = 32
test_size = 10000
Explanation: MNIST データセットを読み込む
それぞれの MNIST 画像は、もともと 784 個の整数値から成るベクトルで、各整数値は、ピクセルの強度を表す 0~255 の値です。各ピクセルをベルヌーイ分布を使ってモデル化し、データセットを統計的にバイナリ化します。
End of explanation
train_dataset = (tf.data.Dataset.from_tensor_slices(train_images)
.shuffle(train_size).batch(batch_size))
test_dataset = (tf.data.Dataset.from_tensor_slices(test_images)
.shuffle(test_size).batch(batch_size))
Explanation: tf.data を使用して、データをバッチ化・シャッフルする
End of explanation
class CVAE(tf.keras.Model):
Convolutional variational autoencoder.
def __init__(self, latent_dim):
super(CVAE, self).__init__()
self.latent_dim = latent_dim
self.encoder = tf.keras.Sequential(
[
tf.keras.layers.InputLayer(input_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(
filters=32, kernel_size=3, strides=(2, 2), activation='relu'),
tf.keras.layers.Conv2D(
filters=64, kernel_size=3, strides=(2, 2), activation='relu'),
tf.keras.layers.Flatten(),
# No activation
tf.keras.layers.Dense(latent_dim + latent_dim),
]
)
self.decoder = tf.keras.Sequential(
[
tf.keras.layers.InputLayer(input_shape=(latent_dim,)),
tf.keras.layers.Dense(units=7*7*32, activation=tf.nn.relu),
tf.keras.layers.Reshape(target_shape=(7, 7, 32)),
tf.keras.layers.Conv2DTranspose(
filters=64, kernel_size=3, strides=2, padding='same',
activation='relu'),
tf.keras.layers.Conv2DTranspose(
filters=32, kernel_size=3, strides=2, padding='same',
activation='relu'),
# No activation
tf.keras.layers.Conv2DTranspose(
filters=1, kernel_size=3, strides=1, padding='same'),
]
)
@tf.function
def sample(self, eps=None):
if eps is None:
eps = tf.random.normal(shape=(100, self.latent_dim))
return self.decode(eps, apply_sigmoid=True)
def encode(self, x):
mean, logvar = tf.split(self.encoder(x), num_or_size_splits=2, axis=1)
return mean, logvar
def reparameterize(self, mean, logvar):
eps = tf.random.normal(shape=mean.shape)
return eps * tf.exp(logvar * .5) + mean
def decode(self, z, apply_sigmoid=False):
logits = self.decoder(z)
if apply_sigmoid:
probs = tf.sigmoid(logits)
return probs
return logits
Explanation: tf.keras.Sequential を使ってエンコーダとデコーダネットワークを定義する
この VAE の例では、エンコーダとデコーダのネットワークに 2 つの小さな ConvNets を使用しています。文献では、これらのネットワークはそれぞれ、推論/認識モデルおよび生成モデルとも呼ばれています。実装を簡略化するために tf.keras.Sequential を使用しています。以降の説明では、観測と潜在変数をそれぞれ $x$ と $z$ で表記しています。
エンコーダネットワーク
これは、おおよその事後分布 $q(z|x)$ を定義します。これは、観測を入力として受け取り、潜在表現 $z$ の条件付き分布を指定するための一連のパラメータを出力します。この例では、単純に対角ガウスとして分布をモデル化するため、ネットワークは、素因数分解されたガウスの平均と対数分散を出力します。数値的な安定性を得るために、直接分散を出力する代わりに対数分散を出力します。
デコーダネットワーク
これは、観測 $p(x|z)$ の条件付き分布を定義します。これは入力として潜在サンプル $z$ を取り、観測の条件付き分布のパラメータを出力します。$p(z)$ 前の潜在分布を単位ガウスとしてモデル化します。
パラメータ再設定のコツ
トレーニング中にデコーダのサンプル $z$ を生成するには、エンコーダが出力したパラメータによって定義される潜在分布から入力観測 $x$ を指定してサンプリングできます。ただし、このサンプリング演算では、バックプロパゲーションがランダムなノードを通過できないため、ボトルネックが発生します。
これを解消するために、パラメータ再設定のコツを使用します。この例では、次のように、デコーダパラメータともう 1 つの $\epsilon$ を使用して $z$ を近似化します。
$$z = \mu + \sigma \odot \epsilon$$
上記の $\mu$ と $\sigma$ は、それぞれガウス分布の平均と標準偏差を表します。これらは、デコーダ出力から得ることができます。$\epsilon$ は、$z$ の偶然性を維持するためのランダムノイズとして考えることができます。$\epsilon$ は標準正規分布から生成します。
潜在変数 $z$ は、$\mu$、$\sigma$、および $\epsilon$ の関数によって生成されるようになりました。これらによって、モデルがそれぞれ $\mu$ と $\sigma$ を介してエンコーダの勾配をバックプロパゲートしながら、$\epsilon$ を介して偶然性を維持できるようになります。
ネットワークアーキテクチャ
エンコーダネットワークの場合、2 つの畳み込みレイヤーを使用し、その後に全結合レイヤーが続きます。デコーダネットワークでは、3 つの畳み込み転置レイヤー(一部の文脈ではデコンボリューションレイヤーとも呼ばれる)が続く全結合レイヤーを使用することで、このアーキテクチャをミラーリングします。VAE をトレーニングする場合にはバッチの正規化を回避するのが一般的であることに注意してください。これは、ミニバッチを使用することで追加される偶然性によって、サンプリングの偶然性にさらに不安定性を加える可能性があるためです。
End of explanation
optimizer = tf.keras.optimizers.Adam(1e-4)
def log_normal_pdf(sample, mean, logvar, raxis=1):
log2pi = tf.math.log(2. * np.pi)
return tf.reduce_sum(
-.5 * ((sample - mean) ** 2. * tf.exp(-logvar) + logvar + log2pi),
axis=raxis)
def compute_loss(model, x):
mean, logvar = model.encode(x)
z = model.reparameterize(mean, logvar)
x_logit = model.decode(z)
cross_ent = tf.nn.sigmoid_cross_entropy_with_logits(logits=x_logit, labels=x)
logpx_z = -tf.reduce_sum(cross_ent, axis=[1, 2, 3])
logpz = log_normal_pdf(z, 0., 0.)
logqz_x = log_normal_pdf(z, mean, logvar)
return -tf.reduce_mean(logpx_z + logpz - logqz_x)
@tf.function
def train_step(model, x, optimizer):
Executes one training step and returns the loss.
This function computes the loss and gradients, and uses the latter to
update the model's parameters.
with tf.GradientTape() as tape:
loss = compute_loss(model, x)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
Explanation: 損失関数とオプティマイザを定義する
VAE は、限界対数尤度の証拠下限(ELBO)を最大化することでトレーニングします。
$$\log p(x) \ge \text{ELBO} = \mathbb{E}_{q(z|x)}\left[\log \frac{p(x, z)}{q(z|x)}\right].$$
実際には、この期待値の単一サンプルのモンテカルロ推定を最適化します。
$$\log p(x| z) + \log p(z) - \log q(z|x),$$ とし、$z$ は $q(z|x)$ からサンプリングされます。
注意: KL 項を分析的に計算することもできますが、ここでは簡単にするために、3 つの項すべてをモンテカルロ Estimator に組み込んでいます。
End of explanation
epochs = 10
# set the dimensionality of the latent space to a plane for visualization later
latent_dim = 2
num_examples_to_generate = 16
# keeping the random vector constant for generation (prediction) so
# it will be easier to see the improvement.
random_vector_for_generation = tf.random.normal(
shape=[num_examples_to_generate, latent_dim])
model = CVAE(latent_dim)
def generate_and_save_images(model, epoch, test_sample):
mean, logvar = model.encode(test_sample)
z = model.reparameterize(mean, logvar)
predictions = model.sample(z)
fig = plt.figure(figsize=(4, 4))
for i in range(predictions.shape[0]):
plt.subplot(4, 4, i + 1)
plt.imshow(predictions[i, :, :, 0], cmap='gray')
plt.axis('off')
# tight_layout minimizes the overlap between 2 sub-plots
plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))
plt.show()
# Pick a sample of the test set for generating output images
assert batch_size >= num_examples_to_generate
for test_batch in test_dataset.take(1):
test_sample = test_batch[0:num_examples_to_generate, :, :, :]
generate_and_save_images(model, 0, test_sample)
for epoch in range(1, epochs + 1):
start_time = time.time()
for train_x in train_dataset:
train_step(model, train_x, optimizer)
end_time = time.time()
loss = tf.keras.metrics.Mean()
for test_x in test_dataset:
loss(compute_loss(model, test_x))
elbo = -loss.result()
display.clear_output(wait=False)
print('Epoch: {}, Test set ELBO: {}, time elapse for current epoch: {}'
.format(epoch, elbo, end_time - start_time))
generate_and_save_images(model, epoch, test_sample)
Explanation: トレーニング
データセットのイテレーションから始めます。
各イテレーションで、画像をエンコーダに渡して、おおよその事後分布 $q(z|x)$ の一連の平均値と対数分散パラメータを取得します。
次に、$q(z|x)$ から得たサンプルにパラメータ再設定のコツを適用します。
最後に、パラメータを再設定したサンプルをデコーダに渡して、生成分布 $p(x|z)$ のロジットを取得します。
注意: トレーニングセットに 60k のデータポイントとテストセットに 10k のデータポイント持つ Keras で読み込んだデータセットを使用しているため、テストセットの ELBO は、Larochelle の MNIST の動的なバイナリ化を使用する文献で報告された結果よりもわずかに高くなります。
画像の生成
トレーニングの後は、画像をいくつか生成します。
分布 $p(z)$ 前の単位ガウスから一連の潜在ベクトルをサンプリングすることから始めます。
すると、ジェネレータはその潜在サンプル $z$ を観測のロジットに変換し、分布 $p(x|z)$ が得られます。
ここで、ベルヌーイ分布の確率を図に作成します。
End of explanation
def display_image(epoch_no):
return PIL.Image.open('image_at_epoch_{:04d}.png'.format(epoch_no))
plt.imshow(display_image(epoch))
plt.axis('off') # Display images
Explanation: 最後のトレーニングエポックから生成された画像を表示する
End of explanation
anim_file = 'cvae.gif'
with imageio.get_writer(anim_file, mode='I') as writer:
filenames = glob.glob('image*.png')
filenames = sorted(filenames)
for filename in filenames:
image = imageio.imread(filename)
writer.append_data(image)
image = imageio.imread(filename)
writer.append_data(image)
import tensorflow_docs.vis.embed as embed
embed.embed_file(anim_file)
Explanation: Display an animated GIF of all the saved images
End of explanation
def plot_latent_images(model, n, digit_size=28):
Plots n x n digit images decoded from the latent space.
norm = tfp.distributions.Normal(0, 1)
grid_x = norm.quantile(np.linspace(0.05, 0.95, n))
grid_y = norm.quantile(np.linspace(0.05, 0.95, n))
image_width = digit_size*n
image_height = image_width
image = np.zeros((image_height, image_width))
for i, yi in enumerate(grid_x):
for j, xi in enumerate(grid_y):
z = np.array([[xi, yi]])
x_decoded = model.sample(z)
digit = tf.reshape(x_decoded[0], (digit_size, digit_size))
image[i * digit_size: (i + 1) * digit_size,
j * digit_size: (j + 1) * digit_size] = digit.numpy()
plt.figure(figsize=(10, 10))
plt.imshow(image, cmap='Greys_r')
plt.axis('Off')
plt.show()
plot_latent_images(model, 20)
Explanation: Display a 2D manifold of digits from the latent space
Running the code below will display a continuous distribution of the different digit classes, with each digit morphing into another across the 2D latent space. TensorFlow Probability is used to generate a standard normal distribution for the latent space.
End of explanation |
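A complementary view of the same latent space (not part of the code above) is to scatter the 2D latent means of encoded test images; this assumes arrays test_images and test_labels are available from earlier preprocessing, which is not shown in this excerpt:
# Hedged sketch: where do encoded test digits land in the 2D latent space?
# `test_images` (N, 28, 28, 1) and `test_labels` (N,) are assumed to exist.
mean, _ = model.encode(test_images)
plt.figure(figsize=(8, 8))
plt.scatter(mean[:, 0], mean[:, 1], c=test_labels, cmap='tab10', s=4)
plt.colorbar()
plt.show()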
5,534 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
The beam_search_decoder() function implements the beam search decoder for Natural Language Processing.
| Python Code::
# beam search
from math import log
def beam_search_decoder(data, k):
sequences = [[list(), 0.0]]
# walk over each step in sequence
for row in data:
all_candidates = list()
# expand each current candidate
for i in range(len(sequences)):
seq, score = sequences[i]
for j in range(len(row)):
candidate = [seq + [j], score - log(row[j])]
all_candidates.append(candidate)
# order all candidates by score
ordered = sorted(all_candidates, key=lambda tup:tup[1])
# select k best
sequences = ordered[:k]
return sequences
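# Usage sketch with made-up data (not from the problem statement): 5 decoding
# steps over a vocabulary of 3 tokens; each inner list holds the per-token
# probabilities for one step, and k=2 keeps the two best candidate sequences.
data = [[0.1, 0.4, 0.5],
        [0.3, 0.4, 0.3],
        [0.5, 0.3, 0.2],
        [0.1, 0.1, 0.8],
        [0.2, 0.5, 0.3]]
for seq, score in beam_search_decoder(data, k=2):
    print('%s -> %.4f' % (seq, score))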
|
5,535 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vendor Recommender - EDA
@olibolly
Open TO-DO
Link the notebook with github using ungit - DONE
Provide access to the project if we go for Big query - DONE
Re-pull EDA using updated 2016-2017 data - DONE
Further EDA on collaborative filtering - DONE
Run first regression to understand what features matter - DONE
Join tables FAPIIS and USA spending
Useful links
https
Step1: SAM (System for Award Management) - exclusions
https
Step2: There are 8,659 firms on the SAM exclusion list
Step3: NPI and CAGE don't seem to be great keys to join the data - ideally we can use SAM
Federal Awardee Performance and Integrity Information System (FAPIIS)
These records reflect the contractor's fault - you can still do business with these contractors, unlike those on the SAM exclusion list, with whom one cannot do business
Only 5 years by design
Step4: FAPIIS is not bad, with 3,002 DUNS codes, but the time range only goes from 2012 to 2017
USA Spending
Link to collaborative filtering
https
Step5: Which means we're dealing with 49.5M transactions totalling 6.7 trillion dollars. These purchases came from 622k vendors that won 2.2mn solicitations issued by government agencies.
Step6: Understanding where the budget is spent
Step7: Looking at SMBs by year
Step8: SMB contract by gov. agency & by naics code
Step9: Simple Linear regression (LR)
LR
Step10: LR
Step11: MVP
MVP 1 - The most popular vendor
Search query = 'construction'
Enter your department name - eg. 'agriculture'
Ranking based on 'counts' of the number of contracts that occurred
TO-DO check the uppercase and lowercase in the REGEX
Do we want to add more parameters, such as Geo, size of the contract? To be discussed
Step12: MVP 2 - Collaborative filtering
If a person A likes items 1, 2, 3 and B likes 2, 3, 4, then they have similar interests, so A should like item 4 and B should like item 1.
Looking at match between gov mod_agency (275) & vendors (770526)
See
Step13: Workflow 1
<br>
a. Collaborative Filtering - user-item prediction
Step14: b. Collaborative Filtering - item-item prediction
Step15: OTHERS - FROM TUTORIAL - Anton Tarasenko
Data Mining Government Clients
Suppose you want to start selling to the government. While FBO.gov publishes government RFPs and you can apply there, government agencies often issue requests when they've already chosen the supplier. Agencies go through FBO.gov because it's a mandatory step for deals north of $25K. But winning at this stage is unlikely if an RFP is already tailored for another supplier.
Reaching warm leads in advance would increase chances of winning a government contract. The contracts data helps identify the warm leads by looking at purchases in the previous years.
There're several ways of searching through those years.
Who Buys What You Make
The goods and services bought in each transaction are encoded in the variable productorservicecode. Top ten product categories according to this variable
Step16: You can find agencies that buy products like yours. If it's "software"
Step17: What Firms in Your Industry Sell to the Government
Another way to find customers is the variable principalnaicscode that encodes the industry in which the vendor does business.
The list of NAICS codes is available at Census.gov, but you can do text search in the table. Let's find who bought software from distributors in 2015
Step18: Inspecting Specific Transactions
You can learn details from looking at transactions for a specific (agency, NAICS) pair. For example, what software does TSA buy?
Step19: Alternatively, specify vendors your product relates to and check how the government uses it. Top deals in data analytics
Step20: Searching Through Descriptions
Full-text search and regular expressions for the variable descriptionofcontractrequirement narrow results for relevant product groups
Step21: Some rows of descriptionofcontractrequirement contain codes like "IGF
Step22: Facts about Government Contracting
Let's check some popular statements about government contracting.
Small Businesses Win Most Contracts
Contractors had to report their revenue and the number of employees. It makes easy to check if small business is welcomed in government contracting
Step23: The median shows the most likely supplier. Agencies on the top of the table actively employ vendors whose annual revenue is less than $1mn.
The Department of Defence, the largest buyer with $4.5tn worth of goods and services bought over these 17 years, has the median vendor with $2.5mn in revenue and 20 employees. It means that half of the DoD's vendors have less than $2.5mn in revenue.
Set-Aside Deals Take a Small Share
Set-aside purchases are reserved for special categories of suppliers, like women-, minority-, and veteran-owned businesses. There's a lot of confusion about their share in transactions. We can settle this confusion with data
Step24: Women-owned businesses make about one tenth of the transactions, but their share in terms of sales is only 3.7%.
A cross-tabulation for major set-aside categories
Step25: For example, firms owned by women, veterans, and minorities (all represented at the same time) sell $5bn in goods and services. That's 0.07% of all government purchases.
New Vendors Emerge Each Year
Becoming a government contractor may seem difficult at first, but let's see how many new contractors the government had in 2015. | Python Code:
import google.datalab.bigquery as bq
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import scipy as sp
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder
from sklearn import cross_validation as cv
from sklearn.metrics.pairwise import pairwise_distances
from sklearn.metrics import mean_squared_error
from math import sqrt
Explanation: Vendor Recommender - EDA
@olibolly
Open TO-DO
Link the notebook with github using ungit - DONE
Provide access to the project if we go for Big query - DONE
Re-pull EDA using updated 2016-2017 data - DONE
Further EDA on collaborative filtering - DONE
Run first regression to understand what features matter - DONE
Join tables FAPIIS and USA spending
Useful links
https://github.com/antontarasenko/gpq
Dataset
USASpending.gov available on BigQuery dataset (17 years of data, 45mn transactions, $6.7tn worth of goods and services): gpqueries:contracts
Past Performance Information Retrieval System (PPIRS) -> review - not public data
System for Award Management (SAM)
FAPIIS
Are there any other datasets we should be considering?
Table gpqueries:contracts.raw
Table gpqueries:contracts.raw contains the unmodified data from the USASpending.gov archives. It's constructed from <year>_All_Contracts_Full_20160515.csv.zip files and includes contracts from 2000 to May 15, 2016.
Table gpqueries:contracts.raw contains 45M rows and 225 columns.
Each row refers to a transaction (a purchase or refund) made by a federal agency. It may be a pizza or an airplane.
The columns are grouped into categories:
Transaction: unique_transaction_id-baseandalloptionsvalue
Buyer (government agency): maj_agency_cat-fundedbyforeignentity
Dates: signeddate-lastdatetoorder, last_modified_date
Contract: contractactiontype-programacronym
Contractor (supplier, vendor): vendorname-statecode
Place of performance: PlaceofPerformanceCity-placeofperformancecongressionaldistrict
Product or service bought: psc_cat-manufacturingorganizationtype
General contract information: agencyid-idvmodificationnumber
Competitive procedure: solicitationid-statutoryexceptiontofairopportunity
Contractor details: organizationaltype-otherstatutoryauthority
Contractor's executives: prime_awardee_executive1-interagencycontractingauthority
Detailed description for each variable is available in the official codebook:
USAspending.govDownloadsDataDictionary.pdf
End of explanation
%%sql
select * from [fiery-set-171213:vrec.sam_exclusions] limit 5
%%sql
select Exclusion_Type from [fiery-set-171213:vrec.sam_exclusions] group by 1;
%%sql
select Classification from [fiery-set-171213:vrec.sam_exclusions] group by 1;
%%sql
select
count(*)
from [fiery-set-171213:vrec.sam_exclusions]
where Classification in ('Firm')
;
Explanation: SAM (System for Award Management) - exclusions
https://www.sam.gov/sam/transcript/SAM_Exclusions_Public_Extract_Layout.pdf
End of explanation
%%bq query -n df_query
select
EXTRACT(YEAR FROM Active_Date) as year,
count(*) as count
from `fiery-set-171213.vrec.sam_exclusions`
where Classification in ('Firm')
and Active_Date is not NULL
group by 1
order by 1;
df = df_query.execute(output_options=bq.QueryOutput.dataframe()).result()
ax = df.plot(kind='bar', x='year', title='Excluded Firms per year', figsize=(15,8))
ax.set_xlabel('Year')
ax.set_ylabel('count')
%%bq query
select
#Name,
SAM_Number,
count(*) as count
from `fiery-set-171213.vrec.sam_exclusions`
where Classification in ('Firm')
#and Active_Date is not NULL
group by 1
order by 2 DESC
limit 5;
%%bq query
select
NPI,
count(*) as count
from `fiery-set-171213.vrec.sam_exclusions`
where Classification in ('Firm')
#and CAGE is not NULL
group by 1
order by 2 DESC
limit 5;
%%bq query
select
CAGE,
count(*) as count
from `fiery-set-171213.vrec.sam_exclusions`
where Classification in ('Firm')
#and CAGE is not NULL
group by 1
order by 2 DESC
limit 5;
Explanation: There are 8,659 firms on the SAM exclusion list
End of explanation
%%bq query
select *
from `fiery-set-171213.vrec.fapiis`
limit 5
%%bq query -n df_query
select
EXTRACT(YEAR FROM RECORD_DATE) as year,
count(*) as count
from `fiery-set-171213.vrec.fapiis`
group by 1
order by 1;
df = df_query.execute(output_options=bq.QueryOutput.dataframe()).result()
ax = df.plot(kind='bar', x='year', title='Firms by Record date', figsize=(10,5))
ax.set_xlabel('Year')
ax.set_ylabel('count')
%%bq query -n df_query
select
EXTRACT(YEAR FROM TERMINATION_DATE) as year,
count(*) as count
from `fiery-set-171213.vrec.fapiis`
group by 1
order by 1;
df = df_query.execute(output_options=bq.QueryOutput.dataframe()).result()
ax = df.plot(kind='bar', x='year', title='Firms by termination date', figsize=(10,5))
ax.set_xlabel('Year')
ax.set_ylabel('count')
%%bq query
select
AWARDEE_NAME,
DUNS,
count(*) as count
from `fiery-set-171213.vrec.fapiis`
group by 1,2
order by 3 DESC
limit 5;
%%bq query
select
*
from `fiery-set-171213.vrec.fapiis`
where AWARDEE_NAME in ('ALPHA RAPID ENGINEERING SOLUTIONS')
limit 5;
%%bq query
select
RECORD_TYPE,
count(*) as count
from `fiery-set-171213.vrec.fapiis`
group by 1
order by 2 DESC
Explanation: NPI and CAGE don't seem to be great keys to join the data - ideally we can use SAM
Federal Awardee Performance and Integrity Information System (FAPIIS)
These records reflect the contractor's fault - you can still do business with these contractors, unlike those on the SAM exclusion list, with whom one cannot do business
Only 5 years by design
End of explanation
%%bq query -n df_query
select count(*) as transactions
from `fiery-set-171213.vrec.usa_spending_all`
df = df_query.execute(output_options=bq.QueryOutput.dataframe()).result()
df.head()
%%bq query
select *
from `fiery-set-171213.vrec.usa_spending_all`
where mod_agency in ('1700: DEPT OF THE NAVY')
limit 5
%%bq query -n df_query
select
#substr(signeddate, 1, 2) month,
fiscal_year as year,
count(*) transactions,
sum(dollarsobligated) as dollarsobligated
from `fiery-set-171213.vrec.usa_spending_all`
group by year
order by year asc
df = df_query.execute(output_options=bq.QueryOutput.dataframe()).result()
ax = df.set_index('year')['dollarsobligated'].plot(kind='bar', title='Government purchases by years')
ax.set_ylabel('dollars obligated')
%%bq query -n df_query
select
fiscal_year as year,
sum(dollarsobligated)/count(*) as dollarsobligated
from `fiery-set-171213.vrec.usa_spending_all`
group by year
order by year asc
df = df_query.execute(output_options=bq.QueryOutput.dataframe()).result()
ax = df.set_index('year')['dollarsobligated'].plot(kind='bar', title='avg. transaction size by years')
ax.set_ylabel('dollars obligated')
Explanation: FAPIIS is not bad, with 3,002 DUNS codes, but the time range only goes from 2012 to 2017
USA Spending
Link to collaborative filtering
https://docs.google.com/presentation/d/1x5g-wIoSUGRSwDqHC6MhZBZD5d2LQ19WKFlRneN2TyU/edit#slide=id.p121
https://www.usaspending.gov/DownloadCenter/Documents/USAspending.govDownloadsDataDictionary.pdf
End of explanation
%%bq query
select
maj_agency_cat,
mod_agency,
count(*)
from `fiery-set-171213.vrec.usa_spending_all`
group by 1,2
order by 3 DESC
limit 20
%%bq query
select
mod_parent,
vendorname,
sum(dollarsobligated) as dollarsobligated
from `fiery-set-171213.vrec.usa_spending_all`
group by 1,2
order by 3 DESC
limit 20
Explanation: Which means we're dealing with 49.5M transactions totalling 6.7 trillion dollars. These purchases came from 622k vendors that won 2.2mn solicitations issued by government agencies.
End of explanation
%%bq query
select
productorservicecode,
systemequipmentcode,
claimantprogramcode,
principalnaicscode,
sum(dollarsobligated) as dollarsobligated
from `fiery-set-171213.vrec.usa_spending_all`
where vendorname in ('LOCKHEED MARTIN CORPORATION')
group by 1,2,3,4
order by 5 DESC
limit 20
%%bq query
select
#mod_parent,
vendorname,
systemequipmentcode,
sum(dollarsobligated) as dollarsobligated
from `fiery-set-171213.vrec.usa_spending_all`
where productorservicecode in ('1510: AIRCRAFT, FIXED WING')
group by 1,2
order by 3 DESC
limit 20
%%bq query
select
vendorname,
systemequipmentcode,
claimantprogramcode,
principalnaicscode,
sum(dollarsobligated) as dollarsobligated
from `fiery-set-171213.vrec.usa_spending_all`
where productorservicecode in ('1510: AIRCRAFT, FIXED WING')
and contractingofficerbusinesssizedetermination in ('S: SMALL BUSINESS')
group by 1,2,3,4
order by dollarsobligated DESC
limit 20
%%bq query
select
*
from `gpqueries.contracts.raw`
where productorservicecode in ('1510: AIRCRAFT, FIXED WING')
and contractingofficerbusinesssizedetermination in ('S: SMALL BUSINESS')
limit 1
%%bq query
select
claimantprogramcode,
principalnaicscode,
sum(dollarsobligated) as dollarsobligated
from `fiery-set-171213.vrec.usa_spending_all`
where contractingofficerbusinesssizedetermination in ("S: SMALL BUSINESS")
group by 1,2
order by dollarsobligated DESC
limit 10
Explanation: Understanding where the budget is spent
End of explanation
%%bq query -n df_query
select
fiscal_year,
sum(dollarsobligated) as dollarsobligated
from `fiery-set-171213.vrec.usa_spending_all`
where contractingofficerbusinesssizedetermination in ("S: SMALL BUSINESS")
group by 1
order by 1
df = df_query.execute(output_options=bq.QueryOutput.dataframe()).result()
plt = df.set_index('fiscal_year')['dollarsobligated'].plot(kind='bar', title='transactions amount for SMBs')
%%bq query -n df_query
#%%sql
select
smb.fiscal_year,
sum(smb.transaction) as smb,
sum(total.transaction) as total,
sum(smb.transaction)/sum(total.transaction) as percentage
from
(select
fiscal_year,
sum(dollarsobligated) as transaction
from `fiery-set-171213.vrec.usa_spending_all`
where contractingofficerbusinesssizedetermination in ("S: SMALL BUSINESS")
group by 1) as smb
join
(select
fiscal_year,
sum(dollarsobligated) as transaction
from `fiery-set-171213.vrec.usa_spending_all`
group by 1) as total
on smb.fiscal_year = total.fiscal_year
group by 1
order by 1
df = df_query.execute(output_options=bq.QueryOutput.dataframe()).result()
plt = df.set_index('fiscal_year')['percentage'].plot(kind='bar', title='dollars % for SMBs')
Explanation: Looking at SMBs by year
End of explanation
%%bq query
select
smb.principalnaicscode as principalnaicscode,
sum(total.count) as count,
sum(smb.dollarsobligated) as dollarsobligated_smb,
sum(total.dollarsobligated) as dollarsobligated_total,
sum(smb.dollarsobligated)/sum(total.dollarsobligated) as smb_percentage
from
(select
principalnaicscode,
count(*) as count,
sum(dollarsobligated) as dollarsobligated
from `fiery-set-171213.vrec.usa_spending_all`
where contractingofficerbusinesssizedetermination in ("S: SMALL BUSINESS")
group by 1) as smb
join
(select
principalnaicscode,
count(*) as count,
sum(dollarsobligated) as dollarsobligated
from `fiery-set-171213.vrec.usa_spending_all`
group by 1
having dollarsobligated > 0
) as total
on smb.principalnaicscode = total.principalnaicscode
group by 1
order by 5 DESC
limit 10
Explanation: SMB contract by gov. agency & by naics code
End of explanation
%%bq query -n df_query
select
maj_agency_cat,
#mod_agency,
#contractactiontype,
#typeofcontractpricing,
#performancebasedservicecontract,
state,
#vendorcountrycode,
#principalnaicscode,
contractingofficerbusinesssizedetermination,
#sum(dollarsobligated) as dollarsobligated
dollarsobligated
from `fiery-set-171213.vrec.usa_spending_all`
where vendorcountrycode in ('UNITED STATES', 'USA: UNITED STATES OF AMERICA')
and contractingofficerbusinesssizedetermination in ('O: OTHER THAN SMALL BUSINESS', 'S: SMALL BUSINESS')
and dollarsobligated > 0
#group by 1,2,3
limit 20000
df = df_query.execute(output_options=bq.QueryOutput.dataframe()).result()
df.head()
# Create dummy variable using pandas function get_dummies
df1 = df.join(pd.get_dummies(df['maj_agency_cat']))
df1 = df1.join(pd.get_dummies(df['state']))
df1 = df1.join(pd.get_dummies(df['contractingofficerbusinesssizedetermination']))
df1 = df1.drop('maj_agency_cat', axis = 1)
df1 = df1.drop('state', axis = 1)
df1 = df1.drop('contractingofficerbusinesssizedetermination', axis = 1)
df1.head()
train_data = df1.iloc[:,1:]
train_labels = df[['dollarsobligated']]
lm = LinearRegression()
lm.fit(train_data, train_labels)
# The coefficients
print('Coefficients: \n', lm.coef_)
# The mean squared error
print("Mean squared error: %.2f"
% np.mean((lm.predict(train_data) - train_labels) ** 2))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % lm.score(train_data, train_labels))
Explanation: Simple Linear regression (LR)
LR: predict the size of the contract
Many categorical features -> they need to be binarized -> this creates a very sparse matrix -> poor performance for LR
R-squared of only 2%
Not ideal for the problem we are tackling here
End of explanation
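Since the pandas.get_dummies columns used above produce a wide, mostly-zero design matrix, one workaround (a sketch added here for illustration, not part of the original notebook) is to keep the one-hot encoding sparse and add regularization; it reuses the LabelEncoder/OneHotEncoder already imported at the top and the df queried above, and the alpha value is arbitrary:
# Hedged sketch: sparse one-hot features + ridge regression instead of plain OLS.
from sklearn.linear_model import Ridge

cat_cols = ['maj_agency_cat', 'state', 'contractingofficerbusinesssizedetermination']
encoded = np.column_stack([LabelEncoder().fit_transform(df[c]) for c in cat_cols])
X_sparse = OneHotEncoder().fit_transform(encoded)  # scipy sparse matrix
ridge = Ridge(alpha=1.0).fit(X_sparse, df['dollarsobligated'])
print('Ridge R^2: %.3f' % ridge.score(X_sparse, df['dollarsobligated']))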
%%bq query -n df_query
select
vendorname,
maj_agency_cat,
state,
contractingofficerbusinesssizedetermination,
count(*) as count,
sum(dollarsobligated) as dollarsobligated
from `fiery-set-171213.vrec.usa_spending_all`
where vendorcountrycode in ('UNITED STATES', 'USA: UNITED STATES OF AMERICA')
and contractingofficerbusinesssizedetermination in ('O: OTHER THAN SMALL BUSINESS', 'S: SMALL BUSINESS')
and dollarsobligated > 0
group by 1,2,3,4
limit 20000
df = df_query.execute(output_options=bq.QueryOutput.dataframe()).result()
df.head()
#Create dummy variable using pandas function get_dummies
df1 = df.join(pd.get_dummies(df['maj_agency_cat']))
df1 = df1.join(pd.get_dummies(df['state']))
df1 = df1.join(pd.get_dummies(df['contractingofficerbusinesssizedetermination']))
df1 = df1.drop('maj_agency_cat', axis = 1)
df1 = df1.drop('state', axis = 1)
df1 = df1.drop('contractingofficerbusinesssizedetermination', axis = 1)
df1 = df1.drop('vendorname', axis = 1)
df1 = df1.drop('dollarsobligated', axis = 1)
train_data = df1.iloc[:,1:]
train_labels = df[['count']]
lm = LinearRegression()
lm.fit(train_data, train_labels)
# The coefficients
print('Coefficients: \n', lm.coef_)
# The mean squared error
print("Mean squared error: %.2f"
% np.mean((lm.predict(train_data) - train_labels) ** 2))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % lm.score(train_data, train_labels))
Explanation: LR: Predict the number of contracts (popularity)
Same issue as previously
R-squared of only 1%
Not ideal for the problem we are tackling here
End of explanation
%%bq query
select
#principalnaicscode,
fiscal_year,
maj_agency_cat,
#contractingofficerbusinesssizedetermination,
#vendorname,
productorservicecode,
count(*) as count,
sum(dollarsobligated) as dollarsobligated
from `fiery-set-171213.vrec.usa_spending_all`
#where contractingofficerbusinesssizedetermination in ("S: SMALL BUSINESS")
#where regexp_contains(principalnaicscode, "CONSTRUCTION")
#and regexp_contains(maj_agency_cat, "AGRICULTURE")
where regexp_contains(productorservicecode, "MEAT")
#and fiscal_year = 2016
group by 1,2,3
order by dollarsobligated DESC
limit 10
Explanation: MVP
MVP 1 - The most popular vendor
Search query = 'construction'
Enter your department name - eg. 'agriculture'
Ranking based on 'counts' of the number of contracts that occurred
TO-DO check the uppercase and lowercase in the REGEX
Do we want to add more parameters, such as Geo, size of the contract? To be discussed
End of explanation
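One way to handle the uppercase/lowercase TO-DO above is to lower-case the searched column before applying the regex. A sketch (added for illustration; the search term 'meat' and the column choice are just examples) using the Datalab BigQuery API imported at the top of this notebook:
# Hedged sketch: case-insensitive keyword search by normalizing case first.
search_term = 'meat'  # example search term
sql = '''
select maj_agency_cat, productorservicecode,
       count(*) as count, sum(dollarsobligated) as dollarsobligated
from `fiery-set-171213.vrec.usa_spending_all`
where regexp_contains(lower(productorservicecode), "{term}")
group by 1, 2
order by dollarsobligated desc
limit 10
'''.format(term=search_term.lower())
df_ci = bq.Query(sql).execute(output_options=bq.QueryOutput.dataframe()).result()
df_ci.head()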
%%bq query -n df_query
select
contractingofficerbusinesssizedetermination,
mod_agency,
vendorname,
count(*) as count
from `fiery-set-171213.vrec.usa_spending_all`
where vendorcountrycode in ('UNITED STATES', 'USA: UNITED STATES OF AMERICA')
and contractingofficerbusinesssizedetermination in ('O: OTHER THAN SMALL BUSINESS', 'S: SMALL BUSINESS')
and mod_agency not in ("")
group by 1,2,3
order by count DESC
limit 20000
df = df_query.execute(output_options=bq.QueryOutput.dataframe()).result()
df.head()
df1 = df.drop('contractingofficerbusinesssizedetermination', axis = 1)
n_agency = df1.mod_agency.unique().shape[0]
n_vendors = df1.vendorname.unique().shape[0]
print 'Number of gov agency = ' + str(n_agency) + ' | Number of vendors = ' + str(n_vendors)
# Convert categorical values with label encoding
le_agency = LabelEncoder()
label_agency = le_agency.fit_transform(df1['mod_agency'])
le_vendor = LabelEncoder()
label_vendor = le_vendor.fit_transform(df1['vendorname'])
df_agency = pd.DataFrame(label_agency)
df_vendor = pd.DataFrame(label_vendor)
df2 = pd.concat([df_agency, df_vendor], axis = 1)
df2 = pd.concat([df2, df1['count']], axis = 1)
df2.columns = ['mod_agency', 'vendorname', 'count']
df2.head(5)
# To get the right label back
# le_agency.inverse_transform([173, 100])
# Split into training and test data set
train_data, test_data = cv.train_test_split(df2, test_size=0.25)
#Build the matrix
train_data_matrix = np.zeros((n_agency, n_vendors))
for line in train_data.itertuples():
train_data_matrix[line[1]-1, line[2]-1] = line[3]
test_data_matrix = np.zeros((n_agency, n_vendors))
for line in test_data.itertuples():
test_data_matrix[line[1]-1, line[2]-1] = line[3]
#Compute cosine distance
user_similarity = pairwise_distances(train_data_matrix, metric='cosine')
item_similarity = pairwise_distances(train_data_matrix.T, metric='cosine')
def predict(ratings, similarity, type='user'):
if type == 'user':
mean_user_rating = ratings.mean(axis=1)
#You use np.newaxis so that mean_user_rating has same format as ratings
ratings_diff = (ratings - mean_user_rating[:, np.newaxis])
pred = mean_user_rating[:, np.newaxis] + similarity.dot(ratings_diff) / np.array([np.abs(similarity).sum(axis=1)]).T
elif type == 'item':
pred = ratings.dot(similarity) / np.array([np.abs(similarity).sum(axis=1)])
return pred
item_prediction = predict(train_data_matrix, item_similarity, type='item')
user_prediction = predict(train_data_matrix, user_similarity, type='user')
# Evaluation
def rmse(prediction, ground_truth):
prediction = prediction[ground_truth.nonzero()].flatten()
ground_truth = ground_truth[ground_truth.nonzero()].flatten() # keep only the nonzero entries, since we only want to evaluate predictions for observed values in the test set
return sqrt(mean_squared_error(prediction, ground_truth))
print 'User-based CF RMSE: ' + str(rmse(user_prediction, test_data_matrix))
print 'Item-based CF RMSE: ' + str(rmse(item_prediction, test_data_matrix))
Explanation: MVP 2 - Collaborative filtering
If a person A likes items 1, 2, 3 and B likes 2, 3, 4, then they have similar interests, so A should like item 4 and B should like item 1.
Looking at match between gov mod_agency (275) & vendors (770526)
See: https://cambridgespark.com/content/tutorials/implementing-your-own-recommender-systems-in-Python/index.html
TO DO - TRAINING (1.9M rows): the kernel crashed above 20K rows -> need to Map/Reduce, get a higher-performance machine, or use another algorithm (matrix factorization)?
TO DO - Think about scaling or binarizing the count data -> to improve results
TO DO - Look at match between product service code (5833) & vendors (770526)
TO DO - Add Geo filter?
TO DO - Already done business with a company?
End of explanation
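As a sketch of the matrix-factorization alternative mentioned in the TO-DO list above (added for illustration, not part of the original analysis), a truncated SVD of the agency-vendor matrix yields low-rank score predictions that can be evaluated with the same rmse helper; the choice of 20 latent factors is arbitrary:
# Hedged sketch: collaborative filtering via truncated SVD (low-rank factorization).
import scipy.sparse
from scipy.sparse.linalg import svds

# k must be smaller than both matrix dimensions; 20 is an illustrative choice.
u, s, vt = svds(scipy.sparse.csr_matrix(train_data_matrix), k=20)
svd_prediction = np.dot(np.dot(u, np.diag(s)), vt)
print('SVD-based CF RMSE: ' + str(rmse(svd_prediction, test_data_matrix)))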
print 'Workflow 1'
print '=' * 100
print 'Select your agency:'
agency = df1['mod_agency'][10]
print agency
print '=' * 100
print '1. Have you considered working with these SMB companies (user prediction)?'
agency = le_agency.transform(agency)
vendor_reco = pd.DataFrame(user_prediction[agency, :])
labels = pd.DataFrame(le_vendor.inverse_transform(range(0, len(vendor_reco))))
df_reco = pd.concat([vendor_reco, labels], axis = 1)
df_reco.columns = ['reco_score', 'vendorname']
#Join to get the SMB list
df_smb = df.drop(['mod_agency', 'count'], axis = 1)
df_reco = df_reco.set_index('vendorname').join(df_smb.set_index('vendorname'))
df_reco = df_reco.sort_values(['reco_score'], ascending = [0])
df_reco[df_reco['contractingofficerbusinesssizedetermination'] == 'S: SMALL BUSINESS'].head(10)
Explanation: Workflow 1
<br>
a. Collaborative Filtering - user-item prediction
End of explanation
print '=' * 100
print '2. Have you considered working with these SMB companies (item-item prediction?)'
vendor_reco = pd.DataFrame(item_prediction[agency, :])
df_reco = pd.concat([vendor_reco, labels], axis = 1)
df_reco.columns = ['reco_score', 'vendorname']
df_reco = df_reco.set_index('vendorname').join(df_smb.set_index('vendorname'))
df_reco = df_reco.sort_values(['reco_score'], ascending = [0])
df_reco[df_reco['contractingofficerbusinesssizedetermination'] == 'S: SMALL BUSINESS'].head(10)
print 'Workflow 2'
print '=' * 100
print 'Select a vendor:'
# Workflow 2 - WIP
# Select a vendor
# Other similar vendor
Explanation: b. Collaborative Filtering - item-item prediction
End of explanation
%%sql
select
substr(productorservicecode, 1, 4) product_id,
first(substr(productorservicecode, 7)) product_name,
count(*) transactions,
sum(dollarsobligated) sum_dollarsobligated
from
gpqueries:contracts.raw
group by
product_id
order by
sum_dollarsobligated desc
limit 10
Explanation: OTHERS - FROM TUTORIAL - Anton Tarasenko
Data Mining Government Clients
Suppose you want to start selling to the government. While FBO.gov publishes government RFPs and you can apply there, government agencies often issue requests when they've already chosen the supplier. Agencies go through FBO.gov because it's a mandatory step for deals north of $25K. But winning at this stage is unlikely if an RFP is already tailored for another supplier.
Reaching warm leads in advance would increase chances of winning a government contract. The contracts data helps identify the warm leads by looking at purchases in the previous years.
There're several ways of searching through those years.
Who Buys What You Make
The goods and services bought in each transaction are encoded in the variable productorservicecode. Top ten product categories according to this variable:
End of explanation
%%sql
select
substr(agencyid, 1, 4) agency_id,
first(substr(agencyid, 7)) agency_name,
count(*) transactions,
sum(dollarsobligated) sum_dollarsobligated
from
gpqueries:contracts.raw
where
productorservicecode contains 'software'
group by
agency_id
order by
sum_dollarsobligated desc
ignore case
Explanation: You can find agencies that buy products like yours. If it's "software":
End of explanation
%%sql
select
substr(agencyid, 1, 4) agency_id,
first(substr(agencyid, 7)) agency_name,
substr(principalnaicscode, 1, 6) naics_id,
first(substr(principalnaicscode, 9)) naics_name,
count(*) transactions,
sum(dollarsobligated) sum_dollarsobligated
from
gpqueries:contracts.raw
where
principalnaicscode contains 'software' and
fiscal_year = 2015
group by
agency_id, naics_id
order by
sum_dollarsobligated desc
ignore case
Explanation: What Firms in Your Industry Sell to the Government
Another way to find customers is the variable principalnaicscode that encodes the industry in which the vendor does business.
The list of NAICS codes is available at Census.gov, but you can do text search in the table. Let's find who bought software from distributors in 2015:
End of explanation
%%sql
select
fiscal_year,
dollarsobligated,
vendorname, city, state, annualrevenue, numberofemployees,
descriptionofcontractrequirement
from
gpqueries:contracts.raw
where
agencyid contains 'transportation security administration' and
principalnaicscode contains 'computer and software stores'
ignore case
Explanation: Inspecting Specific Transactions
You can learn details from looking at transactions for a specific (agency, NAICS) pair. For example, what software does TSA buy?
End of explanation
%%sql
select
agencyid,
dollarsobligated,
vendorname,
descriptionofcontractrequirement
from
gpqueries:contracts.raw
where
vendorname contains 'tableau' or
vendorname contains 'socrata' or
vendorname contains 'palantir' or
vendorname contains 'revolution analytics' or
vendorname contains 'mathworks' or
vendorname contains 'statacorp' or
vendorname contains 'mathworks'
order by
dollarsobligated desc
limit
100
ignore case
Explanation: Alternatively, specify vendors your product relates to and check how the government uses it. Top deals in data analytics:
End of explanation
%%sql
select
agencyid,
dollarsobligated,
descriptionofcontractrequirement
from
gpqueries:contracts.raw
where
descriptionofcontractrequirement contains 'body camera'
limit
100
ignore case
Explanation: Searching Through Descriptions
Full-text search and regular expressions for the variable descriptionofcontractrequirement narrow results for relevant product groups:
End of explanation
%%sql
select
substr(pop_state_code, 1, 2) state_code,
first(substr(pop_state_code, 4)) state_name,
sum(dollarsobligated) sum_dollarsobligated
from
gpqueries:contracts.raw
group by
state_code
order by
sum_dollarsobligated desc
Explanation: Some rows of descriptionofcontractrequirement contain codes like "IGF::CT::IGF". These codes classify the purchase into three groups of "Inherently Governmental Functions" (IGF):
IGF::CT::IGF for Critical Functions
IGF::CL::IGF for Closely Associated
IGF::OT::IGF for Other Functions
Narrowing Your Geography
You can find local opportunities using variables for vendors (city, state) and services sold (PlaceofPerformanceCity, pop_state_code). The states where most contracts are delivered in:
End of explanation
%%sql --module gpq
define query vendor_size_by_agency
select
substr(agencyid, 1, 4) agency_id,
first(substr(agencyid, 7)) agency_name,
nth(11, quantiles(annualrevenue, 21)) vendor_median_annualrevenue,
nth(11, quantiles(numberofemployees, 21)) vendor_median_numberofemployees,
count(*) transactions,
sum(dollarsobligated) sum_dollarsobligated
from
gpqueries:contracts.raw
group by
agency_id
having
transactions > 1000 and
sum_dollarsobligated > 10e6
order by
vendor_median_annualrevenue asc
bq.Query(gpq.vendor_size_by_agency).to_dataframe()
Explanation: Facts about Government Contracting
Let's check some popular statements about government contracting.
Small Businesses Win Most Contracts
Contractors had to report their revenue and the number of employees. It makes easy to check if small business is welcomed in government contracting:
End of explanation
%%sql
select
womenownedflag,
count(*) transactions,
sum(dollarsobligated) sum_dollarsobligated
from
gpqueries:contracts.raw
group by
womenownedflag
Explanation: The median shows the most likely supplier. Agencies on the top of the table actively employ vendors whose annual revenue is less than $1mn.
The Department of Defence, the largest buyer with $4.5tn worth of goods and services bought over these 17 years, has the median vendor with $2.5mn in revenue and 20 employees. It means that half of the DoD's vendors have less than $2.5mn in revenue.
Set-Aside Deals Take a Small Share
Set-aside purchases are reserved for special categories of suppliers, like women-, minority-, and veteran-owned businesses. There's a lot of confusion about their share in transactions. We can settle this confusion with data:
End of explanation
%%sql
select
womenownedflag, veteranownedflag, minorityownedbusinessflag,
count(*) transactions,
sum(dollarsobligated) sum_dollarsobligated
from
gpqueries:contracts.raw
group by
womenownedflag, veteranownedflag, minorityownedbusinessflag
order by
womenownedflag, veteranownedflag, minorityownedbusinessflag desc
Explanation: Women-owned businesses make about one tenth of the transactions, but their share in terms of sales is only 3.7%.
A cross-tabulation for major set-aside categories:
End of explanation
%%sql
select
sum(if(before2015.dunsnumber is null, 1, 0)) new_vendors,
sum(if(before2015.dunsnumber is null, 0, 1)) old_vendors
from
flatten((select unique(dunsnumber) dunsnumber from gpqueries:contracts.raw where fiscal_year = 2015), dunsnumber) in2015
left join
flatten((select unique(dunsnumber) dunsnumber from gpqueries:contracts.raw where fiscal_year < 2015), dunsnumber) before2015
on before2015.dunsnumber = in2015.dunsnumber
Explanation: For example, firms owned by women, veterans, and minorities (all represented at the same time) sell $5bn in goods and services. That's 0.07% of all government purchases.
New Vendors Emerge Each Year
Becoming a government contractor may seem difficult at first, but let's see how many new contractors the government had in 2015.
End of explanation |
5,536 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step4: version 1.0.3
Text Analysis and Entity Resolution
Entity resolution is a common, yet difficult problem in data cleaning and integration. This lab will demonstrate how we can use Apache Spark to apply powerful and scalable text analysis techniques and perform entity resolution across two datasets of commercial products.
Entity Resolution, or "Record linkage" is the term used by statisticians, epidemiologists, and historians, among others, to describe the process of joining records from one data source with another that describe the same entity. Our terms with the same meaning include, "entity disambiguation/linking", duplicate detection", "deduplication", "record matching", "(reference) reconciliation", "object identification", "data/information integration", and "conflation".
Entity Resolution (ER) refers to the task of finding records in a dataset that refer to the same entity across different data sources (e.g., data files, books, websites, databases). ER is necessary when joining datasets based on entities that may or may not share a common identifier (e.g., database key, URI, National identification number), as may be the case due to differences in record shape, storage location, and/or curator style or preference. A dataset that has undergone ER may be referred to as being cross-linked.
Code
This assignment can be completed using basic Python, pySpark Transformations and actions, and the plotting library matplotlib. Other libraries are not allowed.
Files
Data files for this assignment are from the metric-learning project and can be found at
Step5: Let's examine the lines that were just loaded in the two subset (small) files - one from Google and one from Amazon
Step7: Part 1
Step9: (1b) Removing stopwords
Stopwords are common (English) words that do not contribute much to the content or meaning of a document (e.g., "the", "a", "is", "to", etc.). Stopwords add noise to bag-of-words comparisons, so they are usually excluded.
Using the included file "stopwords.txt", implement tokenize, an improved tokenizer that does not emit stopwords.
Step11: (1c) Tokenizing the small datasets
Now let's tokenize the two small datasets. For each ID in a dataset, tokenize the values, and then count the total number of tokens.
How many tokens, total, are there in the two datasets?
Step13: (1d) Amazon record with the most tokens
Which Amazon record has the biggest number of tokens?
In other words, you want to sort the records and get the one with the largest count of tokens.
Step15: Part 2
Step16: (2b) Create a corpus
Create a pair RDD called corpusRDD, consisting of a combination of the two small datasets, amazonRecToToken and googleRecToToken. Each element of the corpusRDD should be a pair consisting of a key from one of the small datasets (ID or URL) and the value is the associated value for that key from the small datasets.
Step18: (2c) Implement an IDFs function
Implement idfs that assigns an IDF weight to every unique token in an RDD called corpus. The function should return an pair RDD where the key is the unique token and value is the IDF weight for the token.
Recall that the IDF weight for a token, t, in a set of documents, U, is computed as follows
Step19: (2d) Tokens with the smallest IDF
Print out the 11 tokens with the smallest IDF in the combined small dataset.
Step20: (2e) IDF Histogram
Plot a histogram of IDF values. Be sure to use appropriate scaling and bucketing for the data.
First plot the histogram using matplotlib
Step22: (2f) Implement a TF-IDF function
Use your tf function to implement a tfidf(tokens, idfs) function that takes a list of tokens from a document and a Python dictionary of IDF weights and returns a Python dictionary mapping individual tokens to total TF-IDF weights.
The steps your function should perform are
Step26: Part 3
Step28: (3b) Implement a cosineSimilarity function
Implement a cosineSimilarity(string1, string2, idfsDictionary) function that takes two strings and a dictionary of IDF weights, and computes their cosine similarity in the context of some global IDF weights.
The steps you should perform are
Step31: (3c) Perform Entity Resolution
Now we can finally do some entity resolution!
For every product record in the small Google dataset, use your cosineSimilarity function to compute its similarity to every record in the small Amazon dataset. Then, build a dictionary mapping (Google URL, Amazon ID) tuples to similarity scores between 0 and 1.
We'll do this computation two different ways, first we'll do it without a broadcast variable, and then we'll use a broadcast variable
The steps you should perform are
Step34: (3d) Perform Entity Resolution with Broadcast Variables
The solution in (3c) works well for small datasets, but it requires Spark to (automatically) send the idfsSmallWeights variable to all the workers. If we didn't cache() similarities, then it might have to be recreated if we run similar() multiple times. This would cause Spark to send idfsSmallWeights every time.
Instead, we can use a broadcast variable - we define the broadcast variable in the driver and then we can refer to it in each worker. Spark saves the broadcast variable at each worker, so it is only sent once.
The steps you should perform are
Step36: (3e) Perform a Gold Standard evaluation
First, we'll load the "gold standard" data and use it to answer several questions. We read and parse the Gold Standard data, where the format of each line is "Amazon Product ID","Google URL". The resulting RDD has elements of the form ("AmazonID GoogleURL", 'gold')
Step37: Using the "gold standard" data we can answer the following questions
Step38: Part 4
Step39: (4b) Compute IDFs and TF-IDFs for the full datasets
We will reuse your code from above to compute IDF weights for the complete combined datasets.
The steps you should perform are
Step40: (4c) Compute Norms for the weights from the full datasets
We will reuse your code from above to compute norms of the IDF weights for the complete combined dataset.
The steps you should perform are
Step42: (4d) Create inverted indicies from the full datasets
Build inverted indices of both data sources.
The steps you should perform are
Step44: (4e) Identify common tokens from the full dataset
We are now in position to efficiently perform ER on the full datasets. Implement the following algorithm to build an RDD that maps a pair of (ID, URL) to a list of tokens they share in common
Step46: (4f) Identify common tokens from the full dataset
Use the data structures from parts (4a) and (4e) to build a dictionary to map record pairs to cosine similarity scores.
The steps you should perform are
Step47: Part 5
Step48: The next step is to pick a threshold between 0 and 1 for the count of True Positives (true duplicates above the threshold). However, we would like to explore many different thresholds. To do this, we divide the space of thresholds into 100 bins, and take the following actions
Step49: (5b) Precision, Recall, and F-measures
We define functions so that we can compute the Precision, Recall, and F-measure as a function of threshold value
Step50: (5c) Line Plots
We can make line plots of precision, recall, and F-measure as a function of threshold value, for thresholds between 0.0 and 1.0. You can change nthresholds (above in part (5a)) to change the threshold values to plot. | Python Code:
import re
DATAFILE_PATTERN = '^(.+),"(.+)",(.*),(.*),(.*)'
def removeQuotes(s):
Remove quotation marks from an input string
Args:
s (str): input string that might have the quote "" characters
Returns:
str: a string without the quote characters
return ''.join(i for i in s if i!='"')
def parseDatafileLine(datafileLine):
Parse a line of the data file using the specified regular expression pattern
Args:
datafileLine (str): input string that is a line from the data file
Returns:
str: a string parsed using the given regular expression and without the quote characters
match = re.search(DATAFILE_PATTERN, datafileLine)
if match is None:
print 'Invalid datafile line: %s' % datafileLine
return (datafileLine, -1)
elif match.group(1) == '"id"':
print 'Header datafile line: %s' % datafileLine
return (datafileLine, 0)
else:
product = '%s %s %s' % (match.group(2), match.group(3), match.group(4))
return ((removeQuotes(match.group(1)), product), 1)
import sys
import os
from test_helper import Test
baseDir = os.path.join('data')
inputPath = os.path.join('cs100', 'lab3')
GOOGLE_PATH = 'Google.csv'
GOOGLE_SMALL_PATH = 'Google_small.csv'
AMAZON_PATH = 'Amazon.csv'
AMAZON_SMALL_PATH = 'Amazon_small.csv'
GOLD_STANDARD_PATH = 'Amazon_Google_perfectMapping.csv'
STOPWORDS_PATH = 'stopwords.txt'
def parseData(filename):
Parse a data file
Args:
filename (str): input file name of the data file
Returns:
RDD: a RDD of parsed lines
return (sc
.textFile(filename, 4, 0)
.map(parseDatafileLine)
.cache())
def loadData(path):
Load a data file
Args:
path (str): input file name of the data file
Returns:
RDD: a RDD of parsed valid lines
filename = os.path.join(baseDir, inputPath, path)
raw = parseData(filename).cache()
failed = (raw
.filter(lambda s: s[1] == -1)
.map(lambda s: s[0]))
for line in failed.take(10):
print '%s - Invalid datafile line: %s' % (path, line)
valid = (raw
.filter(lambda s: s[1] == 1)
.map(lambda s: s[0])
.cache())
print '%s - Read %d lines, successfully parsed %d lines, failed to parse %d lines' % (path,
raw.count(),
valid.count(),
failed.count())
assert failed.count() == 0
assert raw.count() == (valid.count() + 1)
return valid
googleSmall = loadData(GOOGLE_SMALL_PATH)
google = loadData(GOOGLE_PATH)
amazonSmall = loadData(AMAZON_SMALL_PATH)
amazon = loadData(AMAZON_PATH)
Explanation: version 1.0.3
Text Analysis and Entity Resolution
Entity resolution is a common, yet difficult problem in data cleaning and integration. This lab will demonstrate how we can use Apache Spark to apply powerful and scalable text analysis techniques and perform entity resolution across two datasets of commercial products.
Entity Resolution, or "Record linkage" is the term used by statisticians, epidemiologists, and historians, among others, to describe the process of joining records from one data source with another that describe the same entity. Our terms with the same meaning include, "entity disambiguation/linking", duplicate detection", "deduplication", "record matching", "(reference) reconciliation", "object identification", "data/information integration", and "conflation".
Entity Resolution (ER) refers to the task of finding records in a dataset that refer to the same entity across different data sources (e.g., data files, books, websites, databases). ER is necessary when joining datasets based on entities that may or may not share a common identifier (e.g., database key, URI, National identification number), as may be the case due to differences in record shape, storage location, and/or curator style or preference. A dataset that has undergone ER may be referred to as being cross-linked.
Code
This assignment can be completed using basic Python, pySpark Transformations and actions, and the plotting library matplotlib. Other libraries are not allowed.
Files
Data files for this assignment are from the metric-learning project and can be found at:
cs100/lab3
The directory contains the following files:
Google.csv, the Google Products dataset
Amazon.csv, the Amazon dataset
Google_small.csv, 200 records sampled from the Google data
Amazon_small.csv, 200 records sampled from the Amazon data
Amazon_Google_perfectMapping.csv, the "gold standard" mapping
stopwords.txt, a list of common English words
Besides the complete data files, there are "sample" data files for each dataset - we will use these for Part 1. In addition, there is a "gold standard" file that contains all of the true mappings between entities in the two datasets. Every row in the gold standard file has a pair of record IDs (one Google, one Amazon) that belong to two record that describe the same thing in the real world. We will use the gold standard to evaluate our algorithms.
Part 0: Preliminaries
We read in each of the files and create an RDD consisting of lines.
For each of the data files ("Google.csv", "Amazon.csv", and the samples), we want to parse the IDs out of each record. The IDs are the first column of the file (they are URLs for Google, and alphanumeric strings for Amazon). Omitting the headers, we load these data files into pair RDDs where the mapping ID is the key, and the value is a string consisting of the name/title, description, and manufacturer from the record.
The file format of an Amazon line is:
"id","title","description","manufacturer","price"
The file format of a Google line is:
"id","name","description","manufacturer","price"
End of explanation
for line in googleSmall.take(3):
print 'google: %s: %s\n' % (line[0], line[1])
for line in amazonSmall.take(3):
print 'amazon: %s: %s\n' % (line[0], line[1])
Explanation: Let's examine the lines that were just loaded in the two subset (small) files - one from Google and one from Amazon
End of explanation
# TODO: Replace <FILL IN> with appropriate code
quickbrownfox = 'A quick brown fox jumps over the lazy dog.'
split_regex = r'\W+'
def simpleTokenize(string):
A simple implementation of input string tokenization
Args:
string (str): input string
Returns:
list: a list of tokens
return filter(None, re.split(split_regex, string.lower()))
print simpleTokenize(quickbrownfox) # Should give ['a', 'quick', 'brown', ... ]
# TEST Tokenize a String (1a)
Test.assertEquals(simpleTokenize(quickbrownfox),
['a','quick','brown','fox','jumps','over','the','lazy','dog'],
'simpleTokenize should handle sample text')
Test.assertEquals(simpleTokenize(' '), [], 'simpleTokenize should handle empty string')
Test.assertEquals(simpleTokenize('!!!!123A/456_B/789C.123A'), ['123a','456_b','789c','123a'],
'simpleTokenize should handle puntuations and lowercase result')
Test.assertEquals(simpleTokenize('fox fox'), ['fox', 'fox'],
'simpleTokenize should not remove duplicates')
Explanation: Part 1: ER as Text Similarity - Bags of Words
A simple approach to entity resolution is to treat all records as strings and compute their similarity with a string distance function. In this part, we will build some components for performing bag-of-words text-analysis, and then use them to compute record similarity.
Bag-of-words is a conceptually simple yet powerful approach to text analysis.
The idea is to treat strings, a.k.a. documents, as unordered collections of words, or tokens, i.e., as bags of words.
Note on terminology: a "token" is the result of parsing the document down to the elements we consider "atomic" for the task at hand. Tokens can be things like words, numbers, acronyms, or other exotica like word-roots or fixed-length character strings.
Bag of words techniques all apply to any sort of token, so when we say "bag-of-words" we really mean "bag-of-tokens," strictly speaking.
Tokens become the atomic unit of text comparison. If we want to compare two documents, we count how many tokens they share in common. If we want to search for documents with keyword queries (this is what Google does), then we turn the keywords into tokens and find documents that contain them. The power of this approach is that it makes string comparisons insensitive to small differences that probably do not affect meaning much, for example, punctuation and word order.
1(a) Tokenize a String
Implement the function simpleTokenize(string) that takes a string and returns a list of non-empty tokens in the string. simpleTokenize should split strings using the provided regular expression. Since we want to make token-matching case insensitive, make sure all tokens are turned lower-case. Give an interpretation, in natural language, of what the regular expression, split_regex, matches.
If you need help with Regular Expressions, try the site regex101 where you can interactively explore the results of applying different regular expressions to strings. Note that \W includes the "_" character. You should use re.split() to perform the string split. Also, make sure you remove any empty tokens.
End of explanation
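As a tiny illustration of the bag-of-words idea described above (the two product strings are invented for this example), comparing two records reduces to counting the tokens they share:
# Hedged illustration: token overlap between two made-up product descriptions.
doc_a = set(simpleTokenize('Apple iPod nano 8GB Silver MP3 player'))
doc_b = set(simpleTokenize('apple ipod nano 8gb (silver)'))
print('Shared tokens: %s' % sorted(doc_a & doc_b))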
# TODO: Replace <FILL IN> with appropriate code
stopfile = os.path.join(baseDir, inputPath, STOPWORDS_PATH)
stopwords = set(sc.textFile(stopfile).collect())
print 'These are the stopwords: %s' % stopwords
def tokenize(string):
An implementation of input string tokenization that excludes stopwords
Args:
string (str): input string
Returns:
list: a list of tokens without stopwords
return filter(lambda x: not x in stopwords and x != '', re.split(split_regex, string.lower()))
print tokenize(quickbrownfox) # Should give ['quick', 'brown', ... ]
# TEST Removing stopwords (1b)
Test.assertEquals(tokenize("Why a the?"), [], 'tokenize should remove all stopwords')
Test.assertEquals(tokenize("Being at the_?"), ['the_'], 'tokenize should handle non-stopwords')
Test.assertEquals(tokenize(quickbrownfox), ['quick','brown','fox','jumps','lazy','dog'],
'tokenize should handle sample text')
Explanation: (1b) Removing stopwords
Stopwords are common (English) words that do not contribute much to the content or meaning of a document (e.g., "the", "a", "is", "to", etc.). Stopwords add noise to bag-of-words comparisons, so they are usually excluded.
Using the included file "stopwords.txt", implement tokenize, an improved tokenizer that does not emit stopwords.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
amazonRecToToken = amazonSmall.map(lambda x: (x[0], tokenize(x[1])))
googleRecToToken = googleSmall.map(lambda x: (x[0], tokenize(x[1])))
def countTokens(vendorRDD):
Count and return the number of tokens
Args:
vendorRDD (RDD of (recordId, tokenizedValue)): Pair tuple of record ID to tokenized output
Returns:
count: count of all tokens
return vendorRDD.map(lambda x: len(x[1])).sum()
totalTokens = countTokens(amazonRecToToken) + countTokens(googleRecToToken)
print 'There are %s tokens in the combined datasets' % totalTokens
# TEST Tokenizing the small datasets (1c)
Test.assertEquals(totalTokens, 22520, 'incorrect totalTokens')
Explanation: (1c) Tokenizing the small datasets
Now let's tokenize the two small datasets. For each ID in a dataset, tokenize the values, and then count the total number of tokens.
How many tokens, total, are there in the two datasets?
End of explanation
# TODO: Replace <FILL IN> with appropriate code
def findBiggestRecord(vendorRDD):
Find and return the record with the largest number of tokens
Args:
vendorRDD (RDD of (recordId, tokens)): input Pair Tuple of record ID and tokens
Returns:
list: a list of 1 Pair Tuple of record ID and tokens
return vendorRDD.takeOrdered(1, lambda x: -len(x[1]))
biggestRecordAmazon = findBiggestRecord(amazonRecToToken)
print 'The Amazon record with ID "%s" has the most tokens (%s)' % (biggestRecordAmazon[0][0],
len(biggestRecordAmazon[0][1]))
# TEST Amazon record with the most tokens (1d)
Test.assertEquals(biggestRecordAmazon[0][0], 'b000o24l3q', 'incorrect biggestRecordAmazon')
Test.assertEquals(len(biggestRecordAmazon[0][1]), 1547, 'incorrect len for biggestRecordAmazon')
Explanation: (1d) Amazon record with the most tokens
Which Amazon record has the biggest number of tokens?
In other words, you want to sort the records and get the one with the largest count of tokens.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
def tf(tokens):
Compute TF
Args:
tokens (list of str): input list of tokens from tokenize
Returns:
dictionary: a dictionary of tokens to its TF values
weights = {}
count = 0
for token in tokens:
if token in weights:
weights[token] += 1
else:
weights[token] = 1
count += 1
for key, value in weights.items():
weights[key] = float(value)/count
return weights
print tf(tokenize(quickbrownfox)) # Should give { 'quick': 0.1666 ... }
# TEST Implement a TF function (2a)
tf_test = tf(tokenize(quickbrownfox))
Test.assertEquals(tf_test, {'brown': 0.16666666666666666, 'lazy': 0.16666666666666666,
'jumps': 0.16666666666666666, 'fox': 0.16666666666666666,
'dog': 0.16666666666666666, 'quick': 0.16666666666666666},
'incorrect result for tf on sample text')
tf_test2 = tf(tokenize('one_ one_ two!'))
Test.assertEquals(tf_test2, {'one_': 0.6666666666666666, 'two': 0.3333333333333333},
'incorrect result for tf test')
Explanation: Part 2: ER as Text Similarity - Weighted Bag-of-Words using TF-IDF
Bag-of-words comparisons are not very good when all tokens are treated the same: some tokens are more important than others. Weights give us a way to specify which tokens to favor. With weights, when we compare documents, instead of counting common tokens, we sum up the weights of common tokens. A good heuristic for assigning weights is called "Term-Frequency/Inverse-Document-Frequency," or TF-IDF for short.
TF
TF rewards tokens that appear many times in the same document. It is computed as the frequency of a token in a document, that is, if document d contains 100 tokens and token t appears in d 5 times, then the TF weight of t in d is 5/100 = 1/20. The intuition for TF is that if a word occurs often in a document, then it is more important to the meaning of the document.
IDF
IDF rewards tokens that are rare overall in a dataset. The intuition is that it is more significant if two documents share a rare word than a common one. IDF weight for a token, t, in a set of documents, U, is computed as follows:
Let N be the total number of documents in U
Find n(t), the number of documents in U that contain t
Then IDF(t) = N/n(t).
Note that n(t)/N is the frequency of t in U, and N/n(t) is the inverse frequency.
Note on terminology: Sometimes token weights depend on the document the token belongs to, that is, the same token may have a different weight when it's found in different documents. We call these weights local weights. TF is an example of a local weight, because it depends on the length of the source. On the other hand, some token weights only depend on the token, and are the same everywhere that token is found. We call these weights global, and IDF is one such weight.
TF-IDF
Finally, to bring it all together, the total TF-IDF weight for a token in a document is the product of its TF and IDF weights.
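As a quick worked illustration of these definitions (a toy example for intuition only, not part of the graded exercises), consider a two-document corpus in plain Python:
# Toy TF-IDF illustration (not part of the exercises): 2 documents, so N = 2
toy_corpus = {'doc1': ['quick', 'fox'], 'doc2': ['lazy', 'fox']}
N = float(len(toy_corpus))
toy_tf_doc1 = {'quick': 0.5, 'fox': 0.5}                 # each token is 1 of the 2 tokens in doc1
toy_idf = {'quick': N / 1, 'fox': N / 2, 'lazy': N / 1}  # IDF(t) = N / n(t)
toy_tfidf_doc1 = dict((t, toy_tf_doc1[t] * toy_idf[t]) for t in toy_tf_doc1)
print toy_tfidf_doc1   # {'quick': 1.0, 'fox': 0.5} (order may vary)
Even though 'quick' and 'fox' have the same TF in doc1, 'fox' appears in every document, so its IDF, and therefore its TF-IDF weight, is halved: the rarer token ends up with the larger weight.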
(2a) Implement a TF function
Implement tf(tokens) that takes a list of tokens and returns a Python dictionary mapping tokens to TF weights.
The steps your function should perform are:
Create an empty Python dictionary
For each of the tokens in the input tokens list, count 1 for each occurrence and add the token to the dictionary
For each of the tokens in the dictionary, divide the token's count by the total number of tokens in the input tokens list
End of explanation
# TODO: Replace <FILL IN> with appropriate code
corpusRDD = amazonRecToToken.union(googleRecToToken)
# TEST Create a corpus (2b)
Test.assertEquals(corpusRDD.count(), 400, 'incorrect corpusRDD.count()')
Explanation: (2b) Create a corpus
Create a pair RDD called corpusRDD, consisting of a combination of the two small datasets, amazonRecToToken and googleRecToToken. Each element of the corpusRDD should be a pair consisting of a key from one of the small datasets (ID or URL) and the value is the associated value for that key from the small datasets.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
def idfs(corpus):
    """ Compute IDF
    Args:
        corpus (RDD): input corpus
    Returns:
        RDD: a RDD of (token, IDF value)
    """
N = corpus.count()
uniqueTokens = corpus.flatMap(lambda x: list(set(x[1])))
tokenCountPairTuple = uniqueTokens.map(lambda x: (x, 1))
tokenSumPairTuple = tokenCountPairTuple.reduceByKey(lambda a, b: a + b)
return tokenSumPairTuple.map(lambda x: (x[0], float(N)/x[1]))
idfsSmall = idfs(corpusRDD)
uniqueTokenCount = idfsSmall.count()
print 'There are %s unique tokens in the small datasets.' % uniqueTokenCount
# TEST Implement an IDFs function (2c)
Test.assertEquals(uniqueTokenCount, 4772, 'incorrect uniqueTokenCount')
tokenSmallestIdf = idfsSmall.takeOrdered(1, lambda s: s[1])[0]
Test.assertEquals(tokenSmallestIdf[0], 'software', 'incorrect smallest IDF token')
Test.assertTrue(abs(tokenSmallestIdf[1] - 4.25531914894) < 0.0000000001,
'incorrect smallest IDF value')
Explanation: (2c) Implement an IDFs function
Implement idfs that assigns an IDF weight to every unique token in an RDD called corpus. The function should return a pair RDD where the key is the unique token and the value is the IDF weight for the token.
Recall that the IDF weight for a token, t, in a set of documents, U, is computed as follows:
Let N be the total number of documents in U.
Find n(t), the number of documents in U that contain t.
Then IDF(t) = N/n(t).
The steps your function should perform are:
Calculate N. Think about how you can calculate N from the input RDD.
Create an RDD (not a pair RDD) containing the unique tokens from each document in the input corpus. For each document, you should only include a token once, even if it appears multiple times in that document.
For each of the unique tokens, count the number of documents n(t) in which it appears and then compute the IDF for that token: N/n(t)
Use your idfs to compute the IDF weights for all tokens in corpusRDD (the combined small datasets).
How many unique tokens are there?
End of explanation
smallIDFTokens = idfsSmall.takeOrdered(11, lambda s: s[1])
print smallIDFTokens
Explanation: (2d) Tokens with the smallest IDF
Print out the 11 tokens with the smallest IDF in the combined small dataset.
End of explanation
import matplotlib.pyplot as plt
small_idf_values = idfsSmall.map(lambda s: s[1]).collect()
fig = plt.figure(figsize=(8,3))
plt.hist(small_idf_values, 50, log=True)
pass
Explanation: (2e) IDF Histogram
Plot a histogram of IDF values. Be sure to use appropriate scaling and bucketing for the data.
First plot the histogram using matplotlib
End of explanation
# TODO: Replace <FILL IN> with appropriate code
def tfidf(tokens, idfs):
    """ Compute TF-IDF
    Args:
        tokens (list of str): input list of tokens from tokenize
        idfs (dictionary): token to IDF value
    Returns:
        dictionary: a dictionary of tokens to TF-IDF values
    """
tfs = tf(tokens)
tfIdfDict = dict((k, tfs[k] * idfs[k]) for k in tokens if k in idfs)
return tfIdfDict
recb000hkgj8k = amazonRecToToken.filter(lambda x: x[0] == 'b000hkgj8k').collect()[0][1]
idfsSmallWeights = idfsSmall.collectAsMap()
rec_b000hkgj8k_weights = tfidf(recb000hkgj8k, idfsSmallWeights)
print 'Amazon record "b000hkgj8k" has tokens and weights:\n%s' % rec_b000hkgj8k_weights
# TEST Implement a TF-IDF function (2f)
Test.assertEquals(rec_b000hkgj8k_weights,
{'autocad': 33.33333333333333, 'autodesk': 8.333333333333332,
'courseware': 66.66666666666666, 'psg': 33.33333333333333,
'2007': 3.5087719298245617, 'customizing': 16.666666666666664,
'interface': 3.0303030303030303}, 'incorrect rec_b000hkgj8k_weights')
Explanation: (2f) Implement a TF-IDF function
Use your tf function to implement a tfidf(tokens, idfs) function that takes a list of tokens from a document and a Python dictionary of IDF weights and returns a Python dictionary mapping individual tokens to total TF-IDF weights.
The steps your function should perform are:
Calculate the token frequencies (TF) for tokens
Create a Python dictionary where each token maps to the token's frequency times the token's IDF weight
Use your tfidf function to compute the weights of Amazon product record 'b000hkgj8k'. To do this, we need to extract the tokens for the record from the tokenized small Amazon dataset and we need to convert the IDFs for the small dataset into a Python dictionary. We can do the first part by using a filter() transformation to extract the matching record and a collect() action to return the value to the driver. For the second part, we use the collectAsMap() action to return the IDFs to the driver as a Python dictionary.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
import math
def dotprod(a, b):
    """ Compute dot product
    Args:
        a (dictionary): first dictionary of record to value
        b (dictionary): second dictionary of record to value
    Returns:
        dotProd: result of the dot product with the two input dictionaries
    """
return sum(a[k] * b[k] for k in a.keys() if k in b.keys())
def norm(a):
    """ Compute square root of the dot product
    Args:
        a (dictionary): a dictionary of record to value
    Returns:
        norm: the square root of the dot product of a with itself
    """
return math.sqrt(dotprod(a,a))
def cossim(a, b):
    """ Compute cosine similarity
    Args:
        a (dictionary): first dictionary of record to value
        b (dictionary): second dictionary of record to value
    Returns:
        cossim: dot product of two dictionaries divided by the norm of the first dictionary and
                then by the norm of the second dictionary
    """
return dotprod(a,b)/(norm(a) * norm(b))
testVec1 = {'foo': 2, 'bar': 3, 'baz': 5 }
testVec2 = {'foo': 1, 'bar': 0, 'baz': 20 }
dp = dotprod(testVec1, testVec2)
nm = norm(testVec1)
print dp, nm
# TEST Implement the components of a cosineSimilarity function (3a)
Test.assertEquals(dp, 102, 'incorrect dp')
Test.assertTrue(abs(nm - 6.16441400297) < 0.0000001, 'incorrrect nm')
Explanation: Part 3: ER as Text Similarity - Cosine Similarity
Now we are ready to do text comparisons in a formal way. The metric of string distance we will use is called cosine similarity. We will treat each document as a vector in some high dimensional space. Then, to compare two documents we compute the cosine of the angle between their two document vectors. This is much easier than it sounds.
The first question to answer is how do we represent documents as vectors? The answer is familiar: bag-of-words! We treat each unique token as a dimension, and treat token weights as magnitudes in their respective token dimensions. For example, suppose we use simple counts as weights, and we want to interpret the string "Hello, world! Goodbye, world!" as a vector. Then in the "hello" and "goodbye" dimensions the vector has value 1, in the "world" dimension it has value 2, and it is zero in all other dimensions.
The next question is: given two vectors how do we find the cosine of the angle between them? Recall the formula for the dot product of two vectors:
$$ a \cdot b = \| a \| \| b \| \cos \theta $$
Here $ a \cdot b = \sum a_i b_i $ is the ordinary dot product of two vectors, and $ \|a\| = \sqrt{ \sum a_i^2 } $ is the norm of $ a $.
We can rearrange terms and solve for the cosine to find it is simply the normalized dot product of the vectors. With our vector model, the dot product and norm computations are simple functions of the bag-of-words document representations, so we now have a formal way to compute similarity:
$$ similarity = \cos \theta = \frac{a \cdot b}{\|a\| \|b\|} = \frac{\sum a_i b_i}{\sqrt{\sum a_i^2} \sqrt{\sum b_i^2}} $$
Setting aside the algebra, the geometric interpretation is more intuitive. The angle between two document vectors is small if they share many tokens in common, because they are pointing in roughly the same direction. For that case, the cosine of the angle will be large. Otherwise, if the angle is large (and they have few words in common), the cosine is small. Therefore, cosine similarity scales proportionally with our intuitive sense of similarity.
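As a small numeric sanity check of the formula (illustration only, reusing the dotprod, norm and cossim helpers defined in the code cell above), take the bag-of-words vectors for "Hello, world! Goodbye, world!" and "Goodbye, world!":
# Toy cosine similarity check (illustration only)
vec_a = {'hello': 1, 'world': 2, 'goodbye': 1}
vec_b = {'world': 1, 'goodbye': 1}
print dotprod(vec_a, vec_b)        # 2*1 + 1*1 = 3
print norm(vec_a), norm(vec_b)     # sqrt(6) ~ 2.449 and sqrt(2) ~ 1.414
print cossim(vec_a, vec_b)         # 3 / (sqrt(6) * sqrt(2)) ~ 0.866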
(3a) Implement the components of a cosineSimilarity function
Implement the components of a cosineSimilarity function.
Use the tokenize and tfidf functions, and the IDF weights from Part 2 for extracting tokens and assigning them weights.
The steps you should perform are:
Define a function dotprod that takes two Python dictionaries and produces the dot product of them, where the dot product is defined as the sum of the product of values for tokens that appear in both dictionaries
Define a function norm that returns the square root of the dot product of a dictionary and itself
Define a function cossim that returns the dot product of two dictionaries divided by the norm of the first dictionary and then by the norm of the second dictionary
End of explanation
# TODO: Replace <FILL IN> with appropriate code
def cosineSimilarity(string1, string2, idfsDictionary):
    """ Compute cosine similarity between two strings
    Args:
        string1 (str): first string
        string2 (str): second string
        idfsDictionary (dictionary): a dictionary of IDF values
    Returns:
        cossim: cosine similarity value
    """
w1 = tfidf(tokenize(string1), idfsDictionary)
w2 = tfidf(tokenize(string2), idfsDictionary)
return cossim(w1, w2)
cossimAdobe = cosineSimilarity('Adobe Photoshop',
'Adobe Illustrator',
idfsSmallWeights)
print cossimAdobe
# TEST Implement a cosineSimilarity function (3b)
Test.assertTrue(abs(cossimAdobe - 0.0577243382163) < 0.0000001, 'incorrect cossimAdobe')
Explanation: (3b) Implement a cosineSimilarity function
Implement a cosineSimilarity(string1, string2, idfsDictionary) function that takes two strings and a dictionary of IDF weights, and computes their cosine similarity in the context of some global IDF weights.
The steps you should perform are:
Apply your tfidf function to the tokenized first and second strings, using the dictionary of IDF weights
Compute and return your cossim function applied to the results of the two tfidf functions
End of explanation
# TODO: Replace <FILL IN> with appropriate code
crossSmall = (googleSmall
.cartesian(amazonSmall)
.cache())
def computeSimilarity(record):
    """ Compute similarity on a combination record
    Args:
        record: a pair, (google record, amazon record)
    Returns:
        pair: a pair, (google URL, amazon ID, cosine similarity value)
    """
googleRec = record[0]
amazonRec = record[1]
googleURL = googleRec[0]
amazonID = amazonRec[0]
googleValue = googleRec[1]
amazonValue = amazonRec[1]
cs = cosineSimilarity(googleValue, amazonValue, idfsSmallWeights)
return (googleURL, amazonID, cs)
similarities = (crossSmall
.map(computeSimilarity)
.cache())
def similar(amazonID, googleURL):
    """ Return similarity value
    Args:
        amazonID: amazon ID
        googleURL: google URL
    Returns:
        similar: cosine similarity value
    """
return (similarities
.filter(lambda record: (record[0] == googleURL and record[1] == amazonID))
.collect()[0][2])
similarityAmazonGoogle = similar('b000o24l3q', 'http://www.google.com/base/feeds/snippets/17242822440574356561')
print 'Requested similarity is %s.' % similarityAmazonGoogle
# TEST Perform Entity Resolution (3c)
Test.assertTrue(abs(similarityAmazonGoogle - 0.000303171940451) < 0.0000001,
'incorrect similarityAmazonGoogle')
Explanation: (3c) Perform Entity Resolution
Now we can finally do some entity resolution!
For every product record in the small Google dataset, use your cosineSimilarity function to compute its similarity to every record in the small Amazon dataset. Then, build a dictionary mapping (Google URL, Amazon ID) tuples to similarity scores between 0 and 1.
We'll do this computation two different ways, first we'll do it without a broadcast variable, and then we'll use a broadcast variable
The steps you should perform are:
Create an RDD that is a combination of the small Google and small Amazon datasets that has as elements all pairs of elements (a, b) where a is in self and b is in other. The result will be an RDD of the form: [ ((Google URL1, Google String1), (Amazon ID1, Amazon String1)), ((Google URL1, Google String1), (Amazon ID2, Amazon String2)), ((Google URL2, Google String2), (Amazon ID1, Amazon String1)), ... ]
Define a worker function that, given an element from the combination RDD, computes the cosineSimilarity for the two records in the element
Apply the worker function to every element in the RDD
Now, compute the similarity between Amazon record b000o24l3q and Google record http://www.google.com/base/feeds/snippets/17242822440574356561.
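If the all-pairs structure produced by the cartesian() transformation in the first step above is unclear, this tiny sketch (illustration only, using made-up records) shows its behaviour:
# cartesian() pairs every element of one RDD with every element of the other (illustration only)
toyGoogle = sc.parallelize([('url1', 'google string 1'), ('url2', 'google string 2')])
toyAmazon = sc.parallelize([('id1', 'amazon string 1')])
print toyGoogle.cartesian(toyAmazon).collect()
# [(('url1', 'google string 1'), ('id1', 'amazon string 1')),
#  (('url2', 'google string 2'), ('id1', 'amazon string 1'))]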
End of explanation
# TODO: Replace <FILL IN> with appropriate code
def computeSimilarityBroadcast(record):
    """ Compute similarity on a combination record, using Broadcast variable
    Args:
        record: a pair, (google record, amazon record)
    Returns:
        pair: a pair, (google URL, amazon ID, cosine similarity value)
    """
googleRec = record[0]
amazonRec = record[1]
googleURL = googleRec[0]
amazonID = amazonRec[0]
googleValue = googleRec[1]
amazonValue = amazonRec[1]
cs = cosineSimilarity(googleValue, amazonValue, idfsSmallBroadcast.value)
return (googleURL, amazonID, cs)
idfsSmallBroadcast = sc.broadcast(idfsSmallWeights)
similaritiesBroadcast = (crossSmall
                         .map(computeSimilarityBroadcast)
.cache())
def similarBroadcast(amazonID, googleURL):
    """ Return similarity value, computed using Broadcast variable
    Args:
        amazonID: amazon ID
        googleURL: google URL
    Returns:
        similar: cosine similarity value
    """
return (similaritiesBroadcast
.filter(lambda record: (record[0] == googleURL and record[1] == amazonID))
.collect()[0][2])
similarityAmazonGoogleBroadcast = similarBroadcast('b000o24l3q', 'http://www.google.com/base/feeds/snippets/17242822440574356561')
print 'Requested similarity is %s.' % similarityAmazonGoogleBroadcast
# TEST Perform Entity Resolution with Broadcast Variables (3d)
from pyspark import Broadcast
Test.assertTrue(isinstance(idfsSmallBroadcast, Broadcast), 'incorrect idfsSmallBroadcast')
Test.assertEquals(len(idfsSmallBroadcast.value), 4772, 'incorrect idfsSmallBroadcast value')
Test.assertTrue(abs(similarityAmazonGoogleBroadcast - 0.000303171940451) < 0.0000001,
'incorrect similarityAmazonGoogle')
Explanation: (3d) Perform Entity Resolution with Broadcast Variables
The solution in (3c) works well for small datasets, but it requires Spark to (automatically) send the idfsSmallWeights variable to all the workers. If we didn't cache() similarities, then it might have to be recreated if we run similar() multiple times. This would cause Spark to send idfsSmallWeights every time.
Instead, we can use a broadcast variable - we define the broadcast variable in the driver and then we can refer to it in each worker. Spark saves the broadcast variable at each worker, so it is only sent once.
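A minimal sketch of the broadcast pattern itself (illustration only, with a made-up lookup table):
# Broadcast variable sketch (illustration only)
lookup = sc.broadcast({'a': 1, 'b': 2})            # shipped to each worker once
toyRDD = sc.parallelize(['a', 'b', 'a'])
print toyRDD.map(lambda k: lookup.value[k]).sum()  # workers read lookup.value; prints 4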
The steps you should perform are:
Define a computeSimilarityBroadcast function that, given an element from the combination RDD, computes the cosine similarity for the two records in the element. This will be the same as the worker function computeSimilarity in (3c) except that it uses a broadcast variable.
Apply the worker function to every element in the RDD
Again, compute the similarity between Amazon record b000o24l3q and Google record http://www.google.com/base/feeds/snippets/17242822440574356561.
End of explanation
GOLDFILE_PATTERN = '^(.+),(.+)'
# Parse each line of a data file using the specified regular expression pattern
def parse_goldfile_line(goldfile_line):
    """ Parse a line from the 'golden standard' data file
    Args:
        goldfile_line: a line of data
    Returns:
        pair: ((key, 'gold'), 1) if parsed successfully, or (line, 0) for the header line and (line, -1) on failure
    """
match = re.search(GOLDFILE_PATTERN, goldfile_line)
if match is None:
print 'Invalid goldfile line: %s' % goldfile_line
return (goldfile_line, -1)
elif match.group(1) == '"idAmazon"':
print 'Header datafile line: %s' % goldfile_line
return (goldfile_line, 0)
else:
key = '%s %s' % (removeQuotes(match.group(1)), removeQuotes(match.group(2)))
return ((key, 'gold'), 1)
goldfile = os.path.join(baseDir, inputPath, GOLD_STANDARD_PATH)
gsRaw = (sc
.textFile(goldfile)
.map(parse_goldfile_line)
.cache())
gsFailed = (gsRaw
.filter(lambda s: s[1] == -1)
.map(lambda s: s[0]))
for line in gsFailed.take(10):
print 'Invalid goldfile line: %s' % line
goldStandard = (gsRaw
.filter(lambda s: s[1] == 1)
.map(lambda s: s[0])
.cache())
print 'Read %d lines, successfully parsed %d lines, failed to parse %d lines' % (gsRaw.count(),
goldStandard.count(),
gsFailed.count())
assert (gsFailed.count() == 0)
assert (gsRaw.count() == (goldStandard.count() + 1))
Explanation: (3e) Perform a Gold Standard evaluation
First, we'll load the "gold standard" data and use it to answer several questions. We read and parse the Gold Standard data, where the format of each line is "Amazon Product ID","Google URL". The resulting RDD has elements of the form ("AmazonID GoogleURL", 'gold')
End of explanation
# TODO: Replace <FILL IN> with appropriate code
sims = similaritiesBroadcast.map(lambda x: (x[1] + " " + x[0], x[2]))
trueDupsRDD = (sims
.join(goldStandard).map(lambda x: (x[0], x[1][0])))
trueDupsCount = trueDupsRDD.count()
avgSimDups = trueDupsRDD.map(lambda x: x[1]).sum()/float(trueDupsCount)
nonDupsRDD = (sims
              .leftOuterJoin(goldStandard).filter(lambda x: x[1][1] is None).map(lambda x: (x[0], x[1][0])))
avgSimNon = nonDupsRDD.map(lambda x: x[1]).sum()/float(nonDupsRDD.count())
print 'There are %s true duplicates.' % trueDupsCount
print 'The average similarity of true duplicates is %s.' % avgSimDups
print 'And for non duplicates, it is %s.' % avgSimNon
# TEST Perform a Gold Standard evaluation (3e)
Test.assertEquals(trueDupsCount, 146, 'incorrect trueDupsCount')
Test.assertTrue(abs(avgSimDups - 0.264332573435) < 0.0000001, 'incorrect avgSimDups')
Test.assertTrue(abs(avgSimNon - 0.00123476304656) < 0.0000001, 'incorrect avgSimNon')
Explanation: Using the "gold standard" data we can answer the following questions:
How many true duplicate pairs are there in the small datasets?
What is the average similarity score for true duplicates?
What about for non-duplicates?
The steps you should perform are:
Create a new sims RDD from the similaritiesBroadcast RDD, where each element consists of a pair of the form ("AmazonID GoogleURL", cosineSimilarityScore). An example entry from sims is: ('b000bi7uqs http://www.google.com/base/feeds/snippets/18403148885652932189', 0.40202896125621296)
Combine the sims RDD with the goldStandard RDD by creating a new trueDupsRDD RDD that has just the cosine similarity scores for those "AmazonID GoogleURL" pairs that appear in both the sims RDD and goldStandard RDD. Hint: you can do this using the join() transformation (see the small join() sketch after this list).
Count the number of true duplicate pairs in the trueDupsRDD dataset
Compute the average similarity score for true duplicates in the trueDupsRDD datasets. Remember to use float for calculation
Create a new nonDupsRDD RDD that has just the cosine similarity scores for those "AmazonID GoogleURL" pairs from the similaritiesBroadcast RDD that do not appear in the gold standard RDD.
Compute the average similarity score for non-duplicates in the last datasets. Remember to use float for calculation
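The sketch below (illustration only, with made-up keys) shows how join() and leftOuterJoin() behave on pair RDDs, which is the behaviour the trueDupsRDD and nonDupsRDD steps rely on:
# join() keeps only keys present in both RDDs; leftOuterJoin() keeps all left keys, padding with None
left = sc.parallelize([('k1', 0.9), ('k2', 0.1)])
right = sc.parallelize([('k1', 'gold')])
print left.join(right).collect()            # [('k1', (0.9, 'gold'))]
print left.leftOuterJoin(right).collect()   # [('k1', (0.9, 'gold')), ('k2', (0.1, None))] (order may vary)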
End of explanation
# TODO: Replace <FILL IN> with appropriate code
amazonFullRecToToken = amazon.map(lambda x: (x[0], tokenize(x[1])))
googleFullRecToToken = google.map(lambda x: (x[0], tokenize(x[1])))
print 'Amazon full dataset is %s products, Google full dataset is %s products' % (amazonFullRecToToken.count(),
googleFullRecToToken.count())
# TEST Tokenize the full dataset (4a)
Test.assertEquals(amazonFullRecToToken.count(), 1363, 'incorrect amazonFullRecToToken.count()')
Test.assertEquals(googleFullRecToToken.count(), 3226, 'incorrect googleFullRecToToken.count()')
Explanation: Part 4: Scalable ER
In the previous parts, we built a text similarity function and used it for small scale entity resolution. Our implementation is limited by its quadratic run time complexity, and is not practical for even modestly sized datasets. In this part, we will implement a more scalable algorithm and use it to do entity resolution on the full dataset.
Inverted Indices
To improve our ER algorithm from the earlier parts, we should begin by analyzing its running time. In particular, the algorithm above is quadratic in two ways. First, we did a lot of redundant computation of tokens and weights, since each record was reprocessed every time it was compared. Second, we made quadratically many token comparisons between records.
The first source of quadratic overhead can be eliminated with precomputation and look-up tables, but the second source is a little more tricky. In the worst case, every token in every record in one dataset exists in every record in the other dataset, and therefore every token makes a non-zero contribution to the cosine similarity. In this case, token comparison is unavoidably quadratic.
But in reality most records have nothing (or very little) in common. Moreover, it is typical for a record in one dataset to have at most one duplicate record in the other dataset (this is the case assuming each dataset has been de-duplicated against itself). In this case, the output is linear in the size of the input and we can hope to achieve linear running time.
An inverted index is a data structure that will allow us to avoid making quadratically many token comparisons. It maps each token in the dataset to the list of documents that contain the token. So, instead of comparing, record by record, each token to every other token to see if they match, we will use inverted indices to look up records that match on a particular token.
Note on terminology: In text search, a forward index maps documents in a dataset to the tokens they contain. An inverted index supports the inverse mapping.
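A tiny pure-Python sketch (illustration only) of the two index directions:
# Forward index: document -> tokens; inverted index: token -> documents (illustration only)
forward = {'doc1': ['fox', 'quick'], 'doc2': ['fox', 'lazy']}
inverted = {}
for doc_id, tokens in forward.items():
    for token in tokens:
        inverted.setdefault(token, []).append(doc_id)
print inverted   # {'fox': ['doc1', 'doc2'], 'quick': ['doc1'], 'lazy': ['doc2']}
With the inverted index, a record pair is only ever compared on tokens they actually share, which is what makes the running time closer to linear in practice.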
Note: For this section, use the complete Google and Amazon datasets, not the samples
(4a) Tokenize the full dataset
Tokenize each of the two full datasets for Google and Amazon.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
fullCorpusRDD = amazonFullRecToToken.union(googleFullRecToToken)
idfsFull = idfs(fullCorpusRDD)
idfsFullCount = idfsFull.count()
print 'There are %s unique tokens in the full datasets.' % idfsFullCount
# Recompute IDFs for full dataset
idfsFullWeights = idfsFull.collectAsMap()
idfsFullBroadcast = sc.broadcast(idfsFullWeights)
# Pre-compute TF-IDF weights. Build mappings from record ID weight vector.
amazonWeightsRDD = amazonFullRecToToken.map(lambda x: (x[0], tfidf(x[1],idfsFullBroadcast.value)))
googleWeightsRDD = googleFullRecToToken.map(lambda x: (x[0], tfidf(x[1],idfsFullBroadcast.value)))
print 'There are %s Amazon weights and %s Google weights.' % (amazonWeightsRDD.count(),
googleWeightsRDD.count())
# TEST Compute IDFs and TF-IDFs for the full datasets (4b)
Test.assertEquals(idfsFullCount, 17078, 'incorrect idfsFullCount')
Test.assertEquals(amazonWeightsRDD.count(), 1363, 'incorrect amazonWeightsRDD.count()')
Test.assertEquals(googleWeightsRDD.count(), 3226, 'incorrect googleWeightsRDD.count()')
Explanation: (4b) Compute IDFs and TF-IDFs for the full datasets
We will reuse your code from above to compute IDF weights for the complete combined datasets.
The steps you should perform are:
Create a new fullCorpusRDD that contains the tokens from the full Amazon and Google datasets.
Apply your idfs function to the fullCorpusRDD
Create a broadcast variable containing a dictionary of the IDF weights for the full dataset.
For each of the Amazon and Google full datasets, create weight RDDs that map IDs/URLs to TF-IDF weighted token vectors.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
amazonNorms = amazonWeightsRDD.map(lambda x: (x[0], norm(x[1])))
amazonNormsBroadcast = sc.broadcast(amazonNorms.collectAsMap())
googleNorms = googleWeightsRDD.map(lambda x: (x[0], norm(x[1])))
googleNormsBroadcast = sc.broadcast(googleNorms.collectAsMap())
# TEST Compute Norms for the weights from the full datasets (4c)
Test.assertTrue(isinstance(amazonNormsBroadcast, Broadcast), 'incorrect amazonNormsBroadcast')
Test.assertEquals(len(amazonNormsBroadcast.value), 1363, 'incorrect amazonNormsBroadcast.value')
Test.assertTrue(isinstance(googleNormsBroadcast, Broadcast), 'incorrect googleNormsBroadcast')
Test.assertEquals(len(googleNormsBroadcast.value), 3226, 'incorrect googleNormsBroadcast.value')
Explanation: (4c) Compute Norms for the weights from the full datasets
We will reuse your code from above to compute norms of the IDF weights for the complete combined dataset.
The steps you should perform are:
Create two collections, one for each of the full Amazon and Google datasets, where IDs/URLs map to the norm of the associated TF-IDF weighted token vectors.
Convert each collection into a broadcast variable, containing a dictionary of the norm of IDF weights for the full dataset
End of explanation
# TODO: Replace <FILL IN> with appropriate code
def invert(record):
    """ Invert (ID, tokens) to a list of (token, ID)
    Args:
        record: a pair, (ID, token vector)
    Returns:
        pairs: a list of pairs of token to ID
    """
pairs = []
for token in record[1]:
pairs.append((token, record[0]))
return pairs
amazonInvPairsRDD = (amazonWeightsRDD
.flatMap(invert)
.cache())
googleInvPairsRDD = (googleWeightsRDD
.flatMap(invert)
.cache())
print 'There are %s Amazon inverted pairs and %s Google inverted pairs.' % (amazonInvPairsRDD.count(),
googleInvPairsRDD.count())
# TEST Create inverted indices from the full datasets (4d)
invertedPair = invert((1, {'foo': 2}))
Test.assertEquals(invertedPair[0][1], 1, 'incorrect invert result')
Test.assertEquals(amazonInvPairsRDD.count(), 111387, 'incorrect amazonInvPairsRDD.count()')
Test.assertEquals(googleInvPairsRDD.count(), 77678, 'incorrect googleInvPairsRDD.count()')
Explanation: (4d) Create inverted indices from the full datasets
Build inverted indices of both data sources.
The steps you should perform are:
Create an invert function that given a pair of (ID/URL, TF-IDF weighted token vector), returns a list of pairs of (token, ID/URL). Recall that the TF-IDF weighted token vector is a Python dictionary with keys that are tokens and values that are weights.
Use your invert function to convert the full Amazon and Google TF-IDF weighted token vector datasets into two RDDs where each element is a pair of a token and an ID/URL that contains that token. These are inverted indices.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
def swap(record):
    """ Swap (token, (ID, URL)) to ((ID, URL), token)
    Args:
        record: a pair, (token, (ID, URL))
    Returns:
        pair: ((ID, URL), token)
    """
token = record[0]
keys = record[1]
return (keys, token)
commonTokens = (amazonInvPairsRDD
.join(googleInvPairsRDD).map(swap).groupByKey()
.cache())
print 'Found %d common tokens' % commonTokens.count()
# TEST Identify common tokens from the full dataset (4e)
Test.assertEquals(commonTokens.count(), 2441100, 'incorrect commonTokens.count()')
Explanation: (4e) Identify common tokens from the full dataset
We are now in position to efficiently perform ER on the full datasets. Implement the following algorithm to build an RDD that maps a pair of (ID, URL) to a list of tokens they share in common:
Using the two inverted indices (RDDs where each element is a pair of a token and an ID or URL that contains that token), create a new RDD that contains only tokens that appear in both datasets. This will yield an RDD of pairs of (token, iterable(ID, URL)).
We need a mapping from (ID, URL) to token, so create a function that will swap the elements of the RDD you just created to create this new RDD consisting of ((ID, URL), token) pairs.
Finally, create an RDD consisting of pairs mapping (ID, URL) to all the tokens the pair shares in common
End of explanation
# TODO: Replace <FILL IN> with appropriate code
amazonWeightsBroadcast = sc.broadcast(amazonWeightsRDD.collectAsMap())
googleWeightsBroadcast = sc.broadcast(googleWeightsRDD.collectAsMap())
def fastCosineSimilarity(record):
    """ Compute Cosine Similarity using Broadcast variables
    Args:
        record: ((ID, URL), tokens)
    Returns:
        pair: ((ID, URL), cosine similarity value)
    """
amazonRec = record[0][0]
googleRec = record[0][1]
tokens = record[1]
s = sum(amazonWeightsBroadcast.value[amazonRec][i] * googleWeightsBroadcast.value[googleRec][i] for i in tokens)
value = s/(amazonNormsBroadcast.value[amazonRec] * googleNormsBroadcast.value[googleRec])
key = (amazonRec, googleRec)
return (key, value)
similaritiesFullRDD = (commonTokens
.map(fastCosineSimilarity)
.cache())
print similaritiesFullRDD.count()
# TEST Identify common tokens from the full dataset (4f)
similarityTest = similaritiesFullRDD.filter(lambda ((aID, gURL), cs): aID == 'b00005lzly' and gURL == 'http://www.google.com/base/feeds/snippets/13823221823254120257').collect()
Test.assertEquals(len(similarityTest), 1, 'incorrect len(similarityTest)')
Test.assertTrue(abs(similarityTest[0][1] - 4.286548414e-06) < 0.000000000001, 'incorrect similarityTest fastCosineSimilarity')
Test.assertEquals(similaritiesFullRDD.count(), 2441100, 'incorrect similaritiesFullRDD.count()')
Explanation: (4f) Identify common tokens from the full dataset
Use the data structures from parts (4a) and (4e) to build a dictionary to map record pairs to cosine similarity scores.
The steps you should perform are:
Create two broadcast dictionaries from the amazonWeights and googleWeights RDDs
Create a fastCosineSimilarity function that takes in a record consisting of the pair ((Amazon ID, Google URL), tokens list) and computes the sum, over the tokens in the token list, of the Amazon weight for the token times the Google weight for the token. The sum should then be divided by the norm for the Google URL and then divided by the norm for the Amazon ID. The function should return this value in a pair with the key being the (Amazon ID, Google URL). Make sure you use the broadcast variables you created for both the weights and norms
Apply your fastCosinesSimilarity function to the common tokens from the full dataset
End of explanation
# Create an RDD of ((Amazon ID, Google URL), similarity score)
simsFullRDD = similaritiesFullRDD.map(lambda x: ("%s %s" % (x[0][0], x[0][1]), x[1]))
assert (simsFullRDD.count() == 2441100)
# Create an RDD of just the similarity scores
simsFullValuesRDD = (simsFullRDD
.map(lambda x: x[1])
.cache())
assert (simsFullValuesRDD.count() == 2441100)
# Look up all similarity scores for true duplicates
# This helper function will return the similarity score for records that are in the gold standard and the simsFullRDD (True positives), and will return 0 for records that are in the gold standard but not in simsFullRDD (False Negatives).
def gs_value(record):
if (record[1][1] is None):
return 0
else:
return record[1][1]
# Join the gold standard and simsFullRDD, and then extract the similarity scores using the helper function
trueDupSimsRDD = (goldStandard
.leftOuterJoin(simsFullRDD)
.map(gs_value)
.cache())
print 'There are %s true duplicates.' % trueDupSimsRDD.count()
assert(trueDupSimsRDD.count() == 1300)
Explanation: Part 5: Analysis
Now we have an authoritative list of record-pair similarities, but we need a way to use those similarities to decide if two records are duplicates or not. The simplest approach is to pick a threshold. Pairs whose similarity is above the threshold are declared duplicates, and pairs below the threshold are declared distinct.
To decide where to set the threshold we need to understand what kind of errors result at different levels. If we set the threshold too low, we get more false positives, that is, record-pairs we say are duplicates that in reality are not. If we set the threshold too high, we get more false negatives, that is, record-pairs that really are duplicates but that we miss.
ER algorithms are evaluated by the common metrics of information retrieval and search called precision and recall. Precision asks of all the record-pairs marked duplicates, what fraction are true duplicates? Recall asks of all the true duplicates in the data, what fraction did we successfully find? As with false positives and false negatives, there is a trade-off between precision and recall. A third metric, called F-measure, takes the harmonic mean of precision and recall to measure overall goodness in a single value:
$$ Fmeasure = 2 \frac{precision * recall}{precision + recall} $$
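For example, with made-up counts of 80 true positives, 20 false positives and 40 false negatives at some threshold (illustration only):
# Worked precision/recall/F-measure example with made-up counts (illustration only)
tp, fp, fn = 80.0, 20.0, 40.0
toy_precision = tp / (tp + fp)     # 0.8
toy_recall = tp / (tp + fn)        # 0.666...
toy_fmeasure = 2 * toy_precision * toy_recall / (toy_precision + toy_recall)
print toy_precision, toy_recall, toy_fmeasure   # 0.8 0.666... 0.727...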
Note: In this part, we use the "gold standard" mapping from the included file to look up true duplicates, and the results of Part 4.
Note: In this part, you will not be writing any code. We've written all of the code for you. Run each cell and then answer the quiz questions on Studio.
(5a) Counting True Positives, False Positives, and False Negatives
We need functions that count True Positives (true duplicates above the threshold), and False Positives and False Negatives:
We start by creating the simsFullRDD from our similaritiesFullRDD that consists of a pair of ((Amazon ID, Google URL), similarity score)
From this RDD, we create an RDD consisting of only the similarity scores
To look up the similarity scores for true duplicates, we perform a left outer join using the goldStandard RDD and simsFullRDD and extract the similarity scores with a helper function that returns 0 for gold standard pairs missing from simsFullRDD (false negatives).
End of explanation
from pyspark.accumulators import AccumulatorParam
class VectorAccumulatorParam(AccumulatorParam):
# Initialize the VectorAccumulator to 0
def zero(self, value):
return [0] * len(value)
# Add two VectorAccumulator variables
def addInPlace(self, val1, val2):
for i in xrange(len(val1)):
val1[i] += val2[i]
return val1
# Return a list with entry x set to value and all other entries set to 0
def set_bit(x, value, length):
bits = []
for y in xrange(length):
if (x == y):
bits.append(value)
else:
bits.append(0)
return bits
# Pre-bin counts of false positives for different threshold ranges
BINS = 101
nthresholds = 100
def bin(similarity):
return int(similarity * nthresholds)
# fpCounts[i] = number of entries (possible false positives) where bin(similarity) == i
zeros = [0] * BINS
fpCounts = sc.accumulator(zeros, VectorAccumulatorParam())
def add_element(score):
global fpCounts
b = bin(score)
fpCounts += set_bit(b, 1, BINS)
simsFullValuesRDD.foreach(add_element)
# Remove true positives from FP counts
def sub_element(score):
global fpCounts
b = bin(score)
fpCounts += set_bit(b, -1, BINS)
trueDupSimsRDD.foreach(sub_element)
def falsepos(threshold):
fpList = fpCounts.value
return sum([fpList[b] for b in range(0, BINS) if float(b) / nthresholds >= threshold])
def falseneg(threshold):
return trueDupSimsRDD.filter(lambda x: x < threshold).count()
def truepos(threshold):
return trueDupSimsRDD.count() - falsenegDict[threshold]
Explanation: The next step is to pick a threshold between 0 and 1 for the count of True Positives (true duplicates above the threshold). However, we would like to explore many different thresholds. To do this, we divide the space of thresholds into 100 bins, and take the following actions:
We use Spark Accumulators to implement our counting function. We define a custom accumulator type, VectorAccumulatorParam, along with functions to initialize the accumulator's vector to zero, and to add two vectors. Note that we have to use the += operator because you can only add to an accumulator.
We create a helper function to create a list with one entry (bit) set to a value and all others set to 0.
We create 101 bins for the 100 threshold values between 0 and 1.
Now, for each similarity score, we can compute the false positives. We do this by adding each similarity score to the appropriate bin of the vector. Then we remove true positives from the vector by using the gold standard data.
We define functions for computing false positive and negative and true positives, for a given threshold.
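If the vector-valued accumulator above looks heavy, the same pattern with a plain scalar accumulator (illustration only) may make the idea clearer:
# Scalar accumulator sketch (illustration only): count scores below 0.5
low_count = sc.accumulator(0)
def add_low(score):
    global low_count
    if score < 0.5:
        low_count += 1
sc.parallelize([0.1, 0.7, 0.2]).foreach(add_low)
print low_count.value   # 2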
End of explanation
# Precision = true-positives / (true-positives + false-positives)
# Recall = true-positives / (true-positives + false-negatives)
# F-measure = 2 x Recall x Precision / (Recall + Precision)
def precision(threshold):
tp = trueposDict[threshold]
return float(tp) / (tp + falseposDict[threshold])
def recall(threshold):
tp = trueposDict[threshold]
return float(tp) / (tp + falsenegDict[threshold])
def fmeasure(threshold):
r = recall(threshold)
p = precision(threshold)
return 2 * r * p / (r + p)
Explanation: (5b) Precision, Recall, and F-measures
We define functions so that we can compute the Precision, Recall, and F-measure as a function of threshold value:
Precision = true-positives / (true-positives + false-positives)
Recall = true-positives / (true-positives + false-negatives)
F-measure = 2 x Recall x Precision / (Recall + Precision)
End of explanation
thresholds = [float(n) / nthresholds for n in range(0, nthresholds)]
falseposDict = dict([(t, falsepos(t)) for t in thresholds])
falsenegDict = dict([(t, falseneg(t)) for t in thresholds])
trueposDict = dict([(t, truepos(t)) for t in thresholds])
precisions = [precision(t) for t in thresholds]
recalls = [recall(t) for t in thresholds]
fmeasures = [fmeasure(t) for t in thresholds]
print precisions[0], fmeasures[0]
assert (abs(precisions[0] - 0.000532546802671) < 0.0000001)
assert (abs(fmeasures[0] - 0.00106452669505) < 0.0000001)
fig = plt.figure()
plt.plot(thresholds, precisions)
plt.plot(thresholds, recalls)
plt.plot(thresholds, fmeasures)
plt.legend(['Precision', 'Recall', 'F-measure'])
pass
Explanation: (5c) Line Plots
We can make line plots of precision, recall, and F-measure as a function of threshold value, for thresholds between 0.0 and 1.0. You can change nthresholds (above in part (5a)) to change the threshold values to plot.
End of explanation |
5,537 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Parallelization with QInfer
Setup
We begin by enabling Python 3–style division, as is recommended for use in Python 2.7.
Step1: Next, we import the IPython parallelization library ipyparallel, as well as QInfer itself and some useful things from the Python standard library.
Step2: We'll make some nice graphical examples later, so we also import Matplotlib and point it to the ggplot stylesheet if available. We'll also tell Matplotlib to plot inline in our notebook.
Step3: Next, we set up the parallelization client. To start a cluster running, it's convenient to use the Jupyter Notebook extension. If you're running ipyparallel 5.0 or later, you can install the Notebook extension by running
Step4: Once we get a client up and running, the next step is to make a direct view of its engines. Since we want to use all of the engines provided by the client, we index it by a slice [
Step5: Next, we'll create a simple test model to try out parallelization support.
Step6: Having made this model, we can then wrap it in DirectViewParallelizedModel to parallelize its likelihood function.
Step7: Let's try it out by making a prior and specifying an experiment design heuristic.
Step8: The rest is handled by perf_test_multiple, which will run our example many times and report on its performance. In this case, parallelizing an individual model is a bit silly, since the simple precession likelihood is fast enough that communication costs blow away the savings from parallelization. That said, it makes an illustrative example. In practice, running a parallel model makes the most sense if the model likelihood is slow to evaluate, and if you're interested in a small number of runs.
Step9: Let's plot the performance data to make sure that our test actually works.
Step10: Note that, in the above plot, we see both the exponential scaling we expect as well as the errors caused by numerical precision once we get very close to the true model.
As an alternative to parallelizing over model parameters, we can parallelize over trials instead. That is, we send each trial to a different engine as they are able to handle new tasks. To do so, we'll use the LoadBalancedView offered by ipyparallel. Parallelizing at the trial level instead is a much faster approach in this case, as it requires much less communication between the engines and the main process. | Python Code:
from __future__ import division
Explanation: Parallelization with QInfer
Setup
We begin by enabling Python 3–style division, as is recommended for use in Python 2.7.
End of explanation
import ipyparallel as ipp
import qinfer as qi
from functools import partial
Explanation: Next, we import the IPython parallelization library ipyparallel, as well as QInfer itself and some useful things from the Python standard library.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
try:
plt.style.use('ggplot')
except:
pass
Explanation: We'll make some nice graphical examples later, so we also import Matplotlib and point it to the ggplot stylesheet if available. We'll also tell Matplotlib to plot inline in our notebook.
End of explanation
client = ipp.Client()
print(client)
Explanation: Next, we set up the parallelization client. To start a cluster running, it's convenient to use the Jupyter Notebook extension. If you're running ipyparallel 5.0 or later, you can install the Notebook extension by running:
$ ipcluster nbextension enable
Full instructions for installing the Jupyter Notebook extension are available on the ipyparallel site. In any case, once a cluster has been started, we can make a client that connects to it by using the Client() class.
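If you prefer working from a terminal rather than the Notebook extension, a small local cluster can also be started with the ipcluster command line tool before creating the client (shown here with four engines purely as an example):
$ ipcluster start -n 4
Once the engines are up, client.ids should list their ids, for example:
print(client.ids)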
End of explanation
dview = client[:]
print(dview)
Explanation: Once we get a client up and running, the next step is to make a direct view of its engines. Since we want to use all of the engines provided by the client, we index it by a slice [:].
End of explanation
serial_model = qi.BinomialModel(qi.SimplePrecessionModel())
serial_model
Explanation: Next, we'll create a simple test model to try out parallelization support.
End of explanation
parallel_model = qi.DirectViewParallelizedModel(serial_model, dview)
parallel_model
Explanation: Having made this model, we can then wrap it in DirectViewParallelizedModel to parallelize its likelihood function.
End of explanation
prior = qi.UniformDistribution([0, 1])
heuristic_class = partial(qi.ExpSparseHeuristic, t_field='x', other_fields={'n_meas': 20})
Explanation: Let's try it out by making a prior and specifying an experiment design heuristic.
End of explanation
with qi.timing() as t:
performance = qi.perf_test_multiple(
100, parallel_model, 6000, prior, 200,
heuristic_class, progressbar=qi.IPythonProgressBar
)
print("Time elapsed: {:0.2f} s".format(t.delta_t))
Explanation: The rest is handled by perf_test_multiple, which will run our example many times and report on its performance. In this case, parallelizing an individual model is a bit silly, since the simple precession likelihood is fast enough that communication costs blow away the savings from parallelization. That said, it makes an illustrative example. In practice, running a parallel model makes the most sense if the model likelihood is slow to evaluate, and if you're interested in a small number of runs.
End of explanation
plt.semilogy(performance['loss'].mean(axis=0))
plt.xlabel('# of Experiments')
plt.ylabel('Bayes Risk')
Explanation: Let's plot the performance data to make sure that our test actually works.
End of explanation
lbview = client.load_balanced_view()
with qi.timing() as t:
performance = qi.perf_test_multiple(
100, serial_model, 6000, prior, 200, heuristic_class,
progressbar=qi.IPythonProgressBar, apply=lbview.apply
)
print("Time elapsed: {:0.2f} s".format(t.delta_t))
plt.semilogy(performance['loss'].mean(axis=0))
plt.xlabel('# of Experiments')
plt.ylabel('Bayes Risk')
Explanation: Note that, in the above plot, we see both the exponential scaling we expect as well as the errors caused by numerical precision once we get very close to the true model.
As an alternative to parallelizing over model parameters, we can parallelize over trials instead. That is, we send each trial to a different engine as they are able to handle new tasks. To do so, we'll use the LoadBalancedView offered by ipyparallel. Parallelizing at the trial level instead is a much faster approach in this case, as it requires much less communication between the engines and the main process.
End of explanation |
5,538 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basic Usage
This section gives a quick overview of some features and conventions that are common to all the main analysis tools. While the main analysis tools will be briefly referenced here, later sections will cover them in full.
Syntax
As PySCeSToolbox was designed to work on top of PySCeS, many of its conventions are employed in this project. The syntax (or naming scheme) for referring to model variables and parameters is the most obvious legacy. Syntax is briefly described in the table below and relates to the provided example model (for input file syntax refer to the PySCeS model descriptor language documentation)
Step1: Results that can be accessed via scan_results
Step2: e.g. The first 10 data points for the scan results
Step3: Results can be saved using the default path as discussed in #Saving and default directories# with the save_results method
Step4: Or they can be saved to a specified location
Step5: Finally, a ScanFig object can be created using the plot method
Step6: ScanFig
The ScanFig class provides the actual plotting object. This tool allows users to display figures with results directly in the Notebook and to control which data is displayed on the figure by use of an interactive widget based interface. As mentioned and shown above they are created by the plot method of a Data2D object, which means that a user never has the need to instantiate ScanFig directly.
Features
Interactive plotting via the interact method.
Script based plot generation where certain lines, or categories of lines (based on the type of information they represent), can be enabled and disabled via toggle_line or toggle_category methods.
Saving of plots with the save method.
Customisation of figures using standard matplotlib functionality.
Usage Example
Below is a usage example of ScanFig using the scan_figure instance created in the previous section. Here results from the parameter scan of Vf2 as generated by Scan1 are shown.
Step7: The Figure shown above is empty - to show lines we need to click on the buttons. First we will click on the Flux Rates button which will allow any of the lines that fall into the category Flux Rates to be enabled. Then we click the other buttons
Step8: .. note
Step9: In the example below we set the Flux Rates visibility to False, but we set the J_R1 line visibility to True. Finally we use the show method instead of interact to display the figure.
Step10: The figure axes can also be adjusted via the adjust_figure method. Recall that the Vf2 scan was performed for a logarithmic scale rather than a linear scale. We will therefore set the x axis to log and its minimum value to 1. These settings are applied by clicking the Apply button.
Step11: The underlying matplotlib objects can be accessed through the fig and ax fields for the figure and axes, respectively. This allows for manipulation of the figures using matplotlib's functionality.
Step12: Finally the plot can be saved using the save method (or equivalently by pressing the save button) without specifying a path where the file will be saved as an svg vector image to the default directory as discussed under #Saving and default directories#
Step13: A file name together with desired extension (and image format) can also be specified
Step14: Tables
In PySCeSToolbox, results are frequently stored in a dictionary-like structure belonging to an analysis object. In most cases the dictionary will be named with _results appended to the type of results (e.g. Control coefficient results in SymCa are saved as cc_results while the parametrised internal metabolite scan results of RateChar are saved as scan_results).
In most cases the results stored are structured so that a single dictionary key is mapped to a single result (or result object). In these cases simply inspecting the variable in the IPython/Jupyter Notebook displays these results in an html style table where the variable name is displayed together with its value, e.g. for cc_results each control coefficient will be displayed next to its value at steady-state.
Finally, any 2D data-structure commonly used together with PySCeS and PySCeSToolbox can be displayed as an html table (e.g. list of lists, NumPy arrays, SymPy matrices).
Usage Example
Below we will construct a list of lists and display it as an html table. Captions can be either plain text or contain html tags.
Step15: By default floats are all formatted according to the argument float_fmt which defaults to %.2f (using the standard Python formatter string syntax). A formatter function can be passed to as the formatter argument which allows for more customisation.
Below we instantiate such a formatter using the formatter_factory function. Here all float values falling within the range set up by min_val and max_val (which includes the minimum, but excludes the maximum) will be formatted according to default_fmt, while outliers will be formatted according to outlier_fmt.
Step16: The constructed formatter takes a number (e.g. float, int, etc.) as argument and returns a formatter string according to the previously setup parameters.
Step17: Using this formatter with the previously constructed list_of_lists lead to a differently formatted html representation of the data
Step18: Graphic Representation of Metabolic Networks
PySCeSToolbox includes functionality for displaying interactive graph representations of metabolic networks through the ModelGraph tool. The main purpose of this feature is to allow for the visualisation of control patterns in SymCa. Currently, this tool is fairly limited in terms of its capabilities and therefore does not represent a replacement for more fully featured tools such as CellDesigner. One such limitation is that no automatic layout capabilities are included, and nodes representing species and reactions have to be laid out by hand. Nonetheless it is useful for quickly visualising the structure of a pathway and, as previously mentioned, for visualising the importance of various control patterns in SymCa.
Features
Displays interactive (d3.js based) reaction networks in the notebook.
Layouts can be saved and applied to other similar networks.
Usage Example
The main use case is for visualising control patterns. However, before ModelGraph can be used in this capacity, the graph layout has to be defined. Below we will set up the layout for the example_model.
First we load the model and instantiate a ModelGraph object using the model. The show method displays the graph.
Step19: Unless a layout has been previously defined, the species and reaction nodes will be placed randomly. Nodes snap to an invisible grid.
Step20: A layout file for the example_model is included (see link for details) and can be loaded by specifying the location of the layout file on the disk during ModelGraph instantiation.
Step21: Clicking the Save Layout button saves this layout to the ~/Pysces/example_model/model_graph or C | Python Code:
# PySCeS model instantiation using the `example_model.py` file
# with name `mod`
mod = pysces.model('example_model')
mod.SetQuiet()
# Parameter scan setup and execution
# Here we are changing the value of `Vf2` over a logarithmic
# scale from `log10(1)` (or 0) to `log10(100)` (or 2) for
# 100 points.
mod.scan_in = 'Vf2'
mod.scan_out = ['J_R1','J_R2','J_R3']
mod.Scan1(numpy.logspace(0,2,100))
# Instantiation of `Data2D` object with name `scan_data`
column_names = [mod.scan_in] + mod.scan_out
scan_data = psctb.utils.plotting.Data2D(mod=mod,
column_names=column_names,
data_array=mod.scan_res)
Explanation: Basic Usage
This section gives a quick overview of some features and conventions that are common to all the main analysis tools. While the main analysis tools will be briefly referenced here, later sections will cover them in full.
Syntax
As PySCeSToolbox was designed to work on top of PySCeS, many of its conventions are employed in this project. The syntax (or naming scheme) for referring to model variables and parameters is the most obvious legacy. Syntax is briefly described in the table below and relates to the provided example model (for input file syntax refer to the PySCeS model descriptor language documentation):
|Description | Syntax description | PySCeS example | Rendered LaTeX example |
|------------------------------------------|------------------------------------------------|--------------------|----------------------------------------------------|
| Parameters | As defined in model file | Keq2 | $Keq2$ |
| Species | As defined in model file | S1 | $S1$ |
| Reactions | As defined in model file | R1 | $R1$ |
| Steady state species | “_ss” appended to model definition | S1_ss | $S1_{ss}$ |
| Steady state reaction rates (Flux) | “J_” prepended to model definition | J_R1 | $J_{R1}$ |
| Control coefficients | In the format “ccJreaction_reaction” | ccJR1_R2 | $C^{JR1}_{R2}$ |
| Elasticity coefficients | In the format “ecreaction_modifier” | ecR1_S1 or ecR2_Vf2 | $\varepsilon^{R1}_{S1}$ or $\varepsilon^{R2}_{Vf2}$ |
| Response coefficients | In the format “rcJreaction_parameter” | rcJR3_Vf3 | $R^{JR3}_{Vf3}$ |
| Partial response coefficients | In the format “prcJreaction_parameter_reaction” | prcJR3_X2_R2 | $^{R2}R^{JR3}_{X2}$ |
| Control patterns | CPn where n is a number assigned to a specific control pattern | CP4 | $CP4$ |
| Flux contribution by specific term | In the format "J_reaction_term" | J_R1_binding | $J_{R1_{binding}}$ |
| Elasticity contribution by specific term | In the format "pecreaction_modifier_term" | pecR1_S1_binding | $\varepsilon^{R1_{binding}}_{S1}$ |
.. note:: Any underscores (_) in model defined variables or parameters will be removed when rendering to LaTeX to ensure consistency.
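As a small sketch of how these names are used in practice (assuming the example model has been loaded as mod and that a steady-state/MCA evaluation has been performed; the exact attributes available depend on the analysis that has been run):
# Naming scheme in practice (sketch; assumes `mod` is a loaded PySCeS model)
mod.doMca()               # steady state plus control and elasticity coefficients
print(mod.S1_ss)          # steady-state concentration of S1
print(mod.J_R1)           # steady-state flux through R1
print(mod.ccJR1_R2)       # control coefficient of R2 on the flux J_R1
print(mod.ecR1_S1)        # elasticity of R1 towards S1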
Saving and Default Directories
Whenever any analysis tool is used for the first time on a specific model, a directory is created within the PySCeS output directory that corresponds to the model name. A second directory which corresponds to the analysis tool name will be created within the first. These directories serve a dual purpose:
The first, and most pertinent to the user, is for providing a default location for saving results. PySCeSToolbox allows users to save results to any arbitrary location on the file system, however when no location is provided, results will be saved to the default directory corresponding to the model name and analysis method as described above. We consider this a fairly intuitive and convenient system that is especially useful for outputting small sets of results. Result saving functionality is usually provided by a save_results method for each respective analysis tool. Exceptions are RateChar where multiple types of results may be saved, each with their own method, and ScanFig where figures are saved simply with a save method.
The second purpose is to provide a location for writing temporary files and internal data that is used to save “analysis sessions” for later loading. In this case specifying the output destination is not supported in most cases and these features depend on the default directory. Session saving functionality is provided only for tools that take significant amounts of time to generate results and will always be provided by a save_session method and a corresponding load_session method will read these results from disk.
.. note:: Depending on your OS the default PySCeS directory will be either ~/Pysces or C:\Pysces. PySCeSToolbox will therefore create the following type of folder structure: ~/Pysces/model_name/analysis_method/ or C:\Pysces\model_name\analysis_method\
Plotting and Displaying Results
As mentioned previously, PySCeSToolbox includes functionality for plotting results generated by its tools. Typically these plots will either contain results from a parameter scan, where some metabolic variables are plotted against a change in parameter, or results from a time simulation, where the evolution of metabolic variables over a certain time period is plotted.
Data2D
The Data2D class provides functionality for capturing raw parameter scan/simulation results and provides an interface to the actual plotting tool ScanFig. It is used internally by other tools in PySCeSToolbox and a Data2D object will be created and returned automatically after performing a parameter scan with any of the do_par_scan methods provided by these tools.
Features
Access to scan/simulation results through its scan_results dictionary.
The ability to save results in the form of a csv file using the save_results method.
The ability to generate a ScanFig object via the plot method.
Usage example
Below is a usage example of Data2D, where results from a PySCeS parameter scan are saved to a Data2D object.
End of explanation
# Each key represents a field through which results can be accessed
scan_data.scan_results.keys()
Explanation: Results that can be accessed via scan_results:
End of explanation
scan_data.scan_results.scan_results[:10,:]
Explanation: e.g. The first 10 data points for the scan results:
End of explanation
scan_data.save_results()
Explanation: Results can be saved using the default path as discussed in #Saving and default directories# with the save_results method:
End of explanation
# This path leads to the Pysces root folder
data_file_name = '~/Pysces/example_mod_Vf2_scan.csv'
# Correct path depending on platform - necessary for platform independent scripts
if platform == 'win32':
data_file_name = psctb.utils.misc.unix_to_windows_path(data_file_name)
else:
data_file_name = path.expanduser(data_file_name)
scan_data.save_results(file_name=data_file_name)
Explanation: Or they can be saved to a specified location:
End of explanation
# Instantiation of `ScanFig` object with name `scan_figure`
scan_figure = scan_data.plot()
Explanation: Finally, a ScanFig object can be created using the plot method:
End of explanation
scan_figure.interact()
Explanation: ScanFig
The ScanFig class provides the actual plotting object. This tool allows users to display figures with results directly in the Notebook and to control which data is displayed on the figure by use of an interactive widget based interface. As mentioned and shown above they are created by the plot method of a Data2D object, which means that a user never has the need to instantiate ScanFig directly.
Features
Interactive plotting via the interact method.
Script based plot generation where certain lines, or categories of lines (based on the type of information they represent), can be enabled and disabled via toggle_line or toggle_category methods.
Saving of plots with the save method.
Customisation of figures using standard matplotlib functionality.
Usage Example
Below is a usage example of ScanFig using the scan_figure instance created in the previous section. Here the results from the parameter scan of Vf2, as generated by Scan1, are shown.
End of explanation
# The four method calls below are equivalent to clicking the category buttons
# scan_figure.toggle_category('Flux Rates',True)
# scan_figure.toggle_category('J_R1',True)
# scan_figure.toggle_category('J_R2',True)
# scan_figure.toggle_category('J_R3',True)
scan_figure.interact()
Explanation: The Figure shown above is empty - to show lines we need to click on the buttons. First we will click on the Flux Rates button which will allow any of the lines that fall into the category Flux Rates to be enabled. Then we click the other buttons:
End of explanation
print 'Line names : ', scan_figure.line_names
print 'Category names : ', scan_figure.category_names
Explanation: .. note:: Certain buttons act as filters for results that fall into their category. In the case above the Flux Rates button determines the visibility of the lines that fall into the Flux Rates category. In essence it overrides the state of the buttons for the individual line categories. This feature is useful when multiple categories of results (species concentrations, elasticities, control patterns etc.) appear on the same plot, as it allows the user to toggle the visibility of all the lines in a category at once.
We can also toggle the visibility with the toggle_line and toggle_category methods. Here toggle_category has the exact same effect as the buttons in the above example, while toggle_line bypasses any category filtering. The line and category names can be accessed via line_names and category_names:
End of explanation
scan_figure.toggle_category('Flux Rates',False)
scan_figure.toggle_line('J_R1',True)
scan_figure.show()
Explanation: In the example below we set the Flux Rates visibility to False, but we set the J_R1 line visibility to True. Finally we use the show method instead of interact to display the figure.
End of explanation
scan_figure.adjust_figure()
Explanation: The figure axes can also be adjusted via the adjust_figure method. Recall that the Vf2 scan was performed for a logarithmic scale rather than a linear scale. We will therefore set the x axis to log and its minimum value to 1. These settings are applied by clicking the Apply button.
End of explanation
scan_figure.fig.set_size_inches((6,4))
scan_figure.ax.set_ylabel('Rate')
scan_figure.line_names
scan_figure.show()
Explanation: The underlying matplotlib objects can be accessed through the fig and ax fields for the figure and axes, respectively. This allows for manipulation of the figures using matplotlib's functionality.
End of explanation
scan_figure.save()
Explanation: Finally, the plot can be saved using the save method (or equivalently by pressing the save button). When no path is specified, the file will be saved as an svg vector image to the default directory as discussed under #Saving and default directories#:
End of explanation
# This path leads to the Pysces root folder
fig_file_name = '~/Pysces/example_mod_Vf2_scan.png'
# Correct path depending on platform - necessary for platform independent scripts
if platform == 'win32':
fig_file_name = psctb.utils.misc.unix_to_windows_path(fig_file_name)
else:
fig_file_name = path.expanduser(fig_file_name)
scan_figure.save(file_name=fig_file_name)
Explanation: A file name together with desired extension (and image format) can also be specified:
End of explanation
list_of_lists = [['a','b','c'],[1.2345,0.6789,0.0001011],[12,13,14]]
psctb.utils.misc.html_table(list_of_lists,
caption='Example')
Explanation: Tables
In PySCeSToolbox, results are frequently stored in a dictionary-like structure belonging to an analysis object. In most cases the dictionary will be named with _results appended to the type of results (e.g. Control coefficient results in SymCa are saved as cc_results, while the parametrised internal metabolite scan results of RateChar are saved as scan_results).
In most cases the results stored are structured so that a single dictionary key is mapped to a single result (or result object). In these cases simply inspecting the variable in the IPython/Jupyter Notebook displays these results in an html style table where each variable name is displayed together with its value, e.g. for cc_results each control coefficient will be displayed next to its value at steady-state.
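As a loose sketch of this kind of inspection (assuming a SymCa object named sc has already been set up and run for the example model, as covered in a later section):
sc.cc_results      # simply evaluating this in a notebook cell renders the html table of control coefficients and their values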
Finally, any 2D data-structure commonly used together with PySCeS and PySCeSToolbox can be displayed as an html table (e.g. lists of lists, NumPy arrays, SymPy matrices).
Usage Example
Below we will construct a list of lists and display it as an html table. Captions can be either plain text or contain html tags.
End of explanation
formatter = psctb.utils.misc.formatter_factory(min_val=0.1,
max_val=10,
default_fmt='%.1f',
outlier_fmt='%.2e')
Explanation: By default floats are all formatted according to the argument float_fmt, which defaults to %.2f (using the standard Python formatter string syntax). A formatter function can be passed as the formatter argument, which allows for more customisation.
Below we instantiate such a formatter using the formatter_factory function. Here all float values falling within the range set up by min_val and max_val (which includes the minimum, but excludes the maximum) will be formatted according to default_fmt, while outliers will be formatted according to outlier_fmt.
End of explanation
print formatter(0.09) # outlier
print formatter(0.1) # min for default
print formatter(2) # within range for default
print formatter(9) # max int for default
print formatter(10) # outlier
Explanation: The constructed formatter takes a number (e.g. float, int, etc.) as argument and returns a formatted string according to the previously set up parameters.
End of explanation
psctb.utils.misc.html_table(list_of_lists,
caption='Example',
formatter=formatter, # Previously constructed formatter
first_row_headers=True) # The first row can be set as the header
Explanation: Using this formatter with the previously constructed list_of_lists leads to a differently formatted html representation of the data:
End of explanation
model_graph = psctb.ModelGraph(mod)
Explanation: Graphic Representation of Metabolic Networks
PySCeSToolbox includes functionality for displaying interactive graph representations of metabolic networks through the ModelGraph tool. The main purpose of this feature is to allow for the visualisation of control patterns in SymCa. Currently, this tool is fairly limited in terms of its capabilities and therefore does not represent a replacement for more fully featured tools such as CellDesigner. One such limitation is that no automatic layout capabilities are included, and nodes representing species and reactions have to be laid out by hand. Nonetheless it is useful for quickly visualising the structure of a pathway and, as previously mentioned, for visualising the importance of various control patterns in SymCa.
Features
Displays interactive (d3.js based) reaction networks in the notebook.
Layouts can be saved and applied to other similar networks.
Usage Example
The main use case is for visualising control patterns. However, before ModelGraph can be used in this capacity, the graph layout has to be defined. Below we will set up the layout for the example_model.
First we load the model and instantiate a ModelGraph object using the model. The show method displays the graph.
End of explanation
model_graph.show()
Explanation: Unless a layout has been previously defined, the species and reaction nodes will be placed randomly. Nodes snap to an invisible grid.
End of explanation
# This path leads to the provided layout file
path_to_layout = '~/Pysces/psc/example_model_layout.dict'
# Correct path depending on platform - necessary for platform independent scripts
if platform == 'win32':
path_to_layout = psctb.utils.misc.unix_to_windows_path(path_to_layout)
else:
path_to_layout = path.expanduser(path_to_layout)
model_graph = psctb.ModelGraph(mod, pos_dic=path_to_layout)
model_graph.show()
Explanation: A layout file for the example_model is included (see link for details) and can be loaded by specifying the location of the layout file on the disk during ModelGraph instantiation.
End of explanation
model_graph = psctb.ModelGraph(mod)
model_graph.show()
Explanation: Clicking the Save Layout button saves this layout to the ~/Pysces/example_model/model_graph or C:\\Pysces\example_model\model_graph directory for later use. The Save Image button will save an svg image of the graph to the same location.
Now any future instantiation of a ModelGraph object for example_model will use the saved layout automatically.
End of explanation |
5,539 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step4: Who Am I?
Chris Fregly
Research Scientist @ PipelineIO
Video Series Author "High Performance Tensorflow in Production" @ OReilly (Coming Soon)
Founder @ Advanced Spark and Tensorflow Meetup
Github Repo
DockerHub Repo
Slideshare
YouTube
Who Was I?
Software Engineer @ Netflix, Databricks, IBM Spark Tech Center
Infrastructure and Tools of this Talk
Docker
Images, Containers
Useful Docker Image
Step5: Commit and Deploy New Tensorflow AI Model
Commit Model to Github
Step6: Airflow Workflow Deploys New Model through Github Post-Commit Webhook to Triggers
Step7: Train and Deploy Spark ML Model (Airbnb Model, Mutable Deploy)
Scale Out Spark Training Cluster
Kubernetes CLI
Step8: Weavescope Kubernetes AWS Cluster Visualization
Step9: Generate PMML from Spark ML Model
Step10: Step 0
Step14: Step 1
Step15: Step 2
Step16: Step 3
Step17: Step 4
Step18: Step 5
Step19: Step 6
Step20: Step 7
Step21: Step 8
Step22: Push PMML to Live, Running Spark ML Model Server (Mutable)
Step23: Deploy Java-based Model (Simple Model, Mutable Deploy)
Step24: Deploy Java Model (HttpClient Model, Mutable Deploy)
Step25: Load Test and Compare Cloud Providers (AWS and Google)
Monitor Performance Across Cloud Providers
NetflixOSS Services Dashboard (Hystrix)
Step26: Grafana + Prometheus Dashboard
Step27: Start Load Tests
Run JMeter Tests from Local Laptop (Limited by Laptop)
Run Headless JMeter Tests from Training Clusters in Cloud
Step28: End Load Tests
Step29: Rolling Deploy Tensorflow AI (Simple Model, Immutable Deploy)
Kubernetes CLI | Python Code:
import numpy as np
import os
import tensorflow as tf
from tensorflow.contrib.session_bundle import exporter
import time
# make things wide
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
from IPython.display import clear_output, Image, display, HTML
def strip_consts(graph_def, max_const_size=32):
Strip large constant values from graph_def.
strip_def = tf.GraphDef()
for n0 in graph_def.node:
n = strip_def.node.add()
n.MergeFrom(n0)
if n.op == 'Const':
tensor = n.attr['value'].tensor
size = len(tensor.tensor_content)
if size > max_const_size:
tensor.tensor_content = "<stripped %d bytes>"%size
return strip_def
def show_graph(graph_def=None, width=1200, height=800, max_const_size=32, ungroup_gradients=False):
if not graph_def:
graph_def = tf.get_default_graph().as_graph_def()
Visualize TensorFlow graph.
if hasattr(graph_def, 'as_graph_def'):
graph_def = graph_def.as_graph_def()
strip_def = strip_consts(graph_def, max_const_size=max_const_size)
data = str(strip_def)
if ungroup_gradients:
data = data.replace('"gradients/', '"b_')
#print(data)
code =
<script>
function load() {{
document.getElementById("{id}").pbtxt = {data};
}}
</script>
<link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
<div style="height:600px">
<tf-graph-basic id="{id}"></tf-graph-basic>
</div>
.format(data=repr(data), id='graph'+str(np.random.rand()))
iframe =
<iframe seamless style="width:{}px;height:{}px;border:0" srcdoc="{}"></iframe>
.format(width, height, code.replace('"', '"'))
display(HTML(iframe))
# If this errors out, increment the `export_version` variable, restart the Kernel, and re-run
#flags = tf.app.flags
#FLAGS = flags.FLAGS
#flags.DEFINE_integer("batch_size", 10, "The batch size to train")
batch_size = 10
#flags.DEFINE_integer("epoch_number", 10, "Number of epochs to run trainer")
epoch_number = 10
#flags.DEFINE_integer("steps_to_validate", 1,"Steps to validate and print loss")
steps_to_validate = 1
#flags.DEFINE_string("checkpoint_dir", "./checkpoint/", "indicates the checkpoint dirctory")
checkpoint_dir = "./checkpoint/"
#flags.DEFINE_string("model_path", "./model/", "The export path of the model")
#flags.DEFINE_string("model_path", "/root/pipeline/prediction.ml/tensorflow/models/tensorflow_minimal/export/", "The export path of the model")
model_path = "/root/pipeline/prediction.ml/tensorflow/models/tensorflow_minimal/export/"
#flags.DEFINE_integer("export_version", 27, "The version number of the model")
from datetime import datetime
seconds_since_epoch = int(datetime.now().strftime("%s"))
export_version = seconds_since_epoch
# If this errors out, increment the `export_version` variable, restart the Kernel, and re-run
def main():
# Define training data
    x = np.ones(batch_size)
    y = np.ones(batch_size)
# Define the model
X = tf.placeholder(tf.float32, shape=[None], name="X")
Y = tf.placeholder(tf.float32, shape=[None], name="yhat")
w = tf.Variable([1.0], name="weight")
b = tf.Variable([1.0], name="bias")
    loss = tf.square(Y - tf.mul(X, w) - b)
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
predict_op = tf.mul(X, w) + b
saver = tf.train.Saver()
    # checkpoint_dir is defined at module level above
checkpoint_file = checkpoint_dir + "/checkpoint.ckpt"
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
# Start the session
with tf.Session() as sess:
sess.run(tf.initialize_all_variables())
ckpt = tf.train.get_checkpoint_state(checkpoint_dir)
if ckpt and ckpt.model_checkpoint_path:
print("Continue training from the model {}".format(ckpt.model_checkpoint_path))
saver.restore(sess, ckpt.model_checkpoint_path)
saver_def = saver.as_saver_def()
print(saver_def.filename_tensor_name)
print(saver_def.restore_op_name)
# Start training
start_time = time.time()
        for epoch in range(epoch_number):
sess.run(train_op, feed_dict={X: x, Y: y})
# Start validating
            if epoch % steps_to_validate == 0:
end_time = time.time()
print("[{}] Epoch: {}".format(end_time - start_time, epoch))
saver.save(sess, checkpoint_file)
tf.train.write_graph(sess.graph_def, checkpoint_dir, 'trained_model.pb', as_text=False)
tf.train.write_graph(sess.graph_def, checkpoint_dir, 'trained_model.txt', as_text=True)
start_time = end_time
# Print model variables
w_value, b_value = sess.run([w, b])
print("The model of w: {}, b: {}".format(w_value, b_value))
# Export the model
print("Exporting trained model to {}".format(FLAGS.model_path))
model_exporter = exporter.Exporter(saver)
model_exporter.init(
sess.graph.as_graph_def(),
named_graph_signatures={
'inputs': exporter.generic_signature({"features": X}),
'outputs': exporter.generic_signature({"prediction": predict_op})
})
        model_exporter.export(model_path, tf.constant(export_version), sess)
print('Done exporting!')
if __name__ == "__main__":
main()
show_graph()
Explanation: Who Am I?
Chris Fregly
Research Scientist @ PipelineIO
Video Series Author "High Performance Tensorflow in Production" @ OReilly (Coming Soon)
Founder @ Advanced Spark and Tensorflow Meetup
Github Repo
DockerHub Repo
Slideshare
YouTube
Who Was I?
Software Engineer @ Netflix, Databricks, IBM Spark Tech Center
Infrastructure and Tools of this Talk
Docker
Images, Containers
Useful Docker Image: AWS + GPU + Docker + Tensorflow + Spark
Kubernetes
Container Orchestration Across Clusters
Weavescope
Kubernetes Cluster Visualization
Jupyter Notebooks
What We're Using Here for Everything!
Airflow
Invoke Any Type of Workflow on Any Type of Schedule
Github
Commit New Model to Github, Airflow Workflow Triggered for Continuous Deployment
DockerHub
Maintains Docker Images
Continuous Deployment
Not Just for Code, Also for ML/AI Models!
Canary Release
Deploy and Compare New Model Alongside Existing
Metrics and Dashboards
Not Just System Metrics, ML/AI Model Prediction Metrics
NetflixOSS-based
Prometheus
Grafana
Elasticsearch
Separate Cluster Concerns
Training/Admin Cluster
Prediction Cluster
Hybrid Cloud Deployment for eXtreme High Availability (XHA)
AWS and Google Cloud
Apache Spark
Tensorflow + Tensorflow Serving
Types of Model Deployment
KeyValue
ie. Recommendations
In-memory: Redis, Memcache
On-disk: Cassandra, RocksDB
First-class Servable in Tensorflow Serving
PMML
It's Useful and Well-Supported
Apple, Cisco, Airbnb, HomeAway, etc
Please Don't Re-build It - Reduce Your Technical Debt!
Native Code (CPU and GPU)
Hand-coded (Python + Pickling)
ie. Generate Java Code from PMML
Tensorflow Models
freeze_graph.py: Combine Tensorflow Graph (Static) with Trained Weights (Checkpoints) into Single Deployable Model
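A rough command-line sketch of such a freeze step (the output node name and output path below are hypothetical, not taken from this notebook):
python freeze_graph.py \
  --input_graph=checkpoint/trained_model.pb \
  --input_checkpoint=checkpoint/checkpoint.ckpt \
  --output_node_names=prediction \
  --output_graph=export/frozen_model.pb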
Model Deployments and Rollbacks
Mutable
Each New Model is Deployed to Live, Running Container
Immutable
Each New Model is a New Docker Image
Optimizing Tensorflow Models for Serving
Python Scripts
optimize_graph_for_inference.py
Pete Warden's Blog
Graph Transform Tool (an example invocation is sketched at the end of this section)
Compile (Tensorflow 1.0+)
XLA Compiler
Compiles 3 graph operations (input, operation, output) into 1 operation
Removes need for Tensorflow Runtime (20 MB is significant on tiny devices)
Allows new backends for hardware-specific optimizations (better portability)
tfcompile
Convert Graph into executable code
Compress/Distill Ensemble Models
Convert ensembles or other complex models into smaller models
Re-score training data with output of model being distilled
Train smaller model to produce same output
Output of smaller model learns more information than original label
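A rough sketch of the Graph Transform Tool pass mentioned above (the graph paths, node names, and chosen transforms are illustrative only and depend on your model):
bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
  --in_graph=frozen_model.pb \
  --out_graph=optimized_model.pb \
  --inputs='features' \
  --outputs='prediction' \
  --transforms='strip_unused_nodes fold_constants(ignore_errors=true) fold_batch_norms quantize_weights'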
Optimizing Model Serving Runtime Environment
Throughput
Option 1: Add more Tensorflow Serving servers behind load balancer
Option 2: Enable request batching in each Tensorflow Serving
Option Trade-offs: Higher Latency (bad) for Higher Throughput (good)
$TENSORFLOW_SERVING_HOME/bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server
--port=9000
--model_name=tensorflow_minimal
--model_base_path=/root/models/tensorflow_minimal/export
--enable_batching=true
--max_batch_size=1000000
--batch_timeout_micros=10000
--max_enqueued_batches=1000000
Latency
The deeper the model, the longer the latency
Start inference in parallel where possible (ie. user inference in parallel with item inference)
Pre-load common inputs from database (ie. user attributes, item attributes)
Pre-compute/partial-compute common inputs (ie. popular word embeddings)
Memory
Word embeddings are huge!
Use hashId for each word
Off-load embedding matrices to parameter server and share between serving servers
Demos!!
Train and Deploy Tensorflow AI Model (Simple Model, Immutable Deploy)
Train Tensorflow AI Model
End of explanation
!ls -l /root/pipeline/prediction.ml/tensorflow/models/tensorflow_minimal/export
!ls -l /root/pipeline/prediction.ml/tensorflow/models/tensorflow_minimal/export/00000027
!git status
!git add --all /root/pipeline/prediction.ml/tensorflow/models/tensorflow_minimal/export/00000027/
!git status
!git commit -m "updated tensorflow model"
!git status
# If this fails with "Permission denied", use terminal within jupyter to manually `git push`
!git push
Explanation: Commit and Deploy New Tensorflow AI Model
Commit Model to Github
End of explanation
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
from IPython.display import clear_output, Image, display, HTML
html = '<iframe width=100% height=500px src="http://demo.pipeline.io:8080/admin">'
display(HTML(html))
Explanation: Airflow Workflow Deploys New Model through Github Post-Commit Webhook to Triggers
End of explanation
!kubectl scale --context=awsdemo --replicas=2 rc spark-worker-2-0-1
!kubectl get pod --context=awsdemo
Explanation: Train and Deploy Spark ML Model (Airbnb Model, Mutable Deploy)
Scale Out Spark Training Cluster
Kubernetes CLI
End of explanation
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
from IPython.display import clear_output, Image, display, HTML
html = '<iframe width=100% height=500px src="http://kubernetes-aws.demo.pipeline.io">'
display(HTML(html))
Explanation: Weavescope Kubernetes AWS Cluster Visualization
End of explanation
from pyspark.ml.linalg import Vectors
from pyspark.ml.feature import VectorAssembler, StandardScaler
from pyspark.ml.feature import OneHotEncoder, StringIndexer
from pyspark.ml import Pipeline, PipelineModel
from pyspark.ml.regression import LinearRegression
# You may need to Reconnect (more than Restart) the Kernel to pick up changes to these settings
import os
master = '--master spark://spark-master-2-1-0:7077'
conf = '--conf spark.cores.max=1 --conf spark.executor.memory=512m'
packages = '--packages com.amazonaws:aws-java-sdk:1.7.4,org.apache.hadoop:hadoop-aws:2.7.1'
jars = '--jars /root/lib/jpmml-sparkml-package-1.0-SNAPSHOT.jar'
py_files = '--py-files /root/lib/jpmml.py'
os.environ['PYSPARK_SUBMIT_ARGS'] = master \
+ ' ' + conf \
+ ' ' + packages \
+ ' ' + jars \
+ ' ' + py_files \
+ ' ' + 'pyspark-shell'
print(os.environ['PYSPARK_SUBMIT_ARGS'])
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
Explanation: Generate PMML from Spark ML Model
End of explanation
df = spark.read.format("csv") \
.option("inferSchema", "true").option("header", "true") \
.load("s3a://datapalooza/airbnb/airbnb.csv.bz2")
df.registerTempTable("df")
print(df.head())
print(df.count())
Explanation: Step 0: Load Libraries and Data
End of explanation
df_filtered = df.filter("price >= 50 AND price <= 750 AND bathrooms > 0.0 AND bedrooms is not null")
df_filtered.registerTempTable("df_filtered")
df_final = spark.sql(
select
id,
city,
case when state in('NY', 'CA', 'London', 'Berlin', 'TX' ,'IL', 'OR', 'DC', 'WA')
then state
else 'Other'
end as state,
space,
cast(price as double) as price,
cast(bathrooms as double) as bathrooms,
cast(bedrooms as double) as bedrooms,
room_type,
host_is_super_host,
cancellation_policy,
cast(case when security_deposit is null
then 0.0
else security_deposit
end as double) as security_deposit,
price_per_bedroom,
cast(case when number_of_reviews is null
then 0.0
else number_of_reviews
end as double) as number_of_reviews,
cast(case when extra_people is null
then 0.0
else extra_people
end as double) as extra_people,
instant_bookable,
cast(case when cleaning_fee is null
then 0.0
else cleaning_fee
end as double) as cleaning_fee,
cast(case when review_scores_rating is null
then 80.0
else review_scores_rating
end as double) as review_scores_rating,
cast(case when square_feet is not null and square_feet > 100
then square_feet
when (square_feet is null or square_feet <=100) and (bedrooms is null or bedrooms = 0)
then 350.0
else 380 * bedrooms
end as double) as square_feet
from df_filtered
).persist()
df_final.registerTempTable("df_final")
df_final.select("square_feet", "price", "bedrooms", "bathrooms", "cleaning_fee").describe().show()
print(df_final.count())
print(df_final.schema)
# Most popular cities
spark.sql(
select
state,
count(*) as ct,
avg(price) as avg_price,
max(price) as max_price
from df_final
group by state
order by count(*) desc
).show()
# Most expensive popular cities
spark.sql(
select
city,
count(*) as ct,
avg(price) as avg_price,
max(price) as max_price
from df_final
group by city
order by avg(price) desc
).filter("ct > 25").show()
Explanation: Step 1: Clean, Filter, and Summarize the Data
End of explanation
continuous_features = ["bathrooms", \
"bedrooms", \
"security_deposit", \
"cleaning_fee", \
"extra_people", \
"number_of_reviews", \
"square_feet", \
"review_scores_rating"]
categorical_features = ["room_type", \
"host_is_super_host", \
"cancellation_policy", \
"instant_bookable", \
"state"]
Explanation: Step 2: Define Continous and Categorical Features
End of explanation
[training_dataset, validation_dataset] = df_final.randomSplit([0.8, 0.2])
Explanation: Step 3: Split Data into Training and Validation
End of explanation
continuous_feature_assembler = VectorAssembler(inputCols=continuous_features, outputCol="unscaled_continuous_features")
continuous_feature_scaler = StandardScaler(inputCol="unscaled_continuous_features", outputCol="scaled_continuous_features", \
withStd=True, withMean=False)
Explanation: Step 4: Continous Feature Pipeline
End of explanation
categorical_feature_indexers = [StringIndexer(inputCol=x, \
outputCol="{}_index".format(x)) \
for x in categorical_features]
categorical_feature_one_hot_encoders = [OneHotEncoder(inputCol=x.getOutputCol(), \
outputCol="oh_encoder_{}".format(x.getOutputCol() )) \
for x in categorical_feature_indexers]
Explanation: Step 5: Categorical Feature Pipeline
End of explanation
feature_cols_lr = [x.getOutputCol() \
for x in categorical_feature_one_hot_encoders]
feature_cols_lr.append("scaled_continuous_features")
feature_assembler_lr = VectorAssembler(inputCols=feature_cols_lr, \
outputCol="features_lr")
Explanation: Step 6: Assemble our Features and Feature Pipeline
End of explanation
linear_regression = LinearRegression(featuresCol="features_lr", \
labelCol="price", \
predictionCol="price_prediction", \
maxIter=10, \
regParam=0.3, \
elasticNetParam=0.8)
estimators_lr = \
[continuous_feature_assembler, continuous_feature_scaler] \
+ categorical_feature_indexers + categorical_feature_one_hot_encoders \
+ [feature_assembler_lr] + [linear_regression]
pipeline = Pipeline(stages=estimators_lr)
pipeline_model = pipeline.fit(training_dataset)
print(pipeline_model)
Explanation: Step 7: Train a Linear Regression Model
End of explanation
from jpmml import toPMMLBytes
model_bytes = toPMMLBytes(spark, training_dataset, pipeline_model)
print(model_bytes.decode("utf-8"))
Explanation: Step 8: Convert PipelineModel to PMML
End of explanation
import urllib.request
namespace = 'default'
model_name = 'airbnb'
version = '1'
update_url = 'http://prediction-pmml-aws.demo.pipeline.io/update-pmml/%s/%s/%s' % (namespace, model_name, version)
update_headers = {}
update_headers['Content-type'] = 'application/xml'
req = urllib.request.Request(update_url, \
headers=update_headers, \
data=model_bytes)
resp = urllib.request.urlopen(req)
print(resp.status) # Should return Http Status 200
import urllib.parse
import json
namespace = 'default'
model_name = 'airbnb'
version = '1'
evaluate_url = 'http://prediction-pmml-aws.demo.pipeline.io/evaluate-pmml/%s/%s/%s' % (namespace, model_name, version)
evaluate_headers = {}
evaluate_headers['Content-type'] = 'application/json'
input_params = '{"bathrooms":5.0, \
"bedrooms":4.0, \
"security_deposit":175.00, \
"cleaning_fee":25.0, \
"extra_people":1.0, \
"number_of_reviews": 2.0, \
"square_feet": 250.0, \
"review_scores_rating": 2.0, \
"room_type": "Entire home/apt", \
"host_is_super_host": "0.0", \
"cancellation_policy": "flexible", \
"instant_bookable": "1.0", \
"state": "CA"}'
encoded_input_params = input_params.encode('utf-8')
req = urllib.request.Request(evaluate_url, \
headers=evaluate_headers, \
data=encoded_input_params)
resp = urllib.request.urlopen(req)
print(resp.read())
Explanation: Push PMML to Live, Running Spark ML Model Server (Mutable)
End of explanation
from urllib import request
sourceBytes = ' \n\
private String str; \n\
\n\
public void initialize(Map<String, Object> args) { \n\
} \n\
\n\
public Object predict(Map<String, Object> inputs) { \n\
String id = (String)inputs.get("id"); \n\
\n\
return id.equals("21619"); \n\
} \n\
'.encode('utf-8')
from urllib import request
namespace = 'default'
model_name = 'java_equals'
version = '1'
update_url = 'http://prediction-java-aws.demo.pipeline.io/update-java/%s/%s/%s' % (namespace, model_name, version)
update_headers = {}
update_headers['Content-type'] = 'text/plain'
req = request.Request("%s" % update_url, headers=update_headers, data=sourceBytes)
resp = request.urlopen(req)
generated_code = resp.read()
print(generated_code.decode('utf-8'))
from urllib import request
namespace = 'default'
model_name = 'java_equals'
version = '1'
evaluate_url = 'http://prediction-java-aws.demo.pipeline.io/evaluate-java/%s/%s/%s' % (namespace, model_name, version)
evaluate_headers = {}
evaluate_headers['Content-type'] = 'application/json'
input_params = '{"id":"21618"}'
encoded_input_params = input_params.encode('utf-8')
req = request.Request(evaluate_url, headers=evaluate_headers, data=encoded_input_params)
resp = request.urlopen(req)
print(resp.read()) # Should return false
from urllib import request
namespace = 'default'
model_name = 'java_equals'
version = '1'
evaluate_url = 'http://prediction-java-aws.demo.pipeline.io/evaluate-java/%s/%s/%s' % (namespace, model_name, version)
evaluate_headers = {}
evaluate_headers['Content-type'] = 'application/json'
input_params = '{"id":"21619"}'
encoded_input_params = input_params.encode('utf-8')
req = request.Request(evaluate_url, headers=evaluate_headers, data=encoded_input_params)
resp = request.urlopen(req)
print(resp.read()) # Should return true
Explanation: Deploy Java-based Model (Simple Model, Mutable Deploy)
End of explanation
from urllib import request
sourceBytes = ' \n\
public Map<String, Object> data = new HashMap<String, Object>(); \n\
private String host = "http://prediction-keyvalue-aws.demo.pipeline.io/";\n\
private String path = "evaluate-keyvalue/default/1"; \n\
\n\
public void initialize(Map<String, Object> args) { \n\
data.put("url", host + path); \n\
} \n\
\n\
public Object predict(Map<String, Object> inputs) { \n\
try { \n\
String userId = (String)inputs.get("userId"); \n\
String itemId = (String)inputs.get("itemId"); \n\
String url = data.get("url") + "/" + userId + "/" + itemId; \n\
\n\
return org.apache.http.client.fluent.Request \n\
.Get(url) \n\
.execute() \n\
.returnContent(); \n\
\n\
} catch(Exception exc) { \n\
System.out.println(exc); \n\
throw exc; \n\
} \n\
} \n\
'.encode('utf-8')
from urllib import request
namespace = 'default'
model_name = 'java_httpclient'
version = '1'
update_url = 'http://prediction-java-aws.demo.pipeline.io/update-java/%s/%s/%s' % (namespace, model_name, version)
update_headers = {}
update_headers['Content-type'] = 'text/plain'
req = request.Request("%s" % update_url, headers=update_headers, data=sourceBytes)
resp = request.urlopen(req)
generated_code = resp.read()
print(generated_code.decode('utf-8'))
from urllib import request
namespace = 'default'
model_name = 'java_httpclient'
version = '1'
evaluate_url = 'http://prediction-java-aws.demo.pipeline.io/evaluate-java/%s/%s/%s' % (namespace, model_name, version)
evaluate_headers = {}
evaluate_headers['Content-type'] = 'application/json'
input_params = '{"userId":"21619", "itemId":"10006"}'
encoded_input_params = input_params.encode('utf-8')
req = request.Request(evaluate_url, headers=evaluate_headers, data=encoded_input_params)
resp = request.urlopen(req, timeout=3)
print(resp.read()) # Should return false
Explanation: Deploy Java Model (HttpClient Model, Mutable Deploy)
End of explanation
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
from IPython.display import clear_output, Image, display, HTML
html = '<iframe width=1200px height=500px src="http://hystrix.demo.pipeline.io/hystrix-dashboard/monitor/monitor.html?streams=%5B%7B%22name%22%3A%22Predictions%20-%20AWS%22%2C%22stream%22%3A%22http%3A%2F%2Fturbine-aws.demo.pipeline.io%2Fturbine.stream%22%2C%22auth%22%3A%22%22%2C%22delay%22%3A%22%22%7D%2C%7B%22name%22%3A%22Predictions%20-%20GCP%22%2C%22stream%22%3A%22http%3A%2F%2Fturbine-gcp.demo.pipeline.io%2Fturbine.stream%22%2C%22auth%22%3A%22%22%2C%22delay%22%3A%22%22%7D%5D">'
display(HTML(html))
Explanation: Load Test and Compare Cloud Providers (AWS and Google)
Monitor Performance Across Cloud Providers
NetflixOSS Services Dashboard (Hystrix)
End of explanation
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
from IPython.display import clear_output, Image, display, HTML
html = '<iframe width=1200px height=500px src="http://grafana.demo.pipeline.io">'
display(HTML(html))
Explanation: Grafana + Prometheus Dashboard
End of explanation
# Spark ML - PMML - Airbnb
!kubectl create --context=awsdemo -f /root/pipeline/loadtest.ml/loadtest-aws-airbnb-rc.yaml
!kubectl create --context=gcpdemo -f /root/pipeline/loadtest.ml/loadtest-aws-airbnb-rc.yaml
# Codegen - Java - Simple
!kubectl create --context=awsdemo -f /root/pipeline/loadtest.ml/loadtest-aws-equals-rc.yaml
!kubectl create --context=gcpdemo -f /root/pipeline/loadtest.ml/loadtest-aws-equals-rc.yaml
# Tensorflow AI - Tensorflow Serving - Simple
!kubectl create --context=awsdemo -f /root/pipeline/loadtest.ml/loadtest-aws-minimal-rc.yaml
!kubectl create --context=gcpdemo -f /root/pipeline/loadtest.ml/loadtest-aws-minimal-rc.yaml
Explanation: Start Load Tests
Run JMeter Tests from Local Laptop (Limited by Laptop)
Run Headless JMeter Tests from Training Clusters in Cloud
End of explanation
!kubectl delete --context=awsdemo rc loadtest-aws-airbnb
!kubectl delete --context=gcpdemo rc loadtest-aws-airbnb
!kubectl delete --context=awsdemo rc loadtest-aws-equals
!kubectl delete --context=gcpdemo rc loadtest-aws-equals
!kubectl delete --context=awsdemo rc loadtest-aws-minimal
!kubectl delete --context=gcpdemo rc loadtest-aws-minimal
Explanation: End Load Tests
End of explanation
!kubectl rolling-update prediction-tensorflow --context=awsdemo --image-pull-policy=Always --image=fluxcapacitor/prediction-tensorflow
!kubectl get pod --context=awsdemo
!kubectl rolling-update prediction-tensorflow --context=gcpdemo --image-pull-policy=Always --image=fluxcapacitor/prediction-tensorflow
!kubectl get pod --context=gcpdemo
Explanation: Rolling Deploy Tensorflow AI (Simple Model, Immutable Deploy)
Kubernetes CLI
End of explanation |
5,540 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 align = 'center'> Neural Networks Demystified </h1>
<h2 align = 'center'> Part 2
Step1: <h3 align = 'center'> Variables </h3>
|Code Symbol | Math Symbol | Definition | Dimensions
| :-: | :-: | :-: | :-: |
|X|$$X$$|Input Data, each row in an example| (numExamples, inputLayerSize)|
|y |$$y$$|target data|(numExamples, outputLayerSize)|
|W1 | $$W^{(1)}$$ | Layer 1 weights | (inputLayerSize, hiddenLayerSize) |
|W2 | $$W^{(2)}$$ | Layer 2 weights | (hiddenLayerSize, outputLayerSize) |
|z2 | $$z^{(2)}$$ | Layer 2 activation | (numExamples, hiddenLayerSize) |
|a2 | $$a^{(2)}$$ | Layer 2 activity | (numExamples, hiddenLayerSize) |
|z3 | $$z^{(3)}$$ | Layer 3 activation | (numExamples, outputLayerSize) |
Step2: Each input value, or element in matrix X, needs to be multiplied by a corresponding weight and then added together with all the other results for each neuron. This is a complex operation, but if we take the three outputs we're looking for as a single row of a matrix, and place all our individual weights into a matrix of weights, we can create the exact behavior we need by multiplying our input data matrix by our weight matrix. Using matrix multiplication allows us to pass multiple inputs through at once by simply adding rows to the matrix X. From here on out, we'll refer to these matrices as X, W one, and z two, where z two is the activity of our second layer. Notice that each entry in z is a sum of weighted inputs to each hidden neuron. Z is of size 3 by 3, one row for each example, and one column for each hidden unit.
We now have our first official formula, $z^{(2)} = XW^{(1)}$. Matrix notation is really nice here, because it allows us to express the complex underlying process in a single line!
$$
z^{(2)} = XW^{(1)} \tag{1}\
$$
Now that we have the activities for our second layer, z two, we need to apply the activation function. We'll independently apply the function to each entry in matrix z using a python method for this called sigmoid, because we’re using a sigmoid as our activation function. Using numpy is really nice here, because we can pass in a scalar, vector, or matrix, Numpy will apply the activation function element-wise, and return a result of the same dimension as it was given.
Step3: We now have our second formula for forward propagation, using f to denote our activation function, we can write that a two, our second layer activity, is equal to f of z two. a two will be a matrix of the same size as z two, 3 by 3.
$$
a^{(2)} = f(z^{(2)}) \tag{2}\
$$
To finish forward propagation we need to propagate a two all the way to the output, yhat. We've already done the heavy lifting in the previous layer, so all we have to do now is multiply a two by our second layer weights W2 and apply one more activation function. W2 will be of size 3x1, one weight for each synapse. Multiplying a2, a 3 by 3, by W2, a 3 by 1, results in a 3 by 1 matrix z three, the activity of our third layer. z3 has three activity values, one for each example. Last but not least, we'll apply our activation function to z three yielding our official estimate of your test score, yHat.
$$
z^{(3)} = a^{(2)}W^{(2)} \tag{3}\
$$
$$
\hat{y} = f(z^{(3)}) \tag{4}\
$$
We need to implement our forward propagation formulas in python. First we'll initialize our weight matrices in our init method. For starting values, we'll use random numbers.
We'll implement forward propagation in our forward method, using numpy's built in dot method for matrix multiplication and our own sigmoid method. | Python Code:
from IPython.display import YouTubeVideo
YouTubeVideo('UJwK6jAStmg')
Explanation: <h1 align = 'center'> Neural Networks Demystified </h1>
<h2 align = 'center'> Part 2: Forward Propagation </h2>
<h4 align = 'center' > @stephencwelch </h4>
End of explanation
#Import code from last time
%pylab inline
from partOne import *
print X.shape, y.shape
class Neural_Network(object):
def __init__(self):
#Define Hyperparameters
self.inputLayerSize = 2
self.outputLayerSize = 1
self.hiddenLayerSize = 3
def forward(self, X):
        #Propagate inputs through network
        pass
Explanation: <h3 align = 'center'> Variables </h3>
|Code Symbol | Math Symbol | Definition | Dimensions
| :-: | :-: | :-: | :-: |
|X|$$X$$|Input Data, each row in an example| (numExamples, inputLayerSize)|
|y |$$y$$|target data|(numExamples, outputLayerSize)|
|W1 | $$W^{(1)}$$ | Layer 1 weights | (inputLayerSize, hiddenLayerSize) |
|W2 | $$W^{(2)}$$ | Layer 2 weights | (hiddenLayerSize, outputLayerSize) |
|z2 | $$z^{(2)}$$ | Layer 2 activation | (numExamples, hiddenLayerSize) |
|a2 | $$a^{(2)}$$ | Layer 2 activity | (numExamples, hiddenLayerSize) |
|z3 | $$z^{(3)}$$ | Layer 3 activation | (numExamples, outputLayerSize) |
Last time, we setup our neural network on paper. This time, we’ll implement it in the programming language python. We’ll build our network as a python class and our init method will take care of instantiating important constants and variables. We’ll make these values accessible to the whole class by placing a self dot in front of each variable name.
Our network has 2 inputs, 3 hidden units, and 1 output. These are examples of hyperparameters. Hyperparameters are constants that establish the structure and behavior of a neural network, but are not updated as we train the network. Our learning algorithm is not capable of, for example, deciding that it needs another hidden unit, this is something that WE must decide on before training. What a neural network does learn are parameters, specifically the weights on the synapses.
We’ll take care of moving data through our network in a method called forward. Rather than pass inputs through the network one at a time, we’re going to use matrices to pass through multiple inputs at once. Doing this allows for big computational speedups, especially when using tools like MATLAB or Numpy. Our input data matrix, X, is of dimension 3 by 2, because we have 3, 2-dimensional examples. Our corresponding output data, y, is of dimension 3 by 1.
End of explanation
def sigmoid(z):
#Apply sigmoid activation function to scalar, vector, or matrix
return 1/(1+np.exp(-z))
testInput = np.arange(-6,6,0.01)
plot(testInput, sigmoid(testInput), linewidth= 2)
grid(1)
sigmoid(1)
sigmoid(np.array([-1,0,1]))
sigmoid(np.random.randn(3,3))
Explanation: Each input value, or element in matrix X, needs to be multiplied by a corresponding weight and then added together with all the other results for each neuron. This is a complex operation, but if we take the three outputs we're looking for as a single row of a matrix, and place all our individual weights into a matrix of weights, we can create the exact behavior we need by multiplying our input data matrix by our weight matrix. Using matrix multiplication allows us to pass multiple inputs through at once by simply adding rows to the matrix X. From here on out, we'll refer to these matrices as X, W one, and z two, where z two is the activity of our second layer. Notice that each entry in z is a sum of weighted inputs to each hidden neuron. Z is of size 3 by 3, one row for each example, and one column for each hidden unit.
We now have our first official formula, $z^{(2)} = XW^{(1)}$. Matrix notation is really nice here, because it allows us to express the complex underlying process in a single line!
$$
z^{(2)} = XW^{(1)} \tag{1}\
$$
Now that we have the activities for our second layer, z two, we need to apply the activation function. We'll independently apply the function to each entry in matrix z using a python method for this called sigmoid, because we’re using a sigmoid as our activation function. Using numpy is really nice here, because we can pass in a scalar, vector, or matrix, Numpy will apply the activation function element-wise, and return a result of the same dimension as it was given.
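As a quick numeric check of the z two dimensions discussed above (the random weight matrix below is purely illustrative and is not the one our network will use):
W1_demo = np.random.randn(2, 3)   # (inputLayerSize, hiddenLayerSize)
z2_demo = np.dot(X, W1_demo)      # X is (3, 2), so z2_demo is (3, 3)
print(z2_demo.shape)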
End of explanation
class Neural_Network(object):
def __init__(self):
#Define Hyperparameters
self.inputLayerSize = 2
self.outputLayerSize = 1
self.hiddenLayerSize = 3
#Weights (parameters)
self.W1 = np.random.randn(self.inputLayerSize, self.hiddenLayerSize)
self.W2 = np.random.randn(self.hiddenLayerSize, self.outputLayerSize)
def forward(self, X):
        #Propagate inputs through network
self.z2 = np.dot(X, self.W1)
self.a2 = self.sigmoid(self.z2)
self.z3 = np.dot(self.a2, self.W2)
yHat = self.sigmoid(self.z3)
return yHat
def sigmoid(self, z):
#Apply sigmoid activation function to scalar, vector, or matrix
return 1/(1+np.exp(-z))
Explanation: We now have our second formula for forward propagation, using f to denote our activation function, we can write that a two, our second layer activity, is equal to f of z two. a two will be a matrix of the same size as z two, 3 by 3.
$$
a^{(2)} = f(z^{(2)}) \tag{2}\
$$
To finish forward propagation we need to propagate a two all the way to the output, yhat. We've already done the heavy lifting in the previous layer, so all we have to do now is multiply a two by our second layer weights W2 and apply one more activation function. W2 will be of size 3x1, one weight for each synapse. Multiplying a2, a 3 by 3, by W2, a 3 by 1, results in a 3 by 1 matrix z three, the activity of our third layer. z3 has three activity values, one for each example. Last but not least, we'll apply our activation function to z three yielding our official estimate of your test score, yHat.
$$
z^{(3)} = a^{(2)}W^{(2)} \tag{3}\
$$
$$
\hat{y} = f(z^{(3)}) \tag{4}\
$$
We need to implement our forward propagation formulas in python. First we'll initialize our weight matrices in our init method. For starting values, we'll use random numbers.
We'll implement forward propagation in our forward method, using numpy's built in dot method for matrix multiplication and our own sigmoid method.
End of explanation |
5,541 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocean
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables
Is Required
Step9: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required
Step10: 2.2. Eos Functional Temp
Is Required
Step11: 2.3. Eos Functional Salt
Is Required
Step12: 2.4. Eos Functional Depth
Is Required
Step13: 2.5. Ocean Freezing Point
Is Required
Step14: 2.6. Ocean Specific Heat
Is Required
Step15: 2.7. Ocean Reference Density
Is Required
Step16: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required
Step17: 3.2. Type
Is Required
Step18: 3.3. Ocean Smoothing
Is Required
Step19: 3.4. Source
Is Required
Step20: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatement in ocean
4.1. Isolated Seas
Is Required
Step21: 4.2. River Mouth
Is Required
Step22: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required
Step23: 5.2. Code Version
Is Required
Step24: 5.3. Code Languages
Is Required
Step25: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required
Step26: 6.2. Canonical Horizontal Resolution
Is Required
Step27: 6.3. Range Horizontal Resolution
Is Required
Step28: 6.4. Number Of Horizontal Gridpoints
Is Required
Step29: 6.5. Number Of Vertical Levels
Is Required
Step30: 6.6. Is Adaptive Grid
Is Required
Step31: 6.7. Thickness Level 1
Is Required
Step32: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required
Step33: 7.2. Global Mean Metrics Used
Is Required
Step34: 7.3. Regional Metrics Used
Is Required
Step35: 7.4. Trend Metrics Used
Is Required
Step36: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required
Step37: 8.2. Scheme
Is Required
Step38: 8.3. Consistency Properties
Is Required
Step39: 8.4. Corrected Conserved Prognostic Variables
Is Required
Step40: 8.5. Was Flux Correction Used
Is Required
Step41: 9. Grid
Ocean grid
9.1. Overview
Is Required
Step42: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required
Step43: 10.2. Partial Steps
Is Required
Step44: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required
Step45: 11.2. Staggering
Is Required
Step46: 11.3. Scheme
Is Required
Step47: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required
Step48: 12.2. Diurnal Cycle
Is Required
Step49: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required
Step50: 13.2. Time Step
Is Required
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required
Step52: 14.2. Scheme
Is Required
Step53: 14.3. Time Step
Is Required
Step54: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required
Step55: 15.2. Time Step
Is Required
Step56: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required
Step57: 17. Advection
Ocean advection
17.1. Overview
Is Required
Step58: 18. Advection --> Momentum
Properties of lateral momemtum advection scheme in ocean
18.1. Type
Is Required
Step59: 18.2. Scheme Name
Is Required
Step60: 18.3. ALE
Is Required
Step61: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required
Step62: 19.2. Flux Limiter
Is Required
Step63: 19.3. Effective Order
Is Required
Step64: 19.4. Name
Is Required
Step65: 19.5. Passive Tracers
Is Required
Step66: 19.6. Passive Tracers Advection
Is Required
Step67: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required
Step68: 20.2. Flux Limiter
Is Required
Step69: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required
Step70: 21.2. Scheme
Is Required
Step71: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required
Step72: 22.2. Order
Is Required
Step73: 22.3. Discretisation
Is Required
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean
23.1. Type
Is Required
Step75: 23.2. Constant Coefficient
Is Required
Step76: 23.3. Variable Coefficient
Is Required
Step77: 23.4. Coeff Background
Is Required
Step78: 23.5. Coeff Backscatter
Is Required
Step79: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required
Step80: 24.2. Submesoscale Mixing
Is Required
Step81: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required
Step82: 25.2. Order
Is Required
Step83: 25.3. Discretisation
Is Required
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required
Step85: 26.2. Constant Coefficient
Is Required
Step86: 26.3. Variable Coefficient
Is Required
Step87: 26.4. Coeff Background
Is Required
Step88: 26.5. Coeff Backscatter
Is Required
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required
Step90: 27.2. Constant Val
Is Required
Step91: 27.3. Flux Type
Is Required
Step92: 27.4. Added Diffusivity
Is Required
Step93: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required
Step96: 30.2. Closure Order
Is Required
Step97: 30.3. Constant
Is Required
Step98: 30.4. Background
Is Required
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required
Step100: 31.2. Closure Order
Is Required
Step101: 31.3. Constant
Is Required
Step102: 31.4. Background
Is Required
Step103: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean *
32.1. Convection Type
Is Required
Step104: 32.2. Tide Induced Mixing
Is Required
Step105: 32.3. Double Diffusion
Is Required
Step106: 32.4. Shear Mixing
Is Required
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean *
33.1. Type
Is Required
Step108: 33.2. Constant
Is Required
Step109: 33.3. Profile
Is Required
Step110: 33.4. Background
Is Required
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required
Step112: 34.2. Constant
Is Required
Step113: 34.3. Profile
Is Required
Step114: 34.4. Background
Is Required
Step115: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required
Step116: 35.2. Scheme
Is Required
Step117: 35.3. Embeded Seaice
Is Required
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required
Step119: 36.2. Type Of Bbl
Is Required
Step120: 36.3. Lateral Mixing Coef
Is Required
Step121: 36.4. Sill Overflow
Is Required
Step122: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required
Step123: 37.2. Surface Pressure
Is Required
Step124: 37.3. Momentum Flux Correction
Is Required
Step125: 37.4. Tracers Flux Correction
Is Required
Step126: 37.5. Wave Effects
Is Required
Step127: 37.6. River Runoff Budget
Is Required
Step128: 37.7. Geothermal Heating
Is Required
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required
Step132: 40.2. Ocean Colour
Is Required
Step133: 40.3. Extinction Depth
Is Required
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmosphere
Is Required
Step135: 41.2. From Sea Ice
Is Required
Step136: 41.3. Forced Mode Restoring
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cas', 'sandbox-3', 'ocean')
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: CAS
Source ID: SANDBOX-3
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:45
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
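The call above takes a free-text name and an email address. A minimal hypothetical illustration (the person below is a placeholder, not an actual author of this document):
# Hypothetical example only - substitute the real author details
DOC.set_author("Jane Doe", "jane.doe@example.org")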
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusivity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
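As an illustrative assumption only (not the documented CAS SANDBOX-3 setting), a model belonging to the OGCM family would be recorded by picking that entry from the valid choices listed above:
# Hypothetical example - choose the family that matches the model being documented
DOC.set_value("OGCM")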
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
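Because this property has cardinality 1.N, DOC.set_value() is called once per approximation. The two choices below are assumptions used purely for illustration:
# Hypothetical example - one call per approximation actually made by the model
DOC.set_value("Primitive equations")
DOC.set_value("Boussinesq")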
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
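FLOAT properties take an unquoted number. The value below is only an illustrative assumption (a TEOS-10-style specific heat of roughly 3991.9 J/(kg K)):
# Hypothetical example - seawater specific heat in J/(kg K)
DOC.set_value(3991.9)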
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
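BOOLEAN properties take an unquoted True or False. The answer below is an assumption for illustration only:
# Hypothetical example - True means the bathymetry does not change in time
DOC.set_value(True)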
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Nonoceanic Waters
Non-oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
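INTEGER properties take a plain number. The grid size below (a nominal 1442 x 1021 tripolar grid) is an assumption chosen only to show the call:
# Hypothetical example - total number of horizontal grid points
DOC.set_value(1442 * 1021)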
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process-oriented metrics, and any possible conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
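Time steps are given in seconds as integers. The value below is a made-up example, not the actual model time step:
# Hypothetical example - a 1350 s tracer time step
DOC.set_value(1350)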
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
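With cardinality 0.N this property may be left unset or filled with several of the valid choices above; the tracers below are assumptions for illustration:
# Hypothetical example - list each passive tracer that is advected
DOC.set_value("CFC 11")
DOC.set_value("SF6")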
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different from that of active tracers ? If so, describe.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusivity Coeff
Properties of eddy diffusivity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusivity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusivity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusivity coeff in lateral physics tracers scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusivity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusivity coeff in lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (M2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Properties of boundary layer (BL) mixing on tracers in the ocean
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specify order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specify coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Properties of boundary layer (BL) mixing on momentum in the ocean
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specify order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specify coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
Properties of interior mixing in the ocean
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
Properties of interior mixing on tracers in the ocean
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specify coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
Properties of interior mixing on momentum in the ocean
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specify coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
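Purely as an assumed example (the real entry depends on the model), a linear implicit free surface would be recorded as:
# Hypothetical example - pick the free surface scheme actually used
DOC.set_value("Linear implicit")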
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 35.3. Embedded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
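For illustration only, an ENUM property like the one above is completed by passing one of the listed choices to DOC.set_value; the value below is just a hypothetical example, not a statement about any particular model:
# hypothetical example of completing the property above
DOC.set_value("Non-linear")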
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinction depths for the sunlight penetration scheme (if applicable).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmosphere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation |
5,542 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Presentation
This notebook is a modified version (with added comments) of a presentation given by Bart Grasza on 2017.07.15
Goals of the notebook
This notebook has multiple purposes
Step1: Python lambda - anonymous function
(you can skip this section if you already know Python's lambda functions)
Before we dive into the Keras Lambda layer, you should get familiar with Python's lambda function. You can read more about it e.g. here
Step2: If argument passed is list of lists, you can refer to it's elements in standard fashion
Step3: Lambda function can be "un-anomized"
Step4: Keras Lambda layer - custom operations on model
Below is a simple Keras model (fed with randomly generated data) that shows how you can perform custom operations on the data coming from the previous layer. Here we take the data from dense_layer and multiply it by 2
Step5: Model using custom operation (cosine similarity) on "output"
All the examples in Keras online documentation and in http
Step6: I skip the details of Keras implementation of above equation, instead focus on the Lambda layer.
Notes
Step8: Let's have a look at backend implementation.
If you git clone Keras repository (https
Step9: As you can see, the keras l2_normalize is very thin layer on top of tensorflow tf.nn.l2_normalize function.
What would happen if we just replace all keras functions with tensorflow equivalents?
Below you can see exactly the same model as above but directly using TensorFlow methods
Step10: It still works!
What are the benefits of doing that?
TensorFlow has a very rich library of functions, some of which are not available in Keras, so this way you can still use all the goodies from TensorFlow!
What are the drawbacks?
The moment you start using TensorFlow directly, your code stops working in Theano. It's not a big problem if you have made the choice of always using a single Keras backend.
Example
Step11: Let's try to break down this equation.
First, notice that $h_{T_a}^{(a)}$ and $h_{T_b}^{(b)}$ represent the sentences transformed into their vector representations.
The exp() is simply $e^x$ and if you plot it, it looks like this
Step12: If we zoom in $x$ axis to $<-1, 1>$, we can see that for $x$ == 0 the value is 1
Step13: Now we can flip it horizontaly by $f(x)$ axis by just adding minus sign to $x$
Step14: The higher value of $x$, the closer $f(x)$ value is to 0!
Now we just need to find a way to combine 2 vectors so that
Step15: Writing custom layer
When should you use Lambda, and when do you need to write a custom layer?
Quote from
Step16: And while our draw_model_graph helper function shows this
Step17: It's actually closer to this
Step18: The same graph represented as screenshot from Tensorboard (with hidden Dropout layers)
First, the "main" model
Step19: and the same model but with accuracy
(Compare the elements of this graph with our calc_accuracy method used by the Lambda layer)
Step20: How to execute our new accuracy operation?
Simply use K.function().
Keras documentation says
Step21: In order to make it work, we should tell Keras/TensorFlow in which phase - train or test phase - we want to execute our K.function()
Why? Because some layers have to know in which phase they are, e.g. Dropout() should be skipped in test phase.
Here is our code extended with additional input (in TensorFlow called Placeholder) with phase K.learning_phase() set to 0 which means it's a test phase.
Step22: You can compare this output with original model.evaluate() output to see they are similar.
In other words, K.function() (similarly to TensorFlow's session.run() ) allows you to run custom operations on sub-parts of your graph.
Below example shows how to get output from our model's second Dense() layer | Python Code:
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
from keras.layers.embeddings import Embedding
from keras.layers.core import Dense, Dropout, Lambda
from keras.layers import Input, GlobalAveragePooling1D
from keras.layers.convolutional import Conv1D
from keras.layers.merge import concatenate
from keras import backend as K
from keras.layers.normalization import BatchNormalization
from keras.models import Model
from keras.optimizers import Adam
# Keras model graph
from IPython.display import SVG, Image
from keras.utils.vis_utils import model_to_dot
# ... and drawing helper
def draw_model_graph(model):
return SVG(model_to_dot(model, show_shapes=True).create(prog='dot', format='svg'))
Explanation: Presentation
This notebook is a modified version (with added comments) of a presentation given by Bart Grasza on 2017.07.15
Goals of the notebook
This notebook has multiple purposes:
what is and how to use Keras Lambda() layer
how to use it to implement e.g. math operations from ML scientific papers
how Keras is using underlying backends
what is the difference between Keras model and Theano/TensorFlow graph
The bigger idea behind all of this is:
TensorFlow/Theano is powerful but much more complex than Keras. You can easily end up writing 10 times more code. You should stick to Keras, but what if at some point you feel limited by what can be achieved using Keras? What if instead of switching to pure TensorFlow/Theano, you could use them from within Keras? I will try to explain this concept with theory and code examples.
Note: This notebook focuses on TensorFlow being used as Keras backend.
End of explanation
a = [1, 2, 3, 4]
list(map(lambda x: x*2, a))
Explanation: Python lambda - anonymous function
(you can skip this section if you already know Python's lambda functions)
Before we dive into the Keras Lambda layer, you should get familiar with Python's lambda function. You can read more about it e.g. here: http://www.secnetix.de/olli/Python/lambda_functions.hawk
Below I show how lambda works together with Python's map() method, which is very similar to how it is used in the Keras Lambda layer
End of explanation
a = [[1, 1],
[2, 2],
[3, 3],
[4, 4]]
list(map(lambda x: x[0]*x[1], a))
Explanation: If the argument passed is a list of lists, you can refer to its elements in the standard fashion:
End of explanation
a = [1, 2, 3, 4]
# Syntax:
# map(lambda x: x*2, a)
# is similar to:
def times_two(x):
return x*2
list(map(times_two, a))
Explanation: A lambda function can be "de-anonymized", i.e. rewritten as an equivalent named function
End of explanation
X1 = np.random.rand(1000, 50)
X2 = np.random.rand(1000, 50)
Y = np.random.rand(1000,)
input = Input(shape=(50,))
dense_layer = Dense(30)(input)
##################
merge_layer = Lambda(lambda x: x*2)(dense_layer)
##################
output = Dense(1)(merge_layer)
model = Model(inputs=input, outputs=output)
model.compile(loss='mse', optimizer='adam')
model.fit(x=X1, y=Y)
draw_model_graph(model)
Explanation: Keras Lambda layer - custom operations on model
Below is a simple Keras model (fed with randomly generated data) that shows how you can perform custom operations on the data coming from the previous layer. Here we take the data from dense_layer and multiply it by 2
End of explanation
Image(filename='images/cos_distance.png')
Explanation: Model using custom operation (cosine similarity) on "output"
All the examples in the Keras online documentation and in the http://fast.ai course show how to build a model whose output is a fully connected (Dense()) layer.
Here I show an example of a model which, given 2 vectors as input, produces as output the cosine similarity score between these vectors. Cosine similarity takes values in the range <-1, 1>, where a value close to 1 means the 2 vectors are similar/almost identical, and a value close to -1 means they are very different.
The exact equation is:
End of explanation
input_1 = Input(shape=(50,))
d1 = Dense(30)(input_1)
input_2 = Input(shape=(50,))
d2 = Dense(30)(input_2)
def merge_layer(layer_input):
x1_l2_norm = K.l2_normalize(layer_input[0], axis=1)
print(x1_l2_norm)
print(type(x1_l2_norm))
x2_l2_norm = K.l2_normalize(layer_input[1], axis=1)
mat_mul = K.dot(x1_l2_norm,
K.transpose(x2_l2_norm))
return K.sum(mat_mul, axis=1)
output = Lambda(merge_layer, output_shape=(1,))([d1, d2])
model = Model(inputs=[input_1, input_2], outputs=output)
model.compile(loss='mse', optimizer='adam')
model.fit(x=[X1, X2], y=Y)
draw_model_graph(model)
Explanation: I skip the details of the Keras implementation of the above equation and instead focus on the Lambda layer.
Notes:
- our output layer isn't Dense() but Lambda().
- the print() statement shows that the example Tensor produced inside the Lambda calculation is of type tensorflow.python.framework.ops.Tensor
- the Keras documentation for the Lambda layer says that with TensorFlow you don't need to specify output_shape, but that is not always true. In the case of the model below, which accepts 2 inputs, Keras couldn't figure out the correct output shape on its own.
End of explanation
# copied from Keras, file: tensorflow_backend.py
def l2_normalize(x, axis):
Normalizes a tensor wrt the L2 norm alongside the specified axis.
# Arguments
x: Tensor or variable.
axis: axis along which to perform normalization.
# Returns
A tensor.
if axis < 0:
axis %= len(x.get_shape())
return tf.nn.l2_normalize(x, dim=axis)
Explanation: Let's have a look at the backend implementation.
If you git clone the Keras repository (https://github.com/fchollet/keras), open the file keras/backend/tensorflow_backend.py
and look for the l2_normalize(x, axis) method, you will see this:
End of explanation
input_1 = Input(shape=(50,))
d1 = Dense(30)(input_1)
input_2 = Input(shape=(50,))
d2 = Dense(30)(input_2)
def merge_layer(layer_input):
x1_l2_norm = tf.nn.l2_normalize(layer_input[0], dim=1) # notice axis -> dim
x2_l2_norm = tf.nn.l2_normalize(layer_input[1], dim=1)
mat_mul = tf.matmul(x1_l2_norm,
tf.transpose(x2_l2_norm))
return tf.reduce_sum(mat_mul, axis=1)
output = Lambda(merge_layer, output_shape=(1,))([d1, d2])
model = Model(inputs=[input_1, input_2], outputs=output)
model.compile(loss='mse', optimizer='adam')
model.fit(x=[X1, X2], y=Y)
Explanation: As you can see, the Keras l2_normalize is a very thin layer on top of the TensorFlow tf.nn.l2_normalize function.
What would happen if we just replaced all the Keras functions with their TensorFlow equivalents?
Below you can see exactly the same model as above, but using TensorFlow methods directly
End of explanation
Image(filename='images/vec_sim.png', width=400)
Explanation: It still works!
What are the benefits of doing that?
TensorFlow has a very rich library of functions, some of which are not available in Keras, so this way you can still use all the goodies from TensorFlow!
What are the drawbacks?
The moment you start using TensorFlow directly, your code stops working in Theano. It's not a big problem if you have made the choice of always using a single Keras backend.
Example: how to use what we've learned so far to implement a more complex NN
In this section we will briefly divert from the main topic and show how you can implement part of a real scientific paper.
The paper we're going to use is: http://www.mit.edu/~jonasm/info/MuellerThyagarajan_AAAI16.pdf
The goal is to train a NN to measure the similarity between 2 sentences. You can read the paper for details on how the sentences are transformed into vectors; here we only focus on the last part, where each sentence is already in vector format.
The idea is that the closer the value for the compared sentences is to 1, the more similar the sentences are, while a value close to 0 means the sentences are different.
In the PDF, If you scroll to the bottom of page 3, you can see following equation:
End of explanation
x = np.linspace(-5, 5, 50)
y = np.exp(x)
plt.figure(figsize=(10, 5))
plt.plot(x, y)
Explanation: Let's try to break down this equation.
First, notice that $h_{T_a}^{(a)}$ and $h_{T_b}^{(b)}$ represent the sentences transformed into their vector representations.
The exp() is simply $e^x$ and if you plot it, it looks like this:
End of explanation
x = np.linspace(-1, 1, 50)
y = np.exp(x)
plt.figure(figsize=(10,10))
plt.plot(x, y)
Explanation: If we zoom in on the $x$ axis to $<-1, 1>$, we can see that for $x$ == 0 the value is 1
End of explanation
x = np.linspace(0, 6, 50)
y = np.exp(-x)
plt.figure(figsize=(10,5))
plt.plot(x, y)
Explanation: Now we can flip it horizontally about the $f(x)$ axis by just adding a minus sign to $x$: $e^{-x}$
The plot changes to:
End of explanation
input_1 = Input(shape=(50,))
d1 = Dense(30)(input_1)
input_2 = Input(shape=(50,))
d2 = Dense(30)(input_2)
def merge_layer(layer_input):
v1 = layer_input[0]
v2 = layer_input[1]
# L1 distance operations
sub_op = tf.subtract(v1, v2)
print("tensor: %s" % sub_op)
print("sub_op.shape: %s" % sub_op.shape)
abs_op = tf.abs(sub_op)
print("abs_op.shape: %s" % abs_op.shape)
sum_op = tf.reduce_sum(abs_op, axis=-1)
print("sum_op.shape: %s" % sum_op.shape)
# ... followed by exp(-x) part
reverse_op = -sum_op
out = tf.exp(reverse_op)
return out
# The same but in single line
# pred = tf.exp(-tf.reduce_sum(tf.abs(tf.subtract(v1, v2)), axis=-1))
output = Lambda(merge_layer, output_shape=(1,))([d1, d2])
model = Model(inputs=[input_1, input_2], outputs=output)
model.compile(loss='mse', optimizer='adam')
m = model.fit(x=[X1, X2], y=Y)
Explanation: The higher the value of $x$, the closer $f(x)$ is to 0!
Now we just need to find a way to combine 2 vectors so that:
- when they are similar, they give an $x$ close to 0, and
- the more different they are, the higher the value of $x$.
To achieve that, the authors used the L1 distance (https://en.wikipedia.org/wiki/Taxicab_geometry), which is basically the sum of absolute (method tf.abs()) differences between the values in each dimension. It is represented by $\lVert \mathbf{p - q} \rVert_1$ in our original equation.
All of the above is implemented by the code in merge_layer(layer_input):
End of explanation
from keras.datasets import mnist
import keras
num_classes = 10
# the data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(60000, 784)
x_test = x_test.reshape(10000, 784)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
print(y_train.shape)
input = Input(shape=(784,))
y_input = Input(shape=(10,)) # <--
dense1 = Dense(512, activation='relu')(input)
x = Dropout(0.2)(dense1)
dense2 = Dense(512, activation='relu')(x)
x = Dropout(0.2)(dense2)
output = Dense(10, activation='softmax')(x)
def calc_accuracy(x):
y = tf.argmax(x[0], axis=-1)
predictions = tf.argmax(x[1], axis=-1)
comparison = tf.cast(tf.equal(predictions, y), dtype=tf.float32)
return tf.reduce_sum(comparison) * 1. / len(x_test)
accuracy = Lambda(calc_accuracy)([y_input, output]) # <--
model = Model(inputs=input, outputs=output)
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
history = model.fit(x_train, y_train, epochs=2)
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
Explanation: Writing a custom layer
When should you use Lambda, and when do you need to write a custom layer?
Quote from:
https://keras.io/layers/writing-your-own-keras-layers/ :
"For simple, stateless custom operations, you are probably better off using layers.core.Lambda layers. But for any custom operation that has trainable weights, you should implement your own layer."
-----------------------
Full list of Keras backend functions with types
List below represents all functions from Keras repository tensorflow_backend.py
I'm showing them here so that you could quickly see how many useful methods it has, notice many similar methods implemented in Numpy.
INTERNAL UTILS
get_uid(prefix='')
reset_uids()
clear_session()
manual_variable_initialization(value)
learning_phase()
set_learning_phase(value)
get_session()
set_session(session)
VARIABLE MANIPULATION
_convert_string_dtype(dtype)
_to_tensor(x, dtype)
is_sparse(tensor)
to_dense(tensor)
variable(value, dtype=None, name=None)
_initialize_variables()
constant(value, dtype=None, shape=None, name=None)
is_keras_tensor(x)
placeholder(shape=None, ndim=None, dtype=None, sparse=False, name=None)
shape(x)
int_shape(x)
ndim(x)
dtype(x)
eval(x)
zeros(shape, dtype=None, name=None)
ones(shape, dtype=None, name=None)
eye(size, dtype=None, name=None)
zeros_like(x, dtype=None, name=None)
ones_like(x, dtype=None, name=None)
identity(x)
random_uniform_variable(shape, low, high, dtype=None, name=None, seed=None)
random_normal_variable(shape, mean, scale, dtype=None, name=None, seed=None)
count_params(x)
cast(x, dtype)
UPDATES OPS
update(x, new_x)
update_add(x, increment)
update_sub(x, decrement)
moving_average_update(x, value, momentum)
LINEAR ALGEBRA
dot(x, y)
batch_dot(x, y, axes=None)
transpose(x)
gather(reference, indices)
ELEMENT-WISE OPERATIONS
_normalize_axis(axis, ndim)
max(x, axis=None, keepdims=False)
min(x, axis=None, keepdims=False)
sum(x, axis=None, keepdims=False)
prod(x, axis=None, keepdims=False)
cumsum(x, axis=0)
cumprod(x, axis=0)
var(x, axis=None, keepdims=False)
std(x, axis=None, keepdims=False)
mean(x, axis=None, keepdims=False)
any(x, axis=None, keepdims=False)
all(x, axis=None, keepdims=False)
argmax(x, axis=-1)
argmin(x, axis=-1)
square(x)
abs(x)
sqrt(x)
exp(x)
log(x)
logsumexp(x, axis=None, keepdims=False)
round(x)
sign(x)
pow(x, a)
clip(x, min_value, max_value)
equal(x, y)
not_equal(x, y)
greater(x, y)
greater_equal(x, y)
less(x, y)
less_equal(x, y)
maximum(x, y)
minimum(x, y)
sin(x)
cos(x)
normalize_batch_in_training(x, gamma, beta,
batch_normalization(x, mean, var, beta, gamma, epsilon=1e-3)
SHAPE OPERATIONS
concatenate(tensors, axis=-1)
reshape(x, shape)
permute_dimensions(x, pattern)
resize_images(x, height_factor, width_factor, data_format)
resize_volumes(x, depth_factor, height_factor, width_factor, data_format)
repeat_elements(x, rep, axis)
repeat(x, n)
arange(start, stop=None, step=1, dtype='int32')
tile(x, n)
flatten(x)
batch_flatten(x)
expand_dims(x, axis=-1)
squeeze(x, axis)
temporal_padding(x, padding=(1, 1))
spatial_2d_padding(x, padding=((1, 1), (1, 1)), data_format=None)
spatial_3d_padding(x, padding=((1, 1), (1, 1), (1, 1)), data_format=None)
stack(x, axis=0)
one_hot(indices, num_classes)
reverse(x, axes)
VALUE MANIPULATION
get_value(x)
batch_get_value(ops)
set_value(x, value)
batch_set_value(tuples)
get_variable_shape(x)
print_tensor(x, message='')
GRAPH MANIPULATION
(class) class Function(object)
function(inputs, outputs, updates=None, **kwargs)
gradients(loss, variables)
stop_gradient(variables)
CONTROL FLOW
rnn(step_function, inputs, initial_states,
_step(time, output_ta_t, *states)
_step(time, output_ta_t, *states)
switch(condition, then_expression, else_expression)
then_expression_fn()
else_expression_fn()
in_train_phase(x, alt, training=None)
in_test_phase(x, alt, training=None)
NN OPERATIONS
relu(x, alpha=0., max_value=None)
elu(x, alpha=1.)
softmax(x)
softplus(x)
softsign(x)
categorical_crossentropy(output, target, from_logits=False)
sparse_categorical_crossentropy(output, target, from_logits=False)
binary_crossentropy(output, target, from_logits=False)
sigmoid(x)
hard_sigmoid(x)
tanh(x)
dropout(x, level, noise_shape=None, seed=None)
l2_normalize(x, axis)
in_top_k(predictions, targets, k)
CONVOLUTIONS
_preprocess_deconv3d_output_shape(x, shape, data_format)
_preprocess_deconv_output_shape(x, shape, data_format)
_preprocess_conv2d_input(x, data_format)
_preprocess_conv3d_input(x, data_format)
_preprocess_conv2d_kernel(kernel, data_format)
_preprocess_conv3d_kernel(kernel, data_format)
_preprocess_padding(padding)
_postprocess_conv2d_output(x, data_format)
_postprocess_conv3d_output(x, data_format)
conv1d(x, kernel, strides=1, padding='valid', data_format=None, dilation_rate=1)
conv2d(x, kernel, strides=(1, 1), padding='valid', data_format=None, dilation_rate=1)
conv2d_transpose(x, kernel, output_shape, strides=(1, 1), padding='valid', data_format=None)
separable_conv2d(x, depthwise_kernel, pointwise_kernel, strides=(1, 1), padding='valid', data_format=None, dilation_rate=(1, 1))
depthwise_conv2d(x, depthwise_kernel, strides=(1, 1), padding='valid', data_format=None, dilation_rate=(1, 1))
conv3d(x, kernel, strides=(1, 1, 1), padding='valid', data_format=None, dilation_rate=(1, 1, 1))
conv3d_transpose(x, kernel, output_shape, strides=(1, 1, 1), padding='valid', data_format=None)
pool2d(x, pool_size, strides=(1, 1), padding='valid', data_format=None, pool_mode='max')
pool3d(x, pool_size, strides=(1, 1, 1), padding='valid', data_format=None, pool_mode='max')
bias_add(x, bias, data_format=None)
RANDOMNESS
random_normal(shape, mean=0.0, stddev=1.0, dtype=None, seed=None)
random_uniform(shape, minval=0.0, maxval=1.0, dtype=None, seed=None)
random_binomial(shape, p=0.0, dtype=None, seed=None)
truncated_normal(shape, mean=0.0, stddev=1.0, dtype=None, seed=None)
CTC
Tensorflow has a native implemenation, but it uses sparse tensors
and therefore requires a wrapper for Keras. The functions below convert
dense to sparse tensors and also wraps up the beam search code that is
in tensorflow's CTC implementation
ctc_label_dense_to_sparse(labels, label_lengths)
range_less_than(_, current_input)
ctc_batch_cost(y_true, y_pred, input_length, label_length)
ctc_decode(y_pred, input_length, greedy=True, beam_width=100, top_paths=1)
HIGH ORDER FUNCTIONS
map_fn(fn, elems, name=None, dtype=None)
foldl(fn, elems, initializer=None, name=None)
foldr(fn, elems, initializer=None, name=None)
local_conv1d(inputs, kernel, kernel_size, strides, data_format=None)
local_conv2d(inputs, kernel, kernel_size, strides, output_shape, data_format=None)
-----------------------
Computational graph
So far all our Keras models have been based on the Model() class. But with our new knowledge, I encourage you to stop thinking of your artificial neural network implemented in Keras as only a Model() that is a stack of layers, one after another.
To fully understand what I mean by that (and the code below), first read the introduction to TensorFlow: https://www.tensorflow.org/get_started/get_started .
Now, armed with this new knowledge, it will be easier to understand why and how the code below works.
The original code (MNIST from the Keras repository) has been extended to show how you can write Keras code without being restricted to the Keras Model() class alone, and instead build the graph in a way similar to what you would do in pure TensorFlow.
Why would you do that? Because Keras is much simpler and less verbose than TensorFlow, it allows you to move faster. Whenever you feel restricted by the standard Model() approach, you can get away with extending your graph in a way similar to what I present below.
In this example we extend the graph to calculate accuracy (I'm aware that accuracy is built into the Keras model.fit() and model.evaluate() methods, but the purpose of this example is to show how you can add more operations to the graph).
Having said that, the original MNIST graph has been extended with:
another Input: y_input (in TensorFlow called a Placeholder) to feed in the true Y values you will be checking your model's output against
a Lambda layer accuracy to perform the actual calculation
End of explanation
draw_model_graph(model)
Explanation: And while our draw_model_graph helper function shows this
End of explanation
Image(filename='images/graph_with_acc.png', width=500)
Explanation: It's actually closer to this:
(blue color represents our Model())
End of explanation
Image(filename='images/tensorboard_graph_1.png', height=500)
Explanation: The same graph represented as a screenshot from TensorBoard (with the Dropout layers hidden)
First, the "main" model:
End of explanation
Image(filename='images/tensorboard_graph_2.png', height=500)
Explanation: and the same model but with accuracy
(Compare the elements of this graph with our calc_accuracy method used by the Lambda layer)
End of explanation
# Note: This code will return error, read comment below to understand why.
accuracy_fn = K.function(inputs=[input, y_input],
outputs=[accuracy])
accuracy_fn([x_test, y_test])
Explanation: How to execute our new accuracy operation?
Simply use K.function().
Keras documentation says:
def function(inputs, outputs, updates=None, **kwargs):
Instantiates a Keras function.
# Arguments
inputs: List of placeholder tensors.
outputs: List of output tensors.
updates: List of update ops.
**kwargs: Passed to `tf.Session.run`.
# Returns
Output values as Numpy arrays.
If you check the original implementation, you will see that it's a TensorFlow wrapper around session.run(), with your inputs being used as the feed_dict argument:
session.run([outputs], feed_dict=inputs_dictionary)
Refer to the introduction to TensorFlow if you don't understand what I'm talking about.
So, to calculate accuracy using K.function() on our test set, we need to do the following:
End of explanation
accuracy_fn = K.function(inputs=[input, y_input, K.learning_phase()],
outputs=[accuracy])
accuracy_fn([x_test, y_test, 0])
Explanation: In order to make it work, we should tell Keras/TensorFlow in which phase - train or test phase - we want to execute our K.function()
Why? Because some layers have to know in which phase they are, e.g. Dropout() should be skipped in test phase.
Here is our code extended with additional input (in TensorFlow called Placeholder) with phase K.learning_phase() set to 0 which means it's a test phase.
End of explanation
# input_values is a single 28x28 flattened "image" with random values (784 = 28*28)
input_values = np.random.rand(1, 784)
second_layer = K.function(inputs=[input],
outputs=[dense1])
dense1_output = second_layer([input_values, 0])[0]
print("Our dense1 layer output shape is")
dense1_output.shape
Explanation: You can compare this output with the original model.evaluate() output to see that they are similar.
In other words, K.function() (similarly to TensorFlow's session.run()) allows you to run custom operations on sub-parts of your graph.
The example below shows how to get the output of our model's first Dense() layer (dense1)
End of explanation |
5,543 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Image classification from scratch
Author
Step1: Load the data
Step2: Now we have a PetImages folder which contain two subfolders, Cat and Dog. Each
subfolder contains image files for each category.
Step3: Filter out corrupted images
When working with lots of real-world image data, corrupted images are a common
occurence. Let's filter out badly-encoded images that do not feature the string "JFIF"
in their header.
Step4: Generate a Dataset
Step5: Visualize the data
Here are the first 9 images in the training dataset. As you can see, label 1 is "dog"
and label 0 is "cat".
Step6: Using image data augmentation
When you don't have a large image dataset, it's a good practice to artificially
introduce sample diversity by applying random yet realistic transformations to the
training images, such as random horizontal flipping or small random rotations. This
helps expose the model to different aspects of the training data while slowing down
overfitting.
Step7: Let's visualize what the augmented samples look like, by applying data_augmentation
repeatedly to the first image in the dataset
Step8: Standardizing the data
Our images are already in a standard size (180x180), as they are being yielded as
contiguous float32 batches by our dataset. However, their RGB channel values are in
the [0, 255] range. This is not ideal for a neural network;
in general you should seek to make your input values small. Here, we will
standardize values to be in the [0, 1] by using a Rescaling layer at the start of
our model.
Two options to preprocess the data
There are two ways you could be using the data_augmentation preprocessor
Step9: Build a model
We'll build a small version of the Xception network. We haven't particularly tried to
optimize the architecture; if you want to do a systematic search for the best model
configuration, consider using
KerasTuner.
Note that
Step10: Train the model
Step11: We get to ~96% validation accuracy after training for 50 epochs on the full dataset.
Run inference on new data
Note that data augmentation and dropout are inactive at inference time. | Python Code:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
Explanation: Image classification from scratch
Author: fchollet<br>
Date created: 2020/04/27<br>
Last modified: 2020/04/28<br>
Description: Training an image classifier from scratch on the Kaggle Cats vs Dogs dataset.
Introduction
This example shows how to do image classification from scratch, starting from JPEG
image files on disk, without leveraging pre-trained weights or a pre-made Keras
Application model. We demonstrate the workflow on the Kaggle Cats vs Dogs binary
classification dataset.
We use the image_dataset_from_directory utility to generate the datasets, and
we use Keras image preprocessing layers for image standardization and data augmentation.
Setup
End of explanation
!curl -O https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip
!unzip -q kagglecatsanddogs_3367a.zip
!ls
Explanation: Load the data: the Cats vs Dogs dataset
Raw data download
First, let's download the 786M ZIP archive of the raw data:
End of explanation
!ls PetImages
Explanation: Now we have a PetImages folder which contain two subfolders, Cat and Dog. Each
subfolder contains image files for each category.
End of explanation
import os
num_skipped = 0
for folder_name in ("Cat", "Dog"):
folder_path = os.path.join("PetImages", folder_name)
for fname in os.listdir(folder_path):
fpath = os.path.join(folder_path, fname)
try:
fobj = open(fpath, "rb")
is_jfif = tf.compat.as_bytes("JFIF") in fobj.peek(10)
finally:
fobj.close()
if not is_jfif:
num_skipped += 1
# Delete corrupted image
os.remove(fpath)
print("Deleted %d images" % num_skipped)
Explanation: Filter out corrupted images
When working with lots of real-world image data, corrupted images are a common
occurrence. Let's filter out badly-encoded images that do not feature the string "JFIF"
in their header.
End of explanation
image_size = (180, 180)
batch_size = 32
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
"PetImages",
validation_split=0.2,
subset="training",
seed=1337,
image_size=image_size,
batch_size=batch_size,
)
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
"PetImages",
validation_split=0.2,
subset="validation",
seed=1337,
image_size=image_size,
batch_size=batch_size,
)
Explanation: Generate a Dataset
End of explanation
import matplotlib.pyplot as plt
plt.figure(figsize=(10, 10))
for images, labels in train_ds.take(1):
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(images[i].numpy().astype("uint8"))
plt.title(int(labels[i]))
plt.axis("off")
Explanation: Visualize the data
Here are the first 9 images in the training dataset. As you can see, label 1 is "dog"
and label 0 is "cat".
End of explanation
data_augmentation = keras.Sequential(
[
layers.RandomFlip("horizontal"),
layers.RandomRotation(0.1),
]
)
Explanation: Using image data augmentation
When you don't have a large image dataset, it's a good practice to artificially
introduce sample diversity by applying random yet realistic transformations to the
training images, such as random horizontal flipping or small random rotations. This
helps expose the model to different aspects of the training data while slowing down
overfitting.
End of explanation
plt.figure(figsize=(10, 10))
for images, _ in train_ds.take(1):
for i in range(9):
augmented_images = data_augmentation(images)
ax = plt.subplot(3, 3, i + 1)
plt.imshow(augmented_images[0].numpy().astype("uint8"))
plt.axis("off")
Explanation: Let's visualize what the augmented samples look like, by applying data_augmentation
repeatedly to the first image in the dataset:
End of explanation
train_ds = train_ds.prefetch(buffer_size=32)
val_ds = val_ds.prefetch(buffer_size=32)
Explanation: Standardizing the data
Our images are already in a standard size (180x180), as they are being yielded as
contiguous float32 batches by our dataset. However, their RGB channel values are in
the [0, 255] range. This is not ideal for a neural network;
in general you should seek to make your input values small. Here, we will
standardize values to be in the [0, 1] by using a Rescaling layer at the start of
our model.
Two options to preprocess the data
There are two ways you could be using the data_augmentation preprocessor:
Option 1: Make it part of the model, like this:
python
inputs = keras.Input(shape=input_shape)
x = data_augmentation(inputs)
x = layers.Rescaling(1./255)(x)
... # Rest of the model
With this option, your data augmentation will happen on device, synchronously
with the rest of the model execution, meaning that it will benefit from GPU
acceleration.
Note that data augmentation is inactive at test time, so the input samples will only be
augmented during fit(), not when calling evaluate() or predict().
If you're training on GPU, this is the better option.
Option 2: apply it to the dataset, so as to obtain a dataset that yields batches of
augmented images, like this:
python
augmented_train_ds = train_ds.map(
lambda x, y: (data_augmentation(x, training=True), y))
With this option, your data augmentation will happen on CPU, asynchronously, and will
be buffered before going into the model.
If you're training on CPU, this is the better option, since it makes data augmentation
asynchronous and non-blocking.
In our case, we'll go with the first option.
Configure the dataset for performance
Let's make sure to use buffered prefetching so we can yield data from disk without
having I/O becoming blocking:
End of explanation
def make_model(input_shape, num_classes):
inputs = keras.Input(shape=input_shape)
# Image augmentation block
x = data_augmentation(inputs)
# Entry block
x = layers.Rescaling(1.0 / 255)(x)
x = layers.Conv2D(32, 3, strides=2, padding="same")(x)
x = layers.BatchNormalization()(x)
x = layers.Activation("relu")(x)
x = layers.Conv2D(64, 3, padding="same")(x)
x = layers.BatchNormalization()(x)
x = layers.Activation("relu")(x)
previous_block_activation = x # Set aside residual
for size in [128, 256, 512, 728]:
x = layers.Activation("relu")(x)
x = layers.SeparableConv2D(size, 3, padding="same")(x)
x = layers.BatchNormalization()(x)
x = layers.Activation("relu")(x)
x = layers.SeparableConv2D(size, 3, padding="same")(x)
x = layers.BatchNormalization()(x)
x = layers.MaxPooling2D(3, strides=2, padding="same")(x)
# Project residual
residual = layers.Conv2D(size, 1, strides=2, padding="same")(
previous_block_activation
)
x = layers.add([x, residual]) # Add back residual
previous_block_activation = x # Set aside next residual
x = layers.SeparableConv2D(1024, 3, padding="same")(x)
x = layers.BatchNormalization()(x)
x = layers.Activation("relu")(x)
x = layers.GlobalAveragePooling2D()(x)
if num_classes == 2:
activation = "sigmoid"
units = 1
else:
activation = "softmax"
units = num_classes
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(units, activation=activation)(x)
return keras.Model(inputs, outputs)
model = make_model(input_shape=image_size + (3,), num_classes=2)
keras.utils.plot_model(model, show_shapes=True)
Explanation: Build a model
We'll build a small version of the Xception network. We haven't particularly tried to
optimize the architecture; if you want to do a systematic search for the best model
configuration, consider using
KerasTuner.
Note that:
We start the model with the data_augmentation preprocessor, followed by a
Rescaling layer.
We include a Dropout layer before the final classification layer.
End of explanation
epochs = 50
callbacks = [
keras.callbacks.ModelCheckpoint("save_at_{epoch}.h5"),
]
model.compile(
optimizer=keras.optimizers.Adam(1e-3),
loss="binary_crossentropy",
metrics=["accuracy"],
)
model.fit(
train_ds, epochs=epochs, callbacks=callbacks, validation_data=val_ds,
)
Explanation: Train the model
End of explanation
img = keras.preprocessing.image.load_img(
"PetImages/Cat/6779.jpg", target_size=image_size
)
img_array = keras.preprocessing.image.img_to_array(img)
img_array = tf.expand_dims(img_array, 0) # Create batch axis
predictions = model.predict(img_array)
score = predictions[0]
print(
"This image is %.2f percent cat and %.2f percent dog."
% (100 * (1 - score), 100 * score)
)
Explanation: We get to ~96% validation accuracy after training for 50 epochs on the full dataset.
Run inference on new data
Note that data augmentation and dropout are inactive at inference time.
End of explanation |
5,544 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Q2
In this question, we'll again look at classification, but with a more sophisticated algorithm. We'll also use the Iris dataset again.
Part A
In this question, you'll use a powerful classification technique known as Support Vector Machines, or SVMs.
SVMs work by finding a line (or, in high dimensions, a hyperplane) that best separates data points that belong to different classes. SVMs are flexible enough to range from this fairly straightforward version, all the way to extremely complex incarnations that use nonlinear kernels to project the original data into extremely high-dimensional space, where (in theory) the data are easier to separate.
SVMs can also enforce a penalty which "allows" for a certain amount of classification error, thereby making the decision boundary more fluid. This penalty term can be increased to make the decision boundary less permeable, or decreased to allow for more errors.
In this part, you'll write code to train a linear SVM. In your code
Step1: Part B
In this part, you'll write an accompanying function to test the classification accuracy of your trained model.
In your code
Step2: Part C
In this part, you'll test the functions you just wrote.
The following code contains a cross-validation loop | Python Code:
import sklearn.svm as svm
import numpy as np
np.random.seed(13775)
X = np.random.random((20, 2))
y = np.random.randint(2, size = 20)
m1 = train_svm(X, y, 100.0)
assert m1.C == 100.0
np.testing.assert_allclose(m1.coef_, np.array([[ 0.392707, -0.563687]]), rtol=1e-6)
import numpy as np
np.random.seed(598497)
X = np.random.random((20, 2))
y = np.random.randint(2, size = 20)
m2 = train_svm(X, y, 10000.0)
assert m2.C == 10000.0
np.testing.assert_allclose(m2.coef_, np.array([[ -0.345056, -0.6118 ]]), rtol=1e-6)
Explanation: Q2
In this question, we'll again look at classification, but with a more sophisticated algorithm. We'll also use the Iris dataset again.
Part A
In this question, you'll use a powerful classification technique known as Support Vector Machines, or SVMs.
SVMs work by finding a line (or, in high dimensions, a hyperplane) that best separates data points that belong to different classes. SVMs are flexible enough to range from this fairly straightforward version, all the way to extremely complex incarnations that use nonlinear kernels to project the original data into extremely high-dimensional space, where (in theory) the data are easier to separate.
SVMs can also enforce a penalty which "allows" for a certain amount of classification error, thereby making the decision boundary more fluid. This penalty term can be increased to make the decision boundary less permeable, or decreased to allow for more errors.
In this part, you'll write code to train a linear SVM. In your code:
Define a function train_svm().
train_svm should take 3 arguments: a data matrix X, a target array y, and a floating-point penalty strength term C.
It should return a trained SVM model.
Your function should 1) create a linear SVM model, initialized with the correct penalty term, and 2) train (or fit) the model with a dataset and its labels. Look at the scikit-learn documentation for Linear SVC.
(The "C" in "SVC" means "Support Vector Classifier, as scikit-learn also has SVM implementations that can be used for regression)
End of explanation
np.random.seed(58982)
X = np.random.random((100, 2))
y = np.random.randint(2, size = 100)
m2 = train_svm(X[:75], y[:75], 100.0)
acc2 = test_svm(X[75:], y[75:], m2)
np.testing.assert_allclose(acc2, 0.36, rtol = 1e-4)
np.random.seed(99766)
X = np.random.random((20, 2))
y = np.random.randint(2, size = 20)
m2 = train_svm(X[:18], y[:18], 10.0)
acc2 = test_svm(X[18:], y[18:], m2)
np.testing.assert_allclose(acc2, 0.5)
Explanation: Part B
In this part, you'll write an accompanying function to test the classification accuracy of your trained model.
In your code:
Define a function test_svm().
test_svm should take 3 arguments: a data matrix X, a target array y, and a trained SVM model. It should return a prediction accuracy between 0 (completely incorrect) and 1 (completely correct).
Your function can use the score() method available on the SVM model. Look at the scikit-learn documentation for LinearSVC.
End of explanation
import numpy as np
import sklearn.datasets as datasets
import sklearn.cross_validation as cv
# Set up the iris data.
iris = datasets.load_iris()
X = iris.data[:, :2]
y = iris.target
# Some variables you're welcome to change, if you want.
C = 1.0 # SVM penalty term
folds = 5 # The "k" in "k-fold cross-validation"
# Set up the cross-validation loop.
kfold = cv.KFold(X.shape[0], n_folds = folds, shuffle = True, random_state = 10)
for train, test in kfold:
# YOUR CODE HERE.
### BEGIN SOLUTION
### END SOLUTION
Explanation: Part C
In this part, you'll test the functions you just wrote.
The following code contains a cross-validation loop: it uses built-in scikit-learn tools to automate the task of implementing robust k-fold cross-validation. Incorporate your code into the core of the loop to extract sub-portions of the data matrix X and corresponding sub-portions of the target array y for training and testing.
In the following code:
Implement training and testing of the SVM in each cross-validation loop. The point is that, in each loop, the training and testing sets are different than the previous loop.
Keep track of the average classification accuracy. Print it at the end.
End of explanation |
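For reference, one possible way to fill in the core of the loop (a sketch that assumes the train_svm and test_svm functions from Parts A and B; it is not the only valid solution):
accuracies = []
for train, test in kfold:
    model = train_svm(X[train], y[train], C)
    accuracies.append(test_svm(X[test], y[test], model))
print("Average classification accuracy: {:.3f}".format(np.mean(accuracies)))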
5,545 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center> Monty-Hall</center>
The problem known as Monty Hall or Let's Make a Deal is a probability problem whose solution
seems counter-intuitive. It is related to the TV shows with these names.
Statement of the problem
Step1: The solution someone unfamiliar with probability theory would give
Step2: For example, if the placement of the prizes is A=['c', 'm', 'c'] then rnd.randint(0,2) randomly generates an index $i$ of
the elements of the list $A$.
A.pop(i) returns
the element A[i] and removes it from the list (see
here for list methods).
Illustration by running the corresponding lines of code
Step3: A.index('c') returns the index of the first element of the remaining list that is equal to 'c', and thus the host
picks a door behind which there is a goat
Step4: Before experimenting with the game, let us summarize the implementation principle
Step5: Whichever variant you try, the probability of winning is approximately 0.33 if the contestant does not switch the initially chosen door,
and approximately 0.66 if they do switch.
So if you play, choose to switch doors!!
... and finally, the solution given by xkcd | Python Code:
from IPython.display import YouTubeVideo
YouTubeVideo('mhlc7peGlGg#t=15')
Explanation: <center> Monty-Hall</center>
The problem known as Monty Hall or Let's Make a Deal is a probability problem whose solution
seems counter-intuitive. It is related to the TV shows with these names.
Statement of the problem:
Game show: behind one of three doors a car is placed (or a holiday ticket,
a large sum of money), while behind the other two there are goats (or anything of much lower value than the offer behind the valuable door).
The contestant picks a door but does not open it. The host instead opens another door behind which he knows for sure there is a goat, and then asks the contestant: do you want to switch doors?
The question we try to answer, both theoretically and by simulating the game, is the following: should the contestant switch doors in order to increase the chance of hitting the "valuable" one?
The following video presents the basic idea of the Let's Make a Deal show:
End of explanation
from __future__ import division  # with this import, m/n is evaluated as float(m)/n
import random as rnd
def alegeUsa(A):  # pick from a list of 3 elements, two of which are set to 'c' (goat) and one to 'm' (car)
    Conc = A.pop(rnd.randint(0, 2))  # the contestant's choice
    Prez = A.pop(A.index('c'))       # the host's choice
    return Conc, Prez
Explanation: The solution someone unfamiliar with probability theory would give:
We encode the car as "m" and a goat as "c". There are $3$ possible ways to place the car and the two goats
behind the three doors:
$\Omega=\{(m, c, c), (c, m, c), (c, c, m)\}$
If the contestant picks a door at random, the chance of winning is $1/3$.
After being shown a door behind which there is a goat, we are tempted to say that the chance of winning, picking either of the remaining doors,
is:
$P(\mbox{win}|U_d=c)=1/2$, where $(U_d=c)$ is the event that behind the door opened by the host there is a goat.
In other words, common reasoning leads to the idea that it makes no difference whether the contestant keeps the chosen door or switches. But this is incorrect!!!!!
The solution using Bayes' formula
Denote by $H_k$, $k=1,2,3$, the hypotheses about the placement of the prizes behind the three doors, namely
$(m,c,c)$, $(c,m,c)$, $(c,c,m)$.
The prior probabilities of these hypotheses are
$P(H_k)=1/3$, $k=1,2,3$.
For simplicity of presentation, suppose the contestant picks door 1 and the host opens door 2, behind which he knows for sure there is a goat.
Computing the probability of winning if the contestant does not switch the chosen door
According to Bayes' formula, the probability that the car is behind the door chosen
by the contestant, updated by the information that behind door $2$ there is a goat, is:
$P(H_1|U_2)=\displaystyle\frac{P(H_1)P(U_2|H_1)}{P(H_1)P(U_2|H_1)+P(H_2)P(U_2|H_2)+P(H_3)P(U_2|H_3)}$
Since $P(U_2|H_2)=0$ (that is, knowing that the car is behind door 2, the probability that the host opens this door is 0), we have:
$P(H_1|U_2)=\displaystyle\frac{P(H_1)P(U_2|H_1)}{P(H_1)P(U_2|H_1)+P(H_3)P(U_2|H_3)}$
$P(U_2|H_1)=1/2$, because knowing that the car is behind door 1 (we are conditioning on the event $H_1$), the host chooses uniformly between the two doors $2$ and $3$.
If the opening of a door is conditioned on $H_3$, i.e. the host knows the car is behind door 3 while the contestant has chosen door 1, the only option left is to open door 2, and certainly not door 3. Therefore
$P(U_2|H_3)=1$.
Substituting into Bayes' formula we get:
$P(H_1|U_2)=\displaystyle\frac{(1/3)(1/2)}{(1/3)(1/2)+(1/3)\cdot 1}=\displaystyle\frac{1/6}{1/6+1/3}=1/3$
Since $H_1\cup H_2\cup H_3=\Omega$, we have:
$P_{U_2}(H_1\cup H_2\cup H_3)=P_{U_2}(\Omega)=1$.
The hypotheses $H_1, H_2, H_3$ being mutually exclusive, and the function $P_{U_2}$ being a probability function, it follows that:
$P(H_1|U_2)+P(H_2|U_2)+P(H_3|U_2)=1$.
Obviously $P(H_2|U_2)=0$, and so the probability that the car is behind door 3, knowing that behind the second one there is a goat, is:
$P(H_3|U_2)=1-P(H_1|U_2)=1-1/3=2/3$.
So if the contestant decides to exchange door 1, chosen initially, for the only remaining possibility, namely door 3, the chance of winning the car is twice as large as when keeping the initial door.
The reasoning and the result are identical for any other choice made by the contestant and by the host, respectively.
Simulating the Monty Hall game
Let us illustrate this result by simulating the game in Python. That is, we repeat the choice experiment
$Nr$ times, either with the option of switching the initially chosen door or without switching, and we count how many times the contestant wins, i.e. hits the door with the car behind it.
End of explanation
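Before running the simulation, here is a quick numerical sanity check of the Bayes computation above (a small addition, not part of the original notebook); it reproduces the posterior probabilities with exact fractions:
from fractions import Fraction
# priors P(H_k) and likelihoods P(U_2 | H_k) for the case: contestant picks door 1, host opens door 2
priors = [Fraction(1, 3)] * 3
likelihoods = [Fraction(1, 2), Fraction(0), Fraction(1)]
evidence = sum(p * l for p, l in zip(priors, likelihoods))
posteriors = [p * l / evidence for p, l in zip(priors, likelihoods)]
print(posteriors)  # [Fraction(1, 3), Fraction(0, 1), Fraction(2, 3)]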
A=['c', 'm', 'c']
Conc=A.pop(rnd.randint(0, 2))
print Conc
print A
Explanation: For example, if the placement of the prizes is A=['c', 'm', 'c'] then rnd.randint(0,2) randomly generates an index $i$ of
the elements of the list $A$.
A.pop(i) returns
the element A[i] and removes it from the list (see
here for list methods).
Illustration by running the corresponding lines of code:
End of explanation
def Monty_Hall(A, schimba=True):
    # A is the triple giving the placement of the objects behind the doors
    Conc, Prez = alegeUsa(A)  # the contestant and the host each pick a door according to the rules of the game
    if (schimba):
        alegerea = A[0]  # only one element is left in the list, and this is the new choice
    else:
        alegerea = Conc
    return alegerea
def simulare_MH(Nr, optiune):  # repeats the MH experiment Nr times with the switch option (True or False)
    nrcastig = 0
    for n in range(Nr):
        A = rnd.choice([['m', 'c', 'c'], ['c', 'm', 'c'], ['c', 'c', 'm']])  # pick at random one of the
        # 3 possible placements of the prizes
        rez = Monty_Hall(A, schimba=optiune)
        if rez == 'm':
            nrcastig += 1
    return nrcastig
Explanation: A.index('c') returns the index of the first element of the remaining list that is equal to 'c', and thus the host
picks a door behind which there is a goat: Prez=A.pop(A.index('c')).
After this line is executed, only a single element
remains in the list A.
End of explanation
opt_schimb = True  # the switch-or-not option
Nr = 1000
nrc = simulare_MH(Nr, opt_schimb)
print "Probability of winning if the contestant switches the chosen door =", nrc/Nr
nrc = simulare_MH(Nr, False)
print "Probability of winning if the contestant does not switch the chosen door =", nrc/Nr
Explanation: Before experimenting with the game, let us summarize the implementation principle:
The game is repeated Nr times, either with the option of choosing a new door after the host has opened the door with the goat, or without switching.
At the start of each game, one of the 3 possible placements of the car is chosen at random.
The function alegeUsa makes the random door choice for the contestant and for the host.
End of explanation
from IPython.display import Image
Image(url='http://imgs.xkcd.com/comics/monty_hall.png')
from IPython.core.display import HTML
def css_styling():
styles = open("./custom.css", "r").read()
return HTML(styles)
css_styling()
Explanation: Whichever variant you try, the probability of winning is approximately 0.33 if the contestant does not switch the initially chosen door,
and approximately 0.66 if they do switch.
So if you play, choose to switch doors!!
... and finally, the solution given by xkcd
End of explanation |
5,546 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Text Vectorization
Step1: Let's remove stop words and include bigrams...
Step2: tf-idf weighting
tf - Term Frequency
idf - Inverse Document Frequency
The tf-idf weight of a term is the product of its tf weight and its idf weight.
$W_{t,d} = \log(1+tf_{t,d}) \cdot \log_{10}(\frac{N}{df_{t}})$
Best known weighting scheme in information retrieval
Note | Python Code:
# Import python libs
import sqlite3 as sqlite # work with sqlite databases
import os # used to set working directory
import pandas as pd # process data with pandas dataframe
import numpy as np
# Setup pandas display options
pd.options.display.max_colwidth = 500
# Constants
small_sqlite = "example_db.sqlite"
# Set working directory
os.chdir('../Data/')
# Read sqlite query results into a pandas DataFrame
con = sqlite.connect(small_sqlite)
df = pd.read_sql_query("SELECT * from Documents", con)
con.close()
df.head()
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(df['NOTE_TEXT'].tolist())
X
X.toarray()
vectorizer.get_feature_names()
Explanation: Text Vectorization
End of explanation
vectorizer2 = CountVectorizer(stop_words='english', ngram_range=(1, 2))
X2 = vectorizer2.fit_transform(df['NOTE_TEXT'].tolist())
vectorizer2.get_feature_names()
Explanation: Let's remove stop words and include bigrams...
End of explanation
from sklearn.feature_extraction.text import TfidfTransformer
transformer = TfidfTransformer(use_idf=True)
tfidf_result = transformer.fit_transform(X2)
def display_scores(vectorizer, tfidf_result):
scores = zip(vectorizer.get_feature_names(),
np.asarray(tfidf_result.sum(axis=0)).ravel())
sorted_scores = sorted(scores, key=lambda x: x[1], reverse=True)
for item in sorted_scores:
print "{0:20} Score: {1}".format(item[0], item[1])
display_scores(vectorizer2, tfidf_result)
from nltk.stem.porter import *
from nltk.tokenize import word_tokenize
import string
stemmer = PorterStemmer()
def stem_tokens(tokens, stemmer):
stemmed = []
for item in tokens:
stemmed.append(stemmer.stem(item))
return stemmed
def tokenize(text):
tokens = word_tokenize(text)
tokens = [i for i in tokens if i not in string.punctuation]
stems = stem_tokens(tokens, stemmer)
return stems
vectorizer3 = CountVectorizer(tokenizer=tokenize, stop_words='english', ngram_range=(1, 2))
X3 = vectorizer3.fit_transform(df['NOTE_TEXT'].tolist())
vectorizer3.get_feature_names()
Explanation: tf-idf weighting
tf - Term Frequency
idf - Inverse Document Frequency
The tf-idf weight of a term is the product of its tf weight and its idf weight.
$W_{t,d} = \log(1+tf_{t,d}) \cdot \log_{10}(\frac{N}{df_{t}})$
Best known weighting scheme in information retrieval
Note: the “-” in tf-idf is a hyphen, not a minus sign!
Increases with the number of occurrences within a document
Increases with the rarity of the term in the collection
End of explanation |
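To make the weighting concrete, here is a tiny hand computation with the formula above, using made-up numbers (a term occurring 3 times in a document and appearing in 10 of 1000 documents); both logarithms are taken base 10 here purely for illustration:
import math

tf, N, df = 3, 1000, 10
weight = math.log10(1 + tf) * math.log10(N / float(df))
print(weight)  # log10(4) * log10(100) = 0.602... * 2.0 ~= 1.204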
5,547 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 style="font-size
Step1: <h2> Part 2
Step2: Getting the view as map of coordinates and values
Use Case
Step3: Getting the number of cells
Use Case
Step4: Getting data as CSV format
Use Case
Step5: Getting data as (pandas) dataframe
Use Case
Step6: Getting data as (pandas) pivot dataframe
Use Case
Step7: Getting data in custom JSON format
Use Case
Step8: Getting cell values
Use Case
Step9: Getting row elements and cell values only
Use Case
Step11: Getting data with attributes values
To get attributes values, you will need to get the data from an MDX query
Step12: Part 3 | Python Code:
#import pandas to get data from csv file
import pandas as pd
# pd.read_csv will store the information into a pandas dataframe called df
df = pd.read_csv('reading_data.csv')
#A pandas dataframe has lots of cool pre-built functions such as:
# print the result
df.head()
#write data to csv
df.to_csv('my_new_filePyPal.csv')
# Find all unique values for one column
df.Country.unique()
Explanation: <h1 style="font-size:42px; text-align:center; margin-bottom:30px;"><span style="color:SteelBlue">TM1py:</span>Reading Data</h1>
<hr>
Going through all the different ways to get data into your Python scripts
Part 1: Reading data from a CSV file
Introduction to pandas
End of explanation
#import TM1py services
from TM1py.Services import TM1Service
from TM1py.Utils import Utils
#TM1 credentials
ADDRESS = "localhost"
PORT = 8009
USER = "admin"
PWD = "apple"
SSL = True
#Connect to the TM1 instance
tm1 = TM1Service(address=ADDRESS, port=PORT, user=USER, password=PWD, ssl=SSL)
# Cube view used in this notbook
cube_name = 'Bike Shares'
view_name = '2014 to 2017 Counts by Day'
Explanation: <h2> Part 2: Reading data from TM1 </h2>
<span> Going through all TM1py options to load data from TM1 into our Jupyter notebook</span>
<h3>Setting up connection to TM1</h3>
End of explanation
# query first 5 cells from the cube view as coordinate-cell dictionary
cells = tm1.cubes.cells.execute_view(cube_name=cube_name, view_name=view_name, private=False, top=5)
cells
# print first entries from coordinates-cell dictionary instead of [Version].[Version].[Actual] returns Actual
for element_unique_names, cell in cells.items():
# extract element names from unique-element-names
element_names = Utils.element_names_from_element_unique_names(
element_unique_names=element_unique_names)
# take value from cell
value = cell["Value"]
print(element_names, value)
Explanation: Getting the view as map of coordinates and values
Use Case: Get the values with all intersections
End of explanation
#tm1.cubes.cells.execute_view_csv or tm1.cubes.cells.execute_mdx_csv
%time
df_cellcount= tm1.cubes.cells.execute_view_cellcount(cube_name=cube_name, view_name=view_name, private=False)
df_cellcount
Explanation: Getting the number of cells
Use Case: Check how many cells are you going to work with
Note: Very fast
End of explanation
#tm1.cubes.cells.execute_view_csv or tm1.cubes.cells.execute_mdx_csv
%time
csv = tm1.cubes.cells.execute_view_csv(cube_name=cube_name, view_name=view_name, private=False)
#diplay the result as CSV format
csv[0:200]
#diplay first 10 lines of the result
for line in csv.split("\r\n")[0:10]:
print(line)
#diplay last 20 lines of the result
for line in csv.split("\r\n")[-10:]:
print(line)
Explanation: Getting data as CSV format
Use Case: Get your data as CSV format
Note: Very fast
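Once you have the CSV string you can, for example, write it to a file or parse it straight into pandas. A small sketch (not part of the original notebook; the output file name is arbitrary):
import io
df_from_csv = pd.read_csv(io.StringIO(csv))     # parse the CSV string with pandas
with open('bike_shares_export.csv', 'w') as f:  # hypothetical output file
    f.write(csv)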
End of explanation
%time
df = tm1.cubes.cells.execute_view_dataframe(cube_name=cube_name, view_name=view_name, private=False)
df.head()
df.to_csv(view_name+"Pypal.csv")
Explanation: Getting data as (pandas) dataframe
Use Case: Get your data as a pandas dataframe
Note: useful for further data analysis in python
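For instance, the usual pandas tooling applies directly to the returned dataframe. A sketch (not part of the original notebook; the dimension column names depend on the cube, and the cell values typically land in a 'Value' column):
df.dtypes                           # inspect which columns came back
df.groupby('City')['Value'].sum()   # hypothetical aggregation, assuming 'City' and 'Value' columns exist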
End of explanation
%time
df_pivot = tm1.cubes.cells.execute_view_dataframe_pivot(cube_name=cube_name, view_name=view_name, private=False)
# print first 5 records
df_pivot.head()
# print last 5 records
df_pivot.tail()
Explanation: Getting data as (pandas) pivot dataframe
Use Case: Get your data as a pandas dataframe following your view structure
Note: useful for further data analysis in python
End of explanation
%time
raw_json = tm1.cubes.cells.execute_view_raw(
cube_name=cube_name,
view_name=view_name,
private=False,
elem_properties=["Type"],
cell_properties=["RuleDerived", "Value"])
# print full response
raw_json
# Extract cube name from response
raw_json['Cube']
Explanation: Getting data in custom JSON format
Use Case: Query additional information, such as:
Cell is RuleDerived
Cell is Consolidated
Member properties
Attribute Values
Note: very flexible. Not fast.
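As a sketch (not part of the original notebook), and assuming the response follows the usual TM1 REST cellset layout with a top-level 'Cells' list, the requested cell properties can be pulled out like this:
for cell in raw_json['Cells'][:5]:           # assumption: 'Cells' holds one dict per cell
    print(cell['Value'], cell['RuleDerived'])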
End of explanation
%time
values = tm1.cubes.cells.execute_view_values(cube_name=cube_name, view_name=view_name, private=False)
# extract first ten values
first_ten = list(values)[0:10]
# print first ten values
print(first_ten)
Explanation: Getting cell values
Use Case: sometimes you are only interested in the cell values. Skipping the elements in the response increases performance
Note: Fast and light
End of explanation
rows_and_values = tm1.cubes.cells.execute_view_rows_and_values(
cube_name=cube_name,
view_name=view_name,
private=False,
element_unique_names=False)
for row_elements, values_by_row in rows_and_values.items():
print(row_elements, values_by_row)
Explanation: Getting row elements and cell values only
Use Case: sometimes elements in columns and titles are irrelevant. Skipping these elements in the response increases performance
Note: Faster than querying everything
End of explanation
mdx = """
WITH MEMBER [Bike Shares Measure].[City Alias] AS [}ElementAttributes_City].([}ElementAttributes_City].[City Alias])
SELECT
NON EMPTY {[Date].Members}*{TM1SubsetAll([City])} ON ROWS,
NON EMPTY {[Bike Shares Measure].[Count], [Bike Shares Measure].[City Alias] } ON COLUMNS
FROM [Bike Shares]
WHERE ([Version].[Actual],[Bike Shares Measure].[Count])
"""
# get table'ish dataframe
data = tm1.cubes.cells.execute_mdx(mdx)
#Build pandas dataframe
df = Utils.build_pandas_dataframe_from_cellset(data, multiindex=False)
print(df)
# get pivot dataframe
pivot = tm1.cubes.cells.execute_mdx_dataframe_pivot(mdx)
print(pivot)
Explanation: Getting data with attributes values
To get attributes values, you will need to get the data from an MDX query
End of explanation
#library for HTTP / REST Request against Webservices
import requests
#standard library for JSON parsing, manipulation
import json
# Define constants
STATION = 'GHCND:USW00014732'
FROM, TO = '2017-01-01', '2017-01-04'
HEADERS = {"token": 'yyqEBOAbHVbtXkfAmZuPNfnSXvdfyhgn'}
url = 'https://www.ncdc.noaa.gov/cdo-web/api/v2/data?' \
'datasetid=GHCND&' \
'startdate=' + FROM + '&' \
'enddate=' + TO + '&' \
'limit=1000&' \
'datatypeid=TMIN&' \
'datatypeid=TAVG&' \
'datatypeid=TMAX&' \
'stationid=' + STATION
print(url)
#Execute the URL against the NOAA API to get the results
#Prettyprint first three items from result-set
response = requests.get(url, headers=HEADERS).json()
results = response["results"]
print(json.dumps(results[0:3], indent=2))
#Rearrange the data
cells = dict()
for record in results:
value = record['value'] / 10
coordinates = ("Actual", record['date'][0:10], "NYC", record['datatype'])
cells[coordinates] = value
for coordinate, value in cells.items():
print(coordinate, value)
# Write values back to TM1
tm1.cubes.cells.write_values("Weather Data", cells)
Explanation: Part 3: Reading from APIs
Getting data from a web service, introduction to a JSON file
The code below has been extracted from this article: Upload weather data from web service
End of explanation |
5,548 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
FEniCS overview
https
Step1: Set domain and mesh
Step2: List of other default meshes are available here
Also you can create your own mesh in the following manner
Step3: Set function space
Step4: The first argument is a mesh
The second argument is a family of functions that we use
Step5: Function $u_D$ is written as Expression object that is directly translated into efficient C++ code
The first argument is a string that represent mathematical equation in a valid C++ syntax
If inside Expression some constant appeares, they should be clearly specified
python
f = Expression("exp(-kappa*pow(pi, 2)*t)*sin(pi*k*x[0])", degree=2, kappa=1.0, t=0, k=4)
The second parameter indicates the degree of function space that will be used to evaluare expression in every local element. So, the optimal value of this parameter is the same as in function space or greater by one or two
Function boundary checks whether requested point x lies on the Dirichlet boundary or not. Argument on_boundary is True if requested point lies on the physical boundary of the mesh and False otherwise
Set variational form
Step6: Function $u$ in variational form is called trial function
Auxiliary function $v$ is called test function
Variational forms of right- and left-hand sides are very similar to the mathematical notation
dot is used for vectors and inner is used for matrices
Compute solution
Step7: Redefine u from TrialFunction to solution Function
To the solver we pass the variational form (written with the == sign), the unknown solution Function, and the boundary conditions
Step8: Compute error of the solution
$$
E = \sqrt{\int_{\Omega} (u - u_D)^2 dx}
$$
Step9: Maximum of the difference between boundary function and solution on mesh vertices | Python Code:
%matplotlib notebook
from __future__ import print_function
import fenics
import matplotlib.pyplot as plt
Explanation: FEniCS overview
https://fenicsproject.org/
Overview
FEniCS means Finite Elements + Computational Software and 'ni' just "sits nicely in the middle"
Finite elements solver for PDE
Easy-to-use notation to translate scientific models into efficient code
C++ and Python interface
Easy installation with
conda install -c conda-forge fenics
Comprehensive tutorial and FEniCS book are available online
To install package for mesh generation
conda install -c conda-forge mshr
Important! To use FEniCS inside separate environment Jupyter Notebook
conda create -n fenics_env python=3.6
source activate fenics_env
conda install -c conda-forge fenics
conda install notebook
conda install -c conda-forge matplotlib
conda install -c conda-forge mshr
jupyter notebook
Main steps
Identify the computational domain $(\Omega)$, the PDE, its boundary conditions, and source terms $(f)$
Reformulate the PDE as a finite element variational problem
Write a Python program which defines the computational domain, the variational problem, the boundary conditions, and source terms, using the corresponding FEniCS abstractions
Call FEniCS to solve the boundary-value problem and visualize the results
Variational form
Finite elements methods starts from variational form of PDE. Consider Poisson equation with Dirichlet boundary conditions
$$
\begin{align}
-\Delta u &= f(x), \quad x \in \Omega \\
u(x) &= u_D(x), \quad x \in \partial \Omega
\end{align}
$$
To get its variational form one needs:
1. Multiply both sides by test function $v$ such that $v \equiv 0$ on the boundary $\partial \Omega$
2. Intergate both sides over the domain $\Omega$ and apply integration-by-parts theorem
$$
\begin{align}
& -\int_{\Omega} \nabla^2 u \cdot v dx = \int_{\Omega} f \cdot v dx \\
& \int_{\Omega} \nabla u \cdot \nabla v dx - \underbrace{\int_{\partial \Omega} \frac{\partial u}{\partial n} v ds}_{=0} = \int_{\Omega} f \cdot v dx
\end{align}
$$
Therefore, variational form of Poisson equation with Dirichlet boundary conditions is
$$
\int_{\Omega} \nabla u \cdot \nabla v dx = \int_{\Omega} f \cdot v dx
$$
Now let's see how these form is converted to FEniCS code...
FEniCS example
Domain $\Omega = [0, 1]^2$
$f = -6$
$u_D = 1 + x^2 + y^2$ for $(x, y) \in \partial \Omega$
End of explanation
mesh = fenics.UnitSquareMesh(8, 8)
_ = fenics.plot(mesh)
Explanation: Set domain and mesh
End of explanation
import dolfin
import mshr
import math
domain_vertices = [dolfin.Point(0.0, 0.0),
dolfin.Point(10.0, 0.0),
dolfin.Point(10.0, 2.0),
dolfin.Point(8.0, 2.0),
dolfin.Point(7.5, 1.0),
dolfin.Point(2.5, 1.0),
dolfin.Point(2.0, 4.0),
dolfin.Point(0.0, 4.0),
dolfin.Point(0.0, 0.0)]
p = mshr.Polygon(domain_vertices);
fenics.plot(mshr.generate_mesh(p, 20))
R = 1.1
r = 0.9
x, y, z = 1, -1, 1
# Create geometry
s1 = mshr.Sphere(dolfin.Point(0, 0, 0), 1)
s2 = mshr.Sphere(dolfin.Point(x, y, z), r)
b1 = mshr.Box(dolfin.Point(-2, -2, -0.03), dolfin.Point(2, 2, 0.03))
geometry = s1 - b1 - s2
death_star_mesh = mshr.generate_mesh(geometry, 10)
# Plot mesh
fenics.plot(death_star_mesh, color="grey")
plt.xlabel("x")
plt.ylabel("y")
Explanation: A list of other default meshes is available here
You can also create your own mesh in the following manner:
End of explanation
V = fenics.FunctionSpace(mesh, 'P', 1)
Explanation: Set function space
End of explanation
u_D = fenics.Expression('1 + x[0]*x[0] + 2*x[1]*x[1]', degree=2)
def boundary(x, on_boundary):
return on_boundary
bc = fenics.DirichletBC(V, u_D, boundary)
Explanation: The first argument is a mesh
The second argument is a family of functions that we use: "P" is a Lagrange polynomial. "DP" means discontinuous polynomial. The list of supported function spaces is available here
The third argument is a degree of the polynomial
Set Dirichlet boundary conditions
End of explanation
u = fenics.TrialFunction(V)
v = fenics.TestFunction(V)
f = fenics.Constant(-6.0) # Or f = Expression('-6', degree=0)
# Left-hand side
a = fenics.dot(fenics.grad(u), fenics.grad(v))*fenics.dx
# Right-hand side
L = f*v*fenics.dx
Explanation: Function $u_D$ is written as Expression object that is directly translated into efficient C++ code
The first argument is a string that represent mathematical equation in a valid C++ syntax
If inside Expression some constant appeares, they should be clearly specified
python
f = Expression("exp(-kappa*pow(pi, 2)*t)*sin(pi*k*x[0])", degree=2, kappa=1.0, t=0, k=4)
The second parameter indicates the degree of function space that will be used to evaluare expression in every local element. So, the optimal value of this parameter is the same as in function space or greater by one or two
Function boundary checks whether requested point x lies on the Dirichlet boundary or not. Argument on_boundary is True if requested point lies on the physical boundary of the mesh and False otherwise
Set variational form
End of explanation
u = fenics.Function(V)
fenics.solve(a == L, u, bc)
Explanation: Function $u$ in variational form is called trial function
Auxiliary function $v$ is called test function
Variational forms of right- and left-hand sides are very similar to the mathematical notation
dot is used for vectors and inner is used for matrices
Compute solution
End of explanation
# Plot solution and mesh
fenics.plot(u)
Explanation: Redefine u from TrialFunction to solution Function
To the solver we pass the variational form (written with the == sign), the unknown solution Function, and the boundary conditions
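Once solved, u behaves like an ordinary FEniCS Function, so it can, for example, be evaluated at a point. A quick sketch (not part of the original notebook); for this problem the exact solution at (0.5, 0.5) is 1 + 0.5**2 + 2*0.5**2 = 1.75:
print(u(0.5, 0.5))  # should be close to the exact value 1.75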
End of explanation
error_L2 = fenics.errornorm(u_D, u, 'L2')
print("Error in L2 norm = {}".format(error_L2))
error_H1 = fenics.errornorm(u_D, u, 'H1')
print("Error in H1 norm = {}".format(error_H1))
Explanation: Compute error of the solution
$$
E = \sqrt{\int_{\Omega} (u - u_D)^2 dx}
$$
End of explanation
vertex_values_u_D = u_D.compute_vertex_values(mesh)
vertex_values_u = u.compute_vertex_values(mesh)
import numpy as np
error_max = np.max(np.abs(vertex_values_u_D - vertex_values_u))
print('Error max =', error_max)
plt.contourf(vertex_values_u.reshape(9, 9).T)
plt.colorbar()
Explanation: Maximum of the difference between boundary function and solution on mesh vertices:
$$
\max_{x_i \in M} |u(x_i) - u_D(x_i)|
$$
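A common follow-up (a sketch, not part of the original notebook) is to export the solution to a VTK file so it can be inspected in ParaView; the file name below is arbitrary:
vtkfile = fenics.File('poisson_solution.pvd')  # hypothetical output file
vtkfile << u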
End of explanation |
5,549 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dependencies
Step1: Loading Data
First, we want to create our word vectors. For simplicity, we're going to be using a pretrained model.
As one of the biggest players in the ML game, Google was able to train a Word2Vec model on a massive Google News dataset that contained over 100 billion different words! From that model, Google was able to create 3 million word vectors, each with a dimensionality of 300.
In an ideal scenario, we'd use those vectors, but since the word vectors matrix is quite large (3.6 GB!), we'll be using a much more manageable matrix that is trained using GloVe, a similar word vector generation model. The matrix will contain 400,000 word vectors, each with a dimensionality of 50.
We're going to be importing two different data structures, one will be a Python list with the 400,000 words, and one will be a 400,000 x 50 dimensional embedding matrix that holds all of the word vector values.
Step2: We can search our word list for a word like "baseball", and then access its corresponding vector through the embedding matrix.
Step3: Now that we have our vectors, our first step is taking an input sentence and then constructing the its vector representation. Let's say that we have the input sentence "I thought the movie was incredible and inspiring". In order to get the word vectors, we can use Tensorflow's embedding lookup function. This function takes in two arguments, one for the embedding matrix (the wordVectors matrix in our case), and one for the ids of each of the words. The ids vector can be thought of as the integerized representation of the training set. This is basically just the row index of each of the words. Let's look at a quick example to make this concrete.
Step4: TODO### Insert image
The 10 x 50 output should contain the 50 dimensional word vectors for each of the 10 words in the sequence.
Step5: Before creating the ids matrix for the whole training set, let’s first take some time to visualize the type of data that we have. This will help us determine the best value for setting our maximum sequence length. In the previous example, we used a max length of 10, but this value is largely dependent on the inputs you have.
The training set we're going to use is the Imdb movie review dataset. This set has 25,000 movie reviews, with 12,500 positive reviews and 12,500 negative reviews. Each of the reviews is stored in a txt file that we need to parse through. The positive reviews are stored in one directory and the negative reviews are stored in another. The following piece of code will determine total and average number of words in each review.
Step6: We can also use the Matplot library to visualize this data in a histogram format.
Step7: From the histogram as well as the average number of words per file, we can safely say that most reviews will fall under 250 words, which is the max sequence length value we will set.
Step8: Data
Step9: Parameters
Step10: Separating train and test data
The training set we're going to use is the Imdb movie review dataset. This set has 25,000 movie reviews, with 12,500 positive reviews and 12,500 negative reviews.
Let's first give a positive label [1, 0] to the first 12500 reviews, and a negative label [0, 1] to the other reviews.
Step11: Then, let's shuffle the data and use 90% of the reviews for training and the other 10% for testing.
Step12: Verifying if the train and test data have enough positive and negative examples
Step13: Input functions
Step14: Creating the Estimator model
Step16: Create and Run Experiment
Step17: Making Predictions
First let's generate our own sentences to see how the model classifies them. | Python Code:
# Tensorflow
import tensorflow as tf
print('Tested with TensorFlow 1.2.0')
print('Your TensorFlow version:', tf.__version__)
# Feeding function for enqueue data
from tensorflow.python.estimator.inputs.queues import feeding_functions as ff
# Rnn common functions
from tensorflow.contrib.learn.python.learn.estimators import rnn_common
# Model builder
from tensorflow.python.estimator import model_fn as model_fn_lib
# Run an experiment
from tensorflow.contrib.learn.python.learn import learn_runner
# Helpers for data processing
import pandas as pd
import numpy as np
import argparse
import random
Explanation: Dependencies
End of explanation
# data from: http://ai.stanford.edu/~amaas/data/sentiment/
TRAIN_INPUT = 'data/train.csv'
TEST_INPUT = 'data/test.csv'
# data manually generated
MY_TEST_INPUT = 'data/mytest.csv'
# wordtovec
# https://nlp.stanford.edu/projects/glove/
# the matrix will contain 400,000 word vectors, each with a dimensionality of 50.
word_list = np.load('word_list.npy')
word_list = word_list.tolist() # originally loaded as numpy array
word_list = [word.decode('UTF-8') for word in word_list] # encode words as UTF-8
print('Loaded the word list, length:', len(word_list))
word_vector = np.load('word_vector.npy')
print ('Loaded the word vector, shape:', word_vector.shape)
Explanation: Loading Data
First, we want to create our word vectors. For simplicity, we're going to be using a pretrained model.
As one of the biggest players in the ML game, Google was able to train a Word2Vec model on a massive Google News dataset that contained over 100 billion different words! From that model, Google was able to create 3 million word vectors, each with a dimensionality of 300.
In an ideal scenario, we'd use those vectors, but since the word vectors matrix is quite large (3.6 GB!), we'll be using a much more manageable matrix that is trained using GloVe, a similar word vector generation model. The matrix will contain 400,000 word vectors, each with a dimensionality of 50.
We're going to be importing two different data structures, one will be a Python list with the 400,000 words, and one will be a 400,000 x 50 dimensional embedding matrix that holds all of the word vector values.
End of explanation
baseball_index = word_list.index('baseball')
print('Example: baseball')
print(word_vector[baseball_index])
Explanation: We can search our word list for a word like "baseball", and then access its corresponding vector through the embedding matrix.
End of explanation
max_seq_length = 10 # maximum length of sentence
num_dims = 50 # dimensions for each word vector
first_sentence = np.zeros((max_seq_length), dtype='int32')
first_sentence[0] = word_list.index("i")
first_sentence[1] = word_list.index("thought")
first_sentence[2] = word_list.index("the")
first_sentence[3] = word_list.index("movie")
first_sentence[4] = word_list.index("was")
first_sentence[5] = word_list.index("incredible")
first_sentence[6] = word_list.index("and")
first_sentence[7] = word_list.index("inspiring")
# first_sentence[8] = 0
# first_sentence[9] = 0
print(first_sentence.shape)
print(first_sentence) # shows the row index for each word
Explanation: Now that we have our vectors, our first step is taking an input sentence and then constructing the its vector representation. Let's say that we have the input sentence "I thought the movie was incredible and inspiring". In order to get the word vectors, we can use Tensorflow's embedding lookup function. This function takes in two arguments, one for the embedding matrix (the wordVectors matrix in our case), and one for the ids of each of the words. The ids vector can be thought of as the integerized representation of the training set. This is basically just the row index of each of the words. Let's look at a quick example to make this concrete.
End of explanation
with tf.Session() as sess:
print(tf.nn.embedding_lookup(word_vector, first_sentence).eval().shape)
Explanation: TODO### Insert image
The 10 x 50 output should contain the 50 dimensional word vectors for each of the 10 words in the sequence.
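To peek at the numbers themselves (a sketch, not part of the original notebook), you can evaluate the lookup and slice the result:
with tf.Session() as sess:
    vectors = sess.run(tf.nn.embedding_lookup(word_vector, first_sentence))
    print(vectors.shape)   # (10, 50)
    print(vectors[0][:5])  # first 5 dimensions of the vector for "i"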
End of explanation
from os import listdir
from os.path import isfile, join
positiveFiles = ['positiveReviews/' + f for f in listdir('positiveReviews/') if isfile(join('positiveReviews/', f))]
negativeFiles = ['negativeReviews/' + f for f in listdir('negativeReviews/') if isfile(join('negativeReviews/', f))]
numWords = []
for pf in positiveFiles:
with open(pf, "r", encoding='utf-8') as f:
line=f.readline()
counter = len(line.split())
numWords.append(counter)
print('Positive files finished')
for nf in negativeFiles:
with open(nf, "r", encoding='utf-8') as f:
line=f.readline()
counter = len(line.split())
numWords.append(counter)
print('Negative files finished')
numFiles = len(numWords)
print('The total number of files is', numFiles)
print('The total number of words in the files is', sum(numWords))
print('The average number of words in the files is', sum(numWords)/len(numWords))
Explanation: Before creating the ids matrix for the whole training set, let’s first take some time to visualize the type of data that we have. This will help us determine the best value for setting our maximum sequence length. In the previous example, we used a max length of 10, but this value is largely dependent on the inputs you have.
The training set we're going to use is the Imdb movie review dataset. This set has 25,000 movie reviews, with 12,500 positive reviews and 12,500 negative reviews. Each of the reviews is stored in a txt file that we need to parse through. The positive reviews are stored in one directory and the negative reviews are stored in another. The following piece of code will determine total and average number of words in each review.
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
plt.hist(numWords, 50)
plt.xlabel('Sequence Length')
plt.ylabel('Frequency')
plt.axis([0, 1200, 0, 8000])
plt.show()
Explanation: We can also use the Matplot library to visualize this data in a histogram format.
End of explanation
max_seq_len = 250
Explanation: From the histogram as well as the average number of words per file, we can safely say that most reviews will fall under 250 words, which is the max sequence length value we will set.
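A quick sanity check of that claim (a sketch, not part of the original notebook) is to compute the fraction of reviews that fit within the chosen length:
print(np.sum(np.array(numWords) <= max_seq_len) / numFiles)  # fraction of reviews with at most 250 words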
End of explanation
ids_matrix = np.load('ids_matrix.npy').tolist()
Explanation: Data
End of explanation
# Parameters for training
STEPS = 15000
BATCH_SIZE = 32
# Parameters for data processing
REVIEW_KEY = 'review'
SEQUENCE_LENGTH_KEY = 'sequence_length'
Explanation: Parameters
End of explanation
POSITIVE_REVIEWS = 12500
# copying sequences
data_sequences = [np.asarray(v, dtype=np.int32) for v in ids_matrix]
# generating labels
data_labels = [[1, 0] if i < POSITIVE_REVIEWS else [0, 1] for i in range(len(ids_matrix))]
# also creating a length column, this will be used by the Dynamic RNN
# see more about it here: https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn
data_length = [max_seq_len for i in range(len(ids_matrix))]
Explanation: Separating train and test data
The training set we're going to use is the Imdb movie review dataset. This set has 25,000 movie reviews, with 12,500 positive reviews and 12,500 negative reviews.
Let's first give a positive label [1, 0] to the first 12500 reviews, and a negative label [0, 1] to the other reviews.
End of explanation
data = list(zip(data_sequences, data_labels, data_length))
random.shuffle(data) # shuffle
data = np.asarray(data)
# separating train and test data
limit = int(len(data) * 0.9)
train_data = data[:limit]
test_data = data[limit:]
Explanation: Then, let's shuffle the data and use 90% of the reviews for training and the other 10% for testing.
End of explanation
LABEL_INDEX = 1
def _number_of_pos_labels(df):
pos_labels = 0
for value in df:
if value[LABEL_INDEX] == [1, 0]:
pos_labels += 1
return pos_labels
pos_labels_train = _number_of_pos_labels(train_data)
total_labels_train = len(train_data)
pos_labels_test = _number_of_pos_labels(test_data)
total_labels_test = len(test_data)
print('Total number of positive labels:', pos_labels_train + pos_labels_test)
print('Proportion of positive labels on the Train data:', pos_labels_train/total_labels_train)
print('Proportion of positive labels on the Test data:', pos_labels_test/total_labels_test)
Explanation: Verifying if the train and test data have enough positive and negative examples
End of explanation
def get_input_fn(df, batch_size, num_epochs=1, shuffle=True):
def input_fn():
sequences = np.asarray([v for v in df[:,0]], dtype=np.int32)
labels = np.asarray([v for v in df[:,1]], dtype=np.int32)
length = np.asarray(df[:,2], dtype=np.int32)
# https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/data
dataset = (
tf.contrib.data.Dataset.from_tensor_slices((sequences, labels, length)) # reading data from memory
.repeat(num_epochs) # repeat dataset the number of epochs
.batch(batch_size)
)
# for our "manual" test we don't want to shuffle the data
if shuffle:
dataset = dataset.shuffle(buffer_size=100000)
# create iterator
review, label, length = dataset.make_one_shot_iterator().get_next()
features = {
REVIEW_KEY: review,
SEQUENCE_LENGTH_KEY: length,
}
return features, label
return input_fn
features, label = get_input_fn(test_data, 2, shuffle=False)()
with tf.Session() as sess:
items = sess.run(features)
print(items[REVIEW_KEY])
print(sess.run(label))
train_input_fn = get_input_fn(train_data, BATCH_SIZE, None)
test_input_fn = get_input_fn(test_data, BATCH_SIZE)
Explanation: Input functions
End of explanation
def get_model_fn(rnn_cell_sizes,
label_dimension,
dnn_layer_sizes=[],
optimizer='SGD',
learning_rate=0.01,
embed_dim=128):
def model_fn(features, labels, mode):
review = features[REVIEW_KEY]
sequence_length = tf.cast(features[SEQUENCE_LENGTH_KEY], tf.int32)
# Creating embedding
data = tf.Variable(tf.zeros([BATCH_SIZE, max_seq_len, 50]),dtype=tf.float32)
data = tf.nn.embedding_lookup(word_vector, review)
# Each RNN layer will consist of a LSTM cell
rnn_layers = [tf.nn.rnn_cell.LSTMCell(size) for size in rnn_cell_sizes]
# Construct the layers
multi_rnn_cell = tf.nn.rnn_cell.MultiRNNCell(rnn_layers)
# Runs the RNN model dynamically
# more about it at:
# https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn
outputs, final_state = tf.nn.dynamic_rnn(cell=multi_rnn_cell,
inputs=data,
dtype=tf.float32)
# Slice to keep only the last cell of the RNN
last_activations = rnn_common.select_last_activations(outputs, sequence_length)
# Construct dense layers on top of the last cell of the RNN
for units in dnn_layer_sizes:
last_activations = tf.layers.dense(
last_activations, units, activation=tf.nn.relu)
# Final dense layer for prediction
predictions = tf.layers.dense(last_activations, label_dimension)
predictions_softmax = tf.nn.softmax(predictions)
loss = None
train_op = None
eval_op = None
preds_op = {
'prediction': predictions_softmax,
'label': labels
}
if mode == tf.estimator.ModeKeys.EVAL:
eval_op = {
"accuracy": tf.metrics.accuracy(
tf.argmax(input=predictions_softmax, axis=1),
tf.argmax(input=labels, axis=1))
}
if mode != tf.estimator.ModeKeys.PREDICT:
loss = tf.losses.softmax_cross_entropy(labels, predictions)
if mode == tf.estimator.ModeKeys.TRAIN:
train_op = tf.contrib.layers.optimize_loss(
loss,
tf.contrib.framework.get_global_step(),
optimizer=optimizer,
learning_rate=learning_rate)
return model_fn_lib.EstimatorSpec(mode,
predictions=predictions_softmax,
loss=loss,
train_op=train_op,
eval_metric_ops=eval_op)
return model_fn
model_fn = get_model_fn(rnn_cell_sizes=[64], # size of the hidden layers
label_dimension=2, # since are just 2 classes
dnn_layer_sizes=[128, 64], # size of units in the dense layers on top of the RNN
optimizer='Adam',
learning_rate=0.001,
embed_dim=512)
Explanation: Creating the Estimator model
End of explanation
# create experiment
def generate_experiment_fn():
"""Create an experiment function given hyperparameters.
Returns:
A function (output_dir) -> Experiment where output_dir is a string
representing the location of summaries, checkpoints, and exports.
This function is used by learn_runner to create an Experiment which
executes model code provided in the form of an Estimator and
input functions.
All listed arguments in the outer function are used to create an
Estimator, and input functions (training, evaluation, serving).
Unlisted args are passed through to Experiment.
"""
def _experiment_fn(run_config, hparams):
estimator = tf.estimator.Estimator(model_fn=model_fn, config=run_config)
return tf.contrib.learn.Experiment(
estimator,
train_input_fn=train_input_fn,
eval_input_fn=test_input_fn,
train_steps=STEPS
)
return _experiment_fn
# run experiment
learn_runner.run(generate_experiment_fn(), run_config=tf.contrib.learn.RunConfig(model_dir='testing2'))
Explanation: Create and Run Experiment
End of explanation
def string_to_array(s, separator=' '):
return s.split(separator)
def generate_data_row(sentence, label, max_length):
sequence = np.zeros((max_length), dtype='int32')
for i, word in enumerate(string_to_array(sentence)):
sequence[i] = word_list.index(word)
return sequence, label, max_length
def generate_data(sentences, labels, max_length):
data = []
for s, l in zip(sentences, labels):
data.append(generate_data_row(s, l, max_length))
return np.asarray(data)
sentences = ['i thought the movie was incredible and inspiring',
'this is a great movie',
'this is a good movie but isnt the best',
'it was fine i guess',
'it was definitely bad',
'its not that bad',
'its not that bad i think its a good movie',
'its not bad i think its a good movie']
labels = [[1, 0],
[1, 0],
[1, 0],
[0, 1],
[0, 1],
[1, 0],
[1, 0],
[1, 0]] # [1, 0]: positive, [0, 1]: negative
my_test_data = generate_data(sentences, labels, 10)
estimator = tf.estimator.Estimator(model_fn=model_fn,
config=tf.contrib.learn.RunConfig(model_dir='tensorboard/batch_32'))
preds = estimator.predict(input_fn=get_input_fn(my_test_data, 1, 1, shuffle=False))
print()
for p, s in zip(preds, sentences):
print('sentence:', s)
print('good review:', p[0], 'bad review:', p[1])
print('-' * 10)
Explanation: Making Predictions
First let's generate our own sentences to see how the model classifies them.
End of explanation |
5,550 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Logistic Regression with L2 regularization
The goal of this second notebook is to implement your own logistic regression classifier with L2 regularization. You will do the following
Step1: Load and process review dataset
For this assignment, we will use the same subset of the Amazon product review dataset that we used in Module 3 assignment. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted of mostly positive reviews.
Step2: Just like we did previously, we will work with a hand-curated list of important words extracted from the review data. We will also perform 2 simple data transformations
Step3: Now, let us take a look at what the dataset looks like (Note
Step4: Train-Validation split
We split the data into a train-validation split with 80% of the data in the training set and 20% of the data in the validation set. We use seed=2 so that everyone gets the same result.
Note
Step5: Convert SFrame to NumPy array
Just like in the second assignment of the previous module, we provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned
Step6: We convert both the training and validation sets into NumPy arrays.
Warning
Step7: Building on logistic regression with no L2 penalty assignment
Let us now build on Module 3 assignment. Recall from lecture that the link function for logistic regression can be defined as
Step8: Adding L2 penalty
Let us now work on extending logistic regression with L2 regularization. As discussed in the lectures, the L2 regularization is particularly useful in preventing overfitting. In this assignment, we will explore L2 regularization in detail.
Recall from lecture and the previous assignment that for logistic regression without an L2 penalty, the derivative of the log likelihood function is
Step9: Quiz question
Step10: Quiz question
Step11: Explore effects of L2 regularization
Now that we have written up all the pieces needed for regularized logistic regression, let's explore the benefits of using L2 regularization in analyzing sentiment for product reviews. As iterations pass, the log likelihood should increase.
Below, we train models with increasing amounts of regularization, starting with no L2 penalty, which is equivalent to our previous logistic regression implementation.
Step12: Compare coefficients
We now compare the coefficients for each of the models that were trained above. We will create a table of features and learned coefficients associated with each of the different L2 penalty values.
Below is a simple helper function that will help us create this table.
Step13: Now, let's run the function add_coefficients_to_table for each of the L2 penalty strengths.
Step14: Using the coefficients trained with L2 penalty 0, find the 5 most positive words (with largest positive coefficients). Save them to positive_words. Similarly, find the 5 most negative words (with largest negative coefficients) and save them to negative_words.
Quiz Question. Which of the following is not listed in either positive_words or negative_words?
Step15: Let us observe the effect of increasing L2 penalty on the 10 words just selected. We provide you with a utility function to plot the coefficient path.
Step16: Run the following cell to generate the plot. Use the plot to answer the following quiz question.
Step17: Quiz Question
Step18: Below, we compare the accuracy on the training data and validation data for all the models that were trained in this assignment. We first calculate the accuracy values and then build a simple report summarizing the performance for the various models. | Python Code:
from __future__ import division
import graphlab
Explanation: Logistic Regression with L2 regularization
The goal of this second notebook is to implement your own logistic regression classifier with L2 regularization. You will do the following:
Extract features from Amazon product reviews.
Convert an SFrame into a NumPy array.
Write a function to compute the derivative of log likelihood function with an L2 penalty with respect to a single coefficient.
Implement gradient ascent with an L2 penalty.
Empirically explore how the L2 penalty can ameliorate overfitting.
Fire up GraphLab Create
Make sure you have the latest version of GraphLab Create. Upgrade by
pip install graphlab-create --upgrade
See this page for detailed instructions on upgrading.
End of explanation
products = graphlab.SFrame('amazon_baby_subset.gl/')
Explanation: Load and process review dataset
For this assignment, we will use the same subset of the Amazon product review dataset that we used in Module 3 assignment. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted of mostly positive reviews.
End of explanation
# The same feature processing (same as the previous assignments)
# ---------------------------------------------------------------
import json
with open('important_words.json', 'r') as f: # Reads the list of most frequent words
important_words = json.load(f)
important_words = [str(s) for s in important_words]
def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
# Remove punctuation.
products['review_clean'] = products['review'].apply(remove_punctuation)
# Split out the words into individual columns
for word in important_words:
products[word] = products['review_clean'].apply(lambda s : s.split().count(word))
Explanation: Just like we did previously, we will work with a hand-curated list of important words extracted from the review data. We will also perform 2 simple data transformations:
Remove punctuation using Python's built-in string functionality.
Compute word counts (only for the important_words)
Refer to Module 3 assignment for more details.
End of explanation
products
Explanation: Now, let us take a look at what the dataset looks like (Note: This may take a few minutes).
End of explanation
train_data, validation_data = products.random_split(.8, seed=2)
print 'Training set : %d data points' % len(train_data)
print 'Validation set : %d data points' % len(validation_data)
Explanation: Train-Validation split
We split the data into a train-validation split with 80% of the data in the training set and 20% of the data in the validation set. We use seed=2 so that everyone gets the same result.
Note: In previous assignments, we have called this a train-test split. However, the portion of data that we don't train on will be used to help select model parameters. Thus, this portion of data should be called a validation set. Recall that examining performance of various potential models (i.e. models with different parameters) should be on a validation set, while evaluation of selected model should always be on a test set.
End of explanation
import numpy as np
def get_numpy_data(data_sframe, features, label):
data_sframe['intercept'] = 1
features = ['intercept'] + features
features_sframe = data_sframe[features]
feature_matrix = features_sframe.to_numpy()
label_sarray = data_sframe[label]
label_array = label_sarray.to_numpy()
return(feature_matrix, label_array)
Explanation: Convert SFrame to NumPy array
Just like in the second assignment of the previous module, we provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned: one representing features and another representing class labels.
Note: The feature matrix includes an additional column 'intercept' filled with 1's to take account of the intercept term.
End of explanation
feature_matrix_train, sentiment_train = get_numpy_data(train_data, important_words, 'sentiment')
feature_matrix_valid, sentiment_valid = get_numpy_data(validation_data, important_words, 'sentiment')
Explanation: We convert both the training and validation sets into NumPy arrays.
Warning: This may take a few minutes.
End of explanation
'''
produces probablistic estimate for P(y_i = +1 | x_i, w).
estimate ranges between 0 and 1.
'''
def predict_probability(feature_matrix, coefficients):
# Take dot product of feature_matrix and coefficients
## YOUR CODE HERE
scores = np.dot(feature_matrix, coefficients)
# Compute P(y_i = +1 | x_i, w) using the link function
## YOUR CODE HERE
predictions = 1. / (1. + np.exp(- scores))
return predictions
Explanation: Building on logistic regression with no L2 penalty assignment
Let us now build on Module 3 assignment. Recall from lecture that the link function for logistic regression can be defined as:
$$
P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))},
$$
where the feature vector $h(\mathbf{x}_i)$ is given by the word counts of important_words in the review $\mathbf{x}_i$.
We will use the same code as in this past assignment to make probability predictions since this part is not affected by the L2 penalty. (Only the way in which the coefficients are learned is affected by the addition of a regularization term.)
End of explanation
def feature_derivative_with_L2(errors, feature, coefficient, l2_penalty, feature_is_constant):
# Compute the dot product of errors and feature
## YOUR CODE HERE
derivative = np.dot(errors, feature)
# add L2 penalty term for any feature that isn't the intercept.
if not feature_is_constant:
## YOUR CODE HERE
derivative -= 2 * l2_penalty * coefficient
return derivative
Explanation: Adding L2 penalty
Let us now work on extending logistic regression with L2 regularization. As discussed in the lectures, the L2 regularization is particularly useful in preventing overfitting. In this assignment, we will explore L2 regularization in detail.
Recall from lecture and the previous assignment that for logistic regression without an L2 penalty, the derivative of the log likelihood function is:
$$
\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right)
$$
Adding L2 penalty to the derivative
It takes only a small modification to add a L2 penalty. All terms indicated in red refer to terms that were added due to an L2 penalty.
Recall from the lecture that the link function is still the sigmoid:
$$
P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))},
$$
We add the L2 penalty term to the per-coefficient derivative of log likelihood:
$$
\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right) \color{red}{-2\lambda w_j }
$$
The per-coefficient derivative for logistic regression with an L2 penalty is as follows:
$$
\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right) \color{red}{-2\lambda w_j }
$$
and for the intercept term, we have
$$
\frac{\partial\ell}{\partial w_0} = \sum_{i=1}^N h_0(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right)
$$
Note: As we did in the Regression course, we do not apply the L2 penalty on the intercept. A large intercept does not necessarily indicate overfitting because the intercept is not associated with any particular feature.
Write a function that computes the derivative of log likelihood with respect to a single coefficient $w_j$. Unlike its counterpart in the last assignment, the function accepts five arguments:
* errors vector containing $(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w}))$ for all $i$
* feature vector containing $h_j(\mathbf{x}_i)$ for all $i$
* coefficient containing the current value of coefficient $w_j$.
* l2_penalty representing the L2 penalty constant $\lambda$
* feature_is_constant telling whether the $j$-th feature is constant or not.
End of explanation
def compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty):
indicator = (sentiment==+1)
scores = np.dot(feature_matrix, coefficients)
lp = np.sum((indicator-1)*scores - np.log(1. + np.exp(-scores))) - l2_penalty*np.sum(coefficients[1:]**2)
return lp
Explanation: Quiz question: In the code above, was the intercept term regularized?
To verify the correctness of the gradient ascent algorithm, we provide a function for computing log likelihood (which we recall from the last assignment was a topic detailed in an advanced optional video, and used here for its numerical stability).
$$\ell\ell(\mathbf{w}) = \sum_{i=1}^N \Big( (\mathbf{1}[y_i = +1] - 1)\mathbf{w}^T h(\mathbf{x}_i) - \ln\left(1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))\right) \Big) \color{red}{-\lambda\|\mathbf{w}\|_2^2} $$
End of explanation
def logistic_regression_with_L2(feature_matrix, sentiment, initial_coefficients, step_size, l2_penalty, max_iter):
coefficients = np.array(initial_coefficients) # make sure it's a numpy array
for itr in xrange(max_iter):
# Predict P(y_i = +1|x_i,w) using your predict_probability() function
## YOUR CODE HERE
predictions = predict_probability(feature_matrix, coefficients)
# Compute indicator value for (y_i = +1)
indicator = (sentiment==+1)
# Compute the errors as indicator - predictions
errors = indicator - predictions
for j in xrange(len(coefficients)): # loop over each coefficient
is_intercept = (j == 0)
# Recall that feature_matrix[:,j] is the feature column associated with coefficients[j].
# Compute the derivative for coefficients[j]. Save it in a variable called derivative
## YOUR CODE HERE
derivative = feature_derivative_with_L2(errors, feature_matrix[:,j], coefficients[j], l2_penalty, is_intercept)
# add the step size times the derivative to the current coefficient
## YOUR CODE HERE
coefficients[j] += step_size * derivative
# Checking whether log likelihood is increasing
if itr <= 15 or (itr <= 100 and itr % 10 == 0) or (itr <= 1000 and itr % 100 == 0) \
or (itr <= 10000 and itr % 1000 == 0) or itr % 10000 == 0:
lp = compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty)
print 'iteration %*d: log likelihood of observed labels = %.8f' % \
(int(np.ceil(np.log10(max_iter))), itr, lp)
return coefficients
Explanation: Quiz question: Does the term with L2 regularization increase or decrease $\ell\ell(\mathbf{w})$?
The logistic regression function looks almost like the one in the last assignment, with a minor modification to account for the L2 penalty. Fill in the code below to complete this modification.
End of explanation
# run with L2 = 0
coefficients_0_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=0, max_iter=501)
# run with L2 = 4
coefficients_4_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=4, max_iter=501)
# run with L2 = 10
coefficients_10_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=10, max_iter=501)
# run with L2 = 1e2
coefficients_1e2_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e2, max_iter=501)
# run with L2 = 1e3
coefficients_1e3_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e3, max_iter=501)
# run with L2 = 1e5
coefficients_1e5_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e5, max_iter=501)
Explanation: Explore effects of L2 regularization
Now that we have written up all the pieces needed for regularized logistic regression, let's explore the benefits of using L2 regularization in analyzing sentiment for product reviews. As iterations pass, the log likelihood should increase.
Below, we train models with increasing amounts of regularization, starting with no L2 penalty, which is equivalent to our previous logistic regression implementation.
End of explanation
table = graphlab.SFrame({'word': ['(intercept)'] + important_words})
def add_coefficients_to_table(coefficients, column_name):
table[column_name] = coefficients
return table
Explanation: Compare coefficients
We now compare the coefficients for each of the models that were trained above. We will create a table of features and learned coefficients associated with each of the different L2 penalty values.
Below is a simple helper function that will help us create this table.
End of explanation
add_coefficients_to_table(coefficients_0_penalty, 'coefficients [L2=0]')
add_coefficients_to_table(coefficients_4_penalty, 'coefficients [L2=4]')
add_coefficients_to_table(coefficients_10_penalty, 'coefficients [L2=10]')
add_coefficients_to_table(coefficients_1e2_penalty, 'coefficients [L2=1e2]')
add_coefficients_to_table(coefficients_1e3_penalty, 'coefficients [L2=1e3]')
add_coefficients_to_table(coefficients_1e5_penalty, 'coefficients [L2=1e5]')
Explanation: Now, let's run the function add_coefficients_to_table for each of the L2 penalty strengths.
End of explanation
subtable = table[['word', 'coefficients [L2=0]']]
ptable = sorted(subtable, key=lambda x: x['coefficients [L2=0]'], reverse=True)[:5]
ntable = sorted(subtable, key=lambda x: x['coefficients [L2=0]'], reverse=False)[:5]
positive_words = [w['word'] for w in ptable]
print positive_words
negative_words = [w['word'] for w in ntable]
print negative_words
Explanation: Using the coefficients trained with L2 penalty 0, find the 5 most positive words (with largest positive coefficients). Save them to positive_words. Similarly, find the 5 most negative words (with largest negative coefficients) and save them to negative_words.
Quiz Question. Which of the following is not listed in either positive_words or negative_words?
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = 10, 6
def make_coefficient_plot(table, positive_words, negative_words, l2_penalty_list):
cmap_positive = plt.get_cmap('Reds')
cmap_negative = plt.get_cmap('Blues')
xx = l2_penalty_list
plt.plot(xx, [0.]*len(xx), '--', lw=1, color='k')
table_positive_words = table.filter_by(column_name='word', values=positive_words)
table_negative_words = table.filter_by(column_name='word', values=negative_words)
del table_positive_words['word']
del table_negative_words['word']
for i in xrange(len(positive_words)):
color = cmap_positive(0.8*((i+1)/(len(positive_words)*1.2)+0.15))
plt.plot(xx, table_positive_words[i:i+1].to_numpy().flatten(),
'-', label=positive_words[i], linewidth=4.0, color=color)
for i in xrange(len(negative_words)):
color = cmap_negative(0.8*((i+1)/(len(negative_words)*1.2)+0.15))
plt.plot(xx, table_negative_words[i:i+1].to_numpy().flatten(),
'-', label=negative_words[i], linewidth=4.0, color=color)
plt.legend(loc='best', ncol=3, prop={'size':16}, columnspacing=0.5)
plt.axis([1, 1e5, -1, 2])
plt.title('Coefficient path')
plt.xlabel('L2 penalty ($\lambda$)')
plt.ylabel('Coefficient value')
plt.xscale('log')
plt.rcParams.update({'font.size': 18})
plt.tight_layout()
Explanation: Let us observe the effect of increasing L2 penalty on the 10 words just selected. We provide you with a utility function to plot the coefficient path.
End of explanation
make_coefficient_plot(table, positive_words, negative_words, l2_penalty_list=[0, 4, 10, 1e2, 1e3, 1e5])
Explanation: Run the following cell to generate the plot. Use the plot to answer the following quiz question.
End of explanation
def get_classification_accuracy(feature_matrix, sentiment, coefficients):
scores = np.dot(feature_matrix, coefficients)
apply_threshold = np.vectorize(lambda x: 1. if x > 0 else -1.)
predictions = apply_threshold(scores)
num_correct = (predictions == sentiment).sum()
accuracy = num_correct / len(feature_matrix)
return accuracy
Explanation: Quiz Question: (True/False) All coefficients consistently get smaller in size as the L2 penalty is increased.
Quiz Question: (True/False) The relative order of coefficients is preserved as the L2 penalty is increased. (For example, if the coefficient for 'cat' was more positive than that for 'dog', this remains true as the L2 penalty increases.)
Measuring accuracy
Now, let us compute the accuracy of the classifier model. Recall that the accuracy is given by
$$
\mbox{accuracy} = \frac{\mbox{# correctly classified data points}}{\mbox{# total data points}}
$$
Recall from lecture that that the class prediction is calculated using
$$
\hat{y}_i =
\left{
\begin{array}{ll}
+1 & h(\mathbf{x}_i)^T\mathbf{w} > 0 \
-1 & h(\mathbf{x}_i)^T\mathbf{w} \leq 0 \
\end{array}
\right.
$$
Note: It is important to know that the model prediction code doesn't change even with the addition of an L2 penalty. The only thing that changes is the estimated coefficients used in this prediction.
Based on the above, we will use the same code that was used in Module 3 assignment.
End of explanation
train_accuracy = {}
train_accuracy[0] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_0_penalty)
train_accuracy[4] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_4_penalty)
train_accuracy[10] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_10_penalty)
train_accuracy[1e2] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e2_penalty)
train_accuracy[1e3] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e3_penalty)
train_accuracy[1e5] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e5_penalty)
validation_accuracy = {}
validation_accuracy[0] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_0_penalty)
validation_accuracy[4] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_4_penalty)
validation_accuracy[10] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_10_penalty)
validation_accuracy[1e2] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e2_penalty)
validation_accuracy[1e3] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e3_penalty)
validation_accuracy[1e5] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e5_penalty)
# Build a simple report
for key in sorted(validation_accuracy.keys()):
print "L2 penalty = %g" % key
print "train accuracy = %s, validation_accuracy = %s" % (train_accuracy[key], validation_accuracy[key])
print "--------------------------------------------------------------------------------"
Explanation: Below, we compare the accuracy on the training data and validation data for all the models that were trained in this assignment. We first calculate the accuracy values and then build a simple report summarizing the performance for the various models.
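A quick visualization of the same numbers (a sketch, not part of the original assignment) can make the overfitting trade-off easier to see; a symlog scale is used so that the L2 penalty of 0 still fits on a log-style axis:
l2_values = sorted(validation_accuracy.keys())
plt.plot(l2_values, [train_accuracy[l2] for l2 in l2_values], 'o-', label='train accuracy')
plt.plot(l2_values, [validation_accuracy[l2] for l2 in l2_values], 'o-', label='validation accuracy')
plt.xscale('symlog')
plt.xlabel('L2 penalty ($\lambda$)')
plt.ylabel('classification accuracy')
plt.legend(loc='best')
plt.show()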
End of explanation |
5,551 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href='http
Step1: Creating a Series
You can convert a list,numpy array, or dictionary to a Series
Step2: Using Lists
Step3: Using NumPy Arrays
Step4: Using Dictionaries
Step5: Data in a Series
A pandas Series can hold a variety of object types
Step6: Using an Index
The key to using a Series is understanding its index. Pandas makes use of these index names or numbers by allowing for fast look ups of information (works like a hash table or dictionary).
Let's see some examples of how to grab information from a Series. Let us create two series, ser1 and ser2
Step7: Operations are then also done based off of index | Python Code:
import numpy as np
import pandas as pd
Explanation: <a href='http://www.pieriandata.com'><img src='../Pierian_Data_Logo.png'/></a>
<center><em>Copyright Pierian Data</em></center>
<center><em>For more information, visit us at <a href='http://www.pieriandata.com'>www.pieriandata.com</a></em></center>
Series
The first main data type we will learn about for pandas is the Series data type. Let's import Pandas and explore the Series object.
A Series is very similar to a NumPy array (in fact it is built on top of the NumPy array object). What differentiates the NumPy array from a Series, is that a Series can have axis labels, meaning it can be indexed by a label, instead of just a number location. It also doesn't need to hold numeric data, it can hold any arbitrary Python Object.
Let's explore this concept through some examples:
End of explanation
labels = ['a','b','c']
my_list = [10,20,30]
arr = np.array([10,20,30])
d = {'a':10,'b':20,'c':30}
Explanation: Creating a Series
You can convert a list,numpy array, or dictionary to a Series:
End of explanation
pd.Series(data=my_list)
pd.Series(data=my_list,index=labels)
pd.Series(my_list,labels)
Explanation: Using Lists
End of explanation
pd.Series(arr)
pd.Series(arr,labels)
Explanation: Using NumPy Arrays
End of explanation
pd.Series(d)
Explanation: Using Dictionaries
End of explanation
pd.Series(data=labels)
# Even functions (although unlikely that you will use this)
pd.Series([sum,print,len])
Explanation: Data in a Series
A pandas Series can hold a variety of object types:
End of explanation
ser1 = pd.Series([1,2,3,4],index = ['USA', 'Germany','USSR', 'Japan'])
ser1
ser2 = pd.Series([1,2,5,4],index = ['USA', 'Germany','Italy', 'Japan'])
ser2
ser1['USA']
Explanation: Using an Index
The key to using a Series is understanding its index. Pandas makes use of these index names or numbers by allowing for fast look ups of information (works like a hash table or dictionary).
Let's see some examples of how to grab information from a Series. Let us create two series, ser1 and ser2:
End of explanation
ser1 + ser2
Explanation: Operations are then also done based off of index:
End of explanation |
5,552 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DATASCI W261
Step1: Part 1
Step2: (1b) Sparse vectors
Data points can typically be represented with a small number of non-zero OHE features relative to the total number of features that occur in the dataset. By leveraging this sparsity and using sparse vector representations of OHE data, we can reduce storage and computational burdens. Below are a few sample vectors represented as dense numpy arrays. Use SparseVector to represent them in a sparse fashion, and verify that both the sparse and dense representations yield the same results when computing dot products (we will later use MLlib to train classifiers via gradient descent, and MLlib will need to compute dot products between SparseVectors and dense parameter vectors).
Use SparseVector(size, *args) to create a new sparse vector where size is the length of the vector and args is either a dictionary, a list of (index, value) pairs, or two separate arrays of indices and values (sorted by index). You'll need to create a sparse vector representation of each dense vector aDense and bDense.
Step3: (1c) OHE features as sparse vectors
Now let's see how we can represent the OHE features for points in our sample dataset. Using the mapping defined by the OHE dictionary from Part (1a), manually define OHE features for the three sample data points using SparseVector format. Any feature that occurs in a point should have the value 1.0. For example, the DenseVector for a point with features 2 and 4 would be [0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0].
Step5: (1d) Define a OHE function
Next we will use the OHE dictionary from Part (1a) to programatically generate OHE features from the original categorical data. First write a function called oneHotEncoding that creates OHE feature vectors in SparseVector format. Then use this function to create OHE features for the first sample data point and verify that the result matches the result from Part (1c).
Step6: (1e) Apply OHE to a dataset
Finally, use the function from Part (1d) to create OHE features for all 3 data points in the sample dataset.
Step7: Part 2
Step8: (2b) OHE Dictionary from distinct features
Next, create an RDD of key-value tuples, where each (featureID, category) tuple in sampleDistinctFeats is a key and the values are distinct integers ranging from 0 to (number of keys - 1). Then convert this RDD into a dictionary, which can be done using the collectAsMap action. Note that there is no unique mapping from keys to values, as all we require is that each (featureID, category) key be mapped to a unique integer between 0 and the number of keys. In this exercise, any valid mapping is acceptable. Use zipWithIndex followed by collectAsMap.
In our sample dataset, one valid list of key-value tuples is
Step10: (2c) Automated creation of an OHE dictionary
Now use the code from Parts (2a) and (2b) to write a function that takes an input dataset and outputs an OHE dictionary. Then use this function to create an OHE dictionary for the sample dataset, and verify that it matches the dictionary from Part (2b).
Step11: Part 3
Step12: (3a) Loading and splitting the data
We are now ready to start working with the actual CTR data, and our first task involves splitting it into training, validation, and test sets. Use the randomSplit method with the specified weights and seed to create RDDs storing each of these datasets, and then cache each of these RDDs, as we will be accessing them multiple times in the remainder of this lab. Finally, compute the size of each dataset.
Step14: (3b) Extract features
We will now parse the raw training data to create an RDD that we can subsequently use to create an OHE dictionary. Note from the take() command in Part (3a) that each raw data point is a string containing several fields separated by some delimiter. For now, we will ignore the first field (which is the 0-1 label), and parse the remaining fields (or raw features). To do this, complete the implementation of the parsePoint function.
Step15: (3c) Create an OHE dictionary from the dataset
Note that parsePoint returns a data point as a list of (featureID, category) tuples, which is the same format as the sample dataset studied in Parts 1 and 2 of this lab. Using this observation, create an OHE dictionary using the function implemented in Part (2c). Note that we will assume for simplicity that all features in our CTR dataset are categorical.
Step17: (3d) Apply OHE to the dataset
Now let's use this OHE dictionary by starting with the raw training data and creating an RDD of LabeledPoint objects using OHE features. To do this, complete the implementation of the parseOHEPoint function. Hint
Step20: Visualization 1
Step22: (3e) Handling unseen features
We naturally would like to repeat the process from Part (3d), e.g., to compute OHE features for the validation and test datasets. However, we must be careful, as some categorical values will likely appear in new data that did not exist in the training data. To deal with this situation, update the oneHotEncoding() function from Part (1d) to ignore previously unseen categories, and then compute OHE features for the validation data.
Step23: Part 4
Step25: (4b) Log loss
Throughout this lab, we will use log loss to evaluate the quality of models. Log loss is defined as
Step26: (4c) Baseline log loss
Next we will use the function we wrote in Part (4b) to compute the baseline log loss on the training data. A very simple yet natural baseline model is one where we always make the same prediction independent of the given datapoint, setting the predicted value equal to the fraction of training points that correspond to click-through events (i.e., where the label is one). Compute this value (which is simply the mean of the training labels), and then use it to compute the training log loss for the baseline model. The log loss for multiple observations is the mean of the individual log loss values.
Step28: (4d) Predicted probability
In order to compute the log loss for the model we trained in Part (4a), we need to write code to generate predictions from this model. Write a function that computes the raw linear prediction from this logistic regression model and then passes it through a sigmoid function $ \scriptsize \sigma(t) = (1+ e^{-t})^{-1} $ to return the model's probabilistic prediction. Then compute probabilistic predictions on the training data.
Note that when incorporating an intercept into our predictions, we simply add the intercept to the value of the prediction obtained from the weights and features. Alternatively, if the intercept was included as the first weight, we would need to add a corresponding feature to our data where the feature has the value one. This is not the case here.
Step30: (4e) Evaluate the model
We are now ready to evaluate the quality of the model we trained in Part (4a). To do this, first write a general function that takes as input a model and data, and outputs the log loss. Then run this function on the OHE training data, and compare the result with the baseline log loss.
Step31: (4f) Validation log loss
Next, following the same logic as in Parts (4c) and 4(e), compute the validation log loss for both the baseline and logistic regression models. Notably, the baseline model for the validation data should still be based on the label fraction from the training dataset.
Step32: Visualization 2
Step34: Part 5
Step36: (5b) Creating hashed features
Next we will use this hash function to create hashed features for our CTR datasets. First write a function that uses the hash function from Part (5a) with numBuckets = $ \scriptsize 2^{15} \approx 33K $ to create a LabeledPoint with hashed features stored as a SparseVector. Then use this function to create new training, validation and test datasets with hashed features. Hint
Step38: (5c) Sparsity
Since we have 33K hashed features versus 233K OHE features, we should expect OHE features to be sparser. Verify this hypothesis by computing the average sparsity of the OHE and the hashed training datasets.
Note that if you have a SparseVector named sparse, calling len(sparse) returns the total number of features, not the number features with entries. SparseVector objects have the attributes indices and values that contain information about which features are nonzero. Continuing with our example, these can be accessed using sparse.indices and sparse.values, respectively.
Step39: (5d) Logistic model with hashed features
Now let's train a logistic regression model using the hashed features. Run a grid search to find suitable hyperparameters for the hashed features, evaluating via log loss on the validation data. Note
Step40: Visualization 3
Step41: (5e) Evaluate on the test set
Finally, evaluate the best model from Part (5d) on the test set. Compare the resulting log loss with the baseline log loss on the test set, which can be computed in the same way that the validation log loss was computed in Part (4f). | Python Code:
labVersion = 'MIDS_MLS_week12_v_0_9'
Explanation: DATASCI W261: Machine Learning at Scale
W261-1 Fall 2015
Week 12: Criteo CTR Project
November 14, 2015
Student name INSERT STUDENT NAME HERE
Click-Through Rate Prediction Lab
This lab covers the steps for creating a click-through rate (CTR) prediction pipeline. You will work with the Criteo Labs dataset that was used for a recent Kaggle competition.
This lab will cover:
Part 1: Featurize categorical data using one-hot-encoding (OHE)
Part 2: Construct an OHE dictionary
Part 3: Parse CTR data and generate OHE features
Visualization 1: Feature frequency
Part 4: CTR prediction and logloss evaluation
Visualization 2: ROC curve
Part 5: Reduce feature dimension via feature hashing
Visualization 3: Hyperparameter heat map
Note that, for reference, you can look up the details of the relevant Spark methods in Spark's Python API and the relevant NumPy methods in the NumPy Reference
End of explanation
# Data for manual OHE
# Note: the first data point does not include any value for the optional third feature
sampleOne = [(0, 'mouse'), (1, 'black')]
sampleTwo = [(0, 'cat'), (1, 'tabby'), (2, 'mouse')]
sampleThree = [(0, 'bear'), (1, 'black'), (2, 'salmon')]
sampleDataRDD = sc.parallelize([sampleOne, sampleTwo, sampleThree])
# TODO: Replace <FILL IN> with appropriate code
sampleOHEDictManual = {}
sampleOHEDictManual[(0,'bear')] = <FILL IN>
sampleOHEDictManual[(0,'cat')] = <FILL IN>
sampleOHEDictManual[(0,'mouse')] = <FILL IN>
sampleOHEDictManual<FILL IN>
sampleOHEDictManual<FILL IN>
sampleOHEDictManual<FILL IN>
sampleOHEDictManual<FILL IN>
# A testing helper
#https://pypi.python.org/pypi/test_helper/0.2
import hashlib
class TestFailure(Exception):
pass
class PrivateTestFailure(Exception):
pass
class Test(object):
passed = 0
numTests = 0
failFast = False
private = False
@classmethod
def setFailFast(cls):
cls.failFast = True
@classmethod
def setPrivateMode(cls):
cls.private = True
@classmethod
def assertTrue(cls, result, msg=""):
cls.numTests += 1
if result == True:
cls.passed += 1
print "1 test passed."
else:
print "1 test failed. " + msg
if cls.failFast:
if cls.private:
raise PrivateTestFailure(msg)
else:
raise TestFailure(msg)
@classmethod
def assertEquals(cls, var, val, msg=""):
cls.assertTrue(var == val, msg)
@classmethod
def assertEqualsHashed(cls, var, hashed_val, msg=""):
cls.assertEquals(cls._hash(var), hashed_val, msg)
@classmethod
def printStats(cls):
print "{0} / {1} test(s) passed.".format(cls.passed, cls.numTests)
@classmethod
def _hash(cls, x):
return hashlib.sha1(str(x)).hexdigest()
# TEST One-hot-encoding (1a)
from test_helper import Test
Test.assertEqualsHashed(sampleOHEDictManual[(0,'bear')],
'b6589fc6ab0dc82cf12099d1c2d40ab994e8410c',
"incorrect value for sampleOHEDictManual[(0,'bear')]")
Test.assertEqualsHashed(sampleOHEDictManual[(0,'cat')],
'356a192b7913b04c54574d18c28d46e6395428ab',
"incorrect value for sampleOHEDictManual[(0,'cat')]")
Test.assertEqualsHashed(sampleOHEDictManual[(0,'mouse')],
'da4b9237bacccdf19c0760cab7aec4a8359010b0',
"incorrect value for sampleOHEDictManual[(0,'mouse')]")
Test.assertEqualsHashed(sampleOHEDictManual[(1,'black')],
'77de68daecd823babbb58edb1c8e14d7106e83bb',
"incorrect value for sampleOHEDictManual[(1,'black')]")
Test.assertEqualsHashed(sampleOHEDictManual[(1,'tabby')],
'1b6453892473a467d07372d45eb05abc2031647a',
"incorrect value for sampleOHEDictManual[(1,'tabby')]")
Test.assertEqualsHashed(sampleOHEDictManual[(2,'mouse')],
'ac3478d69a3c81fa62e60f5c3696165a4e5e6ac4',
"incorrect value for sampleOHEDictManual[(2,'mouse')]")
Test.assertEqualsHashed(sampleOHEDictManual[(2,'salmon')],
'c1dfd96eea8cc2b62785275bca38ac261256e278',
"incorrect value for sampleOHEDictManual[(2,'salmon')]")
Test.assertEquals(len(sampleOHEDictManual.keys()), 7,
'incorrect number of keys in sampleOHEDictManual')
Explanation: Part 1: Featurize categorical data using one-hot-encoding
(1a) One-hot-encoding
We would like to develop code to convert categorical features to numerical ones, and to build intuition, we will work with a sample unlabeled dataset with three data points, with each data point representing an animal. The first feature indicates the type of animal (bear, cat, mouse); the second feature describes the animal's color (black, tabby); and the third (optional) feature describes what the animal eats (mouse, salmon).
In a one-hot-encoding (OHE) scheme, we want to represent each tuple of (featureID, category) via its own binary feature. We can do this in Python by creating a dictionary that maps each tuple to a distinct integer, where the integer corresponds to a binary feature. To start, manually enter the entries in the OHE dictionary associated with the sample dataset by mapping the tuples to consecutive integers starting from zero, ordering the tuples first by featureID and next by category.
Later in this lab, we'll use OHE dictionaries to transform data points into compact lists of features that can be used in machine learning algorithms.
End of explanation
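For concreteness, one consistent assignment that follows the stated ordering (featureID first, then category) is sketched below; it is one possible completion of the cell above, not necessarily the graded answer, and all names come from the template.
# Sketch: consecutive integers in (featureID, category) order
sampleOHEDictManual = {}
sampleOHEDictManual[(0, 'bear')] = 0
sampleOHEDictManual[(0, 'cat')] = 1
sampleOHEDictManual[(0, 'mouse')] = 2
sampleOHEDictManual[(1, 'black')] = 3
sampleOHEDictManual[(1, 'tabby')] = 4
sampleOHEDictManual[(2, 'mouse')] = 5
sampleOHEDictManual[(2, 'salmon')] = 6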
import numpy as np
from pyspark.mllib.linalg import SparseVector
# TODO: Replace <FILL IN> with appropriate code
aDense = np.array([0., 3., 0., 4.])
aSparse = <FILL IN>
bDense = np.array([0., 0., 0., 1.])
bSparse = <FILL IN>
w = np.array([0.4, 3.1, -1.4, -.5])
print aDense.dot(w)
print aSparse.dot(w)
print bDense.dot(w)
print bSparse.dot(w)
# TEST Sparse Vectors (1b)
Test.assertTrue(isinstance(aSparse, SparseVector), 'aSparse needs to be an instance of SparseVector')
Test.assertTrue(isinstance(bSparse, SparseVector), 'aSparse needs to be an instance of SparseVector')
Test.assertTrue(aDense.dot(w) == aSparse.dot(w),
'dot product of aDense and w should equal dot product of aSparse and w')
Test.assertTrue(bDense.dot(w) == bSparse.dot(w),
'dot product of bDense and w should equal dot product of bSparse and w')
Explanation: (1b) Sparse vectors
Data points can typically be represented with a small number of non-zero OHE features relative to the total number of features that occur in the dataset. By leveraging this sparsity and using sparse vector representations of OHE data, we can reduce storage and computational burdens. Below are a few sample vectors represented as dense numpy arrays. Use SparseVector to represent them in a sparse fashion, and verify that both the sparse and dense representations yield the same results when computing dot products (we will later use MLlib to train classifiers via gradient descent, and MLlib will need to compute dot products between SparseVectors and dense parameter vectors).
Use SparseVector(size, *args) to create a new sparse vector where size is the length of the vector and args is either a dictionary, a list of (index, value) pairs, or two separate arrays of indices and values (sorted by index). You'll need to create a sparse vector representation of each dense vector aDense and bDense.
End of explanation
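A minimal sketch of the two sparse vectors, keeping only the non-zero entries (variable names reused from the cell above):
# Sketch: aDense = [0., 3., 0., 4.] has non-zeros at indices 1 and 3
aSparse = SparseVector(4, [1, 3], [3., 4.])
# Sketch: bDense = [0., 0., 0., 1.] has a single non-zero at index 3
bSparse = SparseVector(4, [3], [1.])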
# Reminder of the sample features
# sampleOne = [(0, 'mouse'), (1, 'black')]
# sampleTwo = [(0, 'cat'), (1, 'tabby'), (2, 'mouse')]
# sampleThree = [(0, 'bear'), (1, 'black'), (2, 'salmon')]
# TODO: Replace <FILL IN> with appropriate code
sampleOneOHEFeatManual = <FILL IN>
sampleTwoOHEFeatManual = <FILL IN>
sampleThreeOHEFeatManual = <FILL IN>
# TEST OHE Features as sparse vectors (1c)
Test.assertTrue(isinstance(sampleOneOHEFeatManual, SparseVector),
'sampleOneOHEFeatManual needs to be a SparseVector')
Test.assertTrue(isinstance(sampleTwoOHEFeatManual, SparseVector),
'sampleTwoOHEFeatManual needs to be a SparseVector')
Test.assertTrue(isinstance(sampleThreeOHEFeatManual, SparseVector),
'sampleThreeOHEFeatManual needs to be a SparseVector')
Test.assertEqualsHashed(sampleOneOHEFeatManual,
'ecc00223d141b7bd0913d52377cee2cf5783abd6',
'incorrect value for sampleOneOHEFeatManual')
Test.assertEqualsHashed(sampleTwoOHEFeatManual,
'26b023f4109e3b8ab32241938e2e9b9e9d62720a',
'incorrect value for sampleTwoOHEFeatManual')
Test.assertEqualsHashed(sampleThreeOHEFeatManual,
'c04134fd603ae115395b29dcabe9d0c66fbdc8a7',
'incorrect value for sampleThreeOHEFeatManual')
Explanation: (1c) OHE features as sparse vectors
Now let's see how we can represent the OHE features for points in our sample dataset. Using the mapping defined by the OHE dictionary from Part (1a), manually define OHE features for the three sample data points using SparseVector format. Any feature that occurs in a point should have the value 1.0. For example, the DenseVector for a point with features 2 and 4 would be [0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0].
End of explanation
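One possible manual encoding, based on the dictionary from Part (1a) (a sketch; names come from the template above):
# sampleOne  -> (0,'mouse')=2 and (1,'black')=3
sampleOneOHEFeatManual = SparseVector(7, [2, 3], [1.0, 1.0])
# sampleTwo  -> (0,'cat')=1, (1,'tabby')=4, (2,'mouse')=5
sampleTwoOHEFeatManual = SparseVector(7, [1, 4, 5], [1.0, 1.0, 1.0])
# sampleThree -> (0,'bear')=0, (1,'black')=3, (2,'salmon')=6
sampleThreeOHEFeatManual = SparseVector(7, [0, 3, 6], [1.0, 1.0, 1.0])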
# TODO: Replace <FILL IN> with appropriate code
def oneHotEncoding(rawFeats, OHEDict, numOHEFeats):
    """Produce a one-hot-encoding from a list of features and an OHE dictionary.
Note:
You should ensure that the indices used to create a SparseVector are sorted.
Args:
rawFeats (list of (int, str)): The features corresponding to a single observation. Each
feature consists of a tuple of featureID and the feature's value. (e.g. sampleOne)
OHEDict (dict): A mapping of (featureID, value) to unique integer.
numOHEFeats (int): The total number of unique OHE features (combinations of featureID and
value).
Returns:
SparseVector: A SparseVector of length numOHEFeats with indicies equal to the unique
identifiers for the (featureID, value) combinations that occur in the observation and
        with values equal to 1.0.
    """
<FILL IN>
# Calculate the number of features in sampleOHEDictManual
numSampleOHEFeats = <FILL IN>
# Run oneHotEnoding on sampleOne
sampleOneOHEFeat = <FILL IN>
print sampleOneOHEFeat
# TEST Define an OHE Function (1d)
Test.assertTrue(sampleOneOHEFeat == sampleOneOHEFeatManual,
'sampleOneOHEFeat should equal sampleOneOHEFeatManual')
Test.assertEquals(sampleOneOHEFeat, SparseVector(7, [2,3], [1.0,1.0]),
'incorrect value for sampleOneOHEFeat')
Test.assertEquals(oneHotEncoding([(1, 'black'), (0, 'mouse')], sampleOHEDictManual,
numSampleOHEFeats), SparseVector(7, [2,3], [1.0,1.0]),
'incorrect definition for oneHotEncoding')
Explanation: (1d) Define a OHE function
Next we will use the OHE dictionary from Part (1a) to programatically generate OHE features from the original categorical data. First write a function called oneHotEncoding that creates OHE feature vectors in SparseVector format. Then use this function to create OHE features for the first sample data point and verify that the result matches the result from Part (1c).
End of explanation
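One possible completion of the cell above (a sketch, not necessarily the reference solution; all names come from the template):
def oneHotEncoding(rawFeats, OHEDict, numOHEFeats):
    """Sketch: look up each (featureID, value) pair and set that index to 1.0."""
    indices = sorted(OHEDict[feat] for feat in rawFeats)
    return SparseVector(numOHEFeats, indices, [1.0] * len(indices))

numSampleOHEFeats = len(sampleOHEDictManual)
sampleOneOHEFeat = oneHotEncoding(sampleOne, sampleOHEDictManual, numSampleOHEFeats)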
# TODO: Replace <FILL IN> with appropriate code
sampleOHEData = sampleDataRDD.<FILL IN>
print sampleOHEData.collect()
# TEST Apply OHE to a dataset (1e)
sampleOHEDataValues = sampleOHEData.collect()
Test.assertTrue(len(sampleOHEDataValues) == 3, 'sampleOHEData should have three elements')
Test.assertEquals(sampleOHEDataValues[0], SparseVector(7, {2: 1.0, 3: 1.0}),
'incorrect OHE for first sample')
Test.assertEquals(sampleOHEDataValues[1], SparseVector(7, {1: 1.0, 4: 1.0, 5: 1.0}),
'incorrect OHE for second sample')
Test.assertEquals(sampleOHEDataValues[2], SparseVector(7, {0: 1.0, 3: 1.0, 6: 1.0}),
'incorrect OHE for third sample')
Explanation: (1e) Apply OHE to a dataset
Finally, use the function from Part (1d) to create OHE features for all 3 data points in the sample dataset.
End of explanation
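A one-line sketch of the map, reusing the function and dictionary defined above (one possible completion, not the official solution):
sampleOHEData = sampleDataRDD.map(lambda feats: oneHotEncoding(feats, sampleOHEDictManual, numSampleOHEFeats))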
# TODO: Replace <FILL IN> with appropriate code
sampleDistinctFeats = (sampleDataRDD
<FILL IN>)
# TEST Pair RDD of (featureID, category) (2a)
Test.assertEquals(sorted(sampleDistinctFeats.collect()),
[(0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'),
(1, 'tabby'), (2, 'mouse'), (2, 'salmon')],
'incorrect value for sampleDistinctFeats')
Explanation: Part 2: Construct an OHE dictionary
(2a) Pair RDD of (featureID, category)
To start, create an RDD of distinct (featureID, category) tuples. In our sample dataset, the 7 items in the resulting RDD are (0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'), (1, 'tabby'), (2, 'mouse'), (2, 'salmon'). Notably 'black' appears twice in the dataset but only contributes one item to the RDD: (1, 'black'), while 'mouse' also appears twice and contributes two items: (0, 'mouse') and (2, 'mouse'). Use flatMap and distinct.
End of explanation
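A possible completion of the cell above (a sketch using the flatMap and distinct transformations named in the instructions):
sampleDistinctFeats = (sampleDataRDD
                       .flatMap(lambda feats: feats)
                       .distinct())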
# TODO: Replace <FILL IN> with appropriate code
sampleOHEDict = (sampleDistinctFeats
<FILL IN>)
print sampleOHEDict
# TEST OHE Dictionary from distinct features (2b)
Test.assertEquals(sorted(sampleOHEDict.keys()),
[(0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'),
(1, 'tabby'), (2, 'mouse'), (2, 'salmon')],
'sampleOHEDict has unexpected keys')
Test.assertEquals(sorted(sampleOHEDict.values()), range(7), 'sampleOHEDict has unexpected values')
Explanation: (2b) OHE Dictionary from distinct features
Next, create an RDD of key-value tuples, where each (featureID, category) tuple in sampleDistinctFeats is a key and the values are distinct integers ranging from 0 to (number of keys - 1). Then convert this RDD into a dictionary, which can be done using the collectAsMap action. Note that there is no unique mapping from keys to values, as all we require is that each (featureID, category) key be mapped to a unique integer between 0 and the number of keys. In this exercise, any valid mapping is acceptable. Use zipWithIndex followed by collectAsMap.
In our sample dataset, one valid list of key-value tuples is: [((0, 'bear'), 0), ((2, 'salmon'), 1), ((1, 'tabby'), 2), ((2, 'mouse'), 3), ((0, 'mouse'), 4), ((0, 'cat'), 5), ((1, 'black'), 6)]. The dictionary defined in Part (1a) illustrates another valid mapping between keys and integers.
End of explanation
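A possible completion (a sketch using zipWithIndex followed by collectAsMap, as suggested above):
sampleOHEDict = (sampleDistinctFeats
                 .zipWithIndex()
                 .collectAsMap())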
# TODO: Replace <FILL IN> with appropriate code
def createOneHotDict(inputData):
    """Creates a one-hot-encoder dictionary based on the input data.
Args:
inputData (RDD of lists of (int, str)): An RDD of observations where each observation is
made up of a list of (featureID, value) tuples.
Returns:
dict: A dictionary where the keys are (featureID, value) tuples and map to values that are
            unique integers.
    """
<FILL IN>
sampleOHEDictAuto = <FILL IN>
print sampleOHEDictAuto
# TEST Automated creation of an OHE dictionary (2c)
Test.assertEquals(sorted(sampleOHEDictAuto.keys()),
[(0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'),
(1, 'tabby'), (2, 'mouse'), (2, 'salmon')],
'sampleOHEDictAuto has unexpected keys')
Test.assertEquals(sorted(sampleOHEDictAuto.values()), range(7),
'sampleOHEDictAuto has unexpected values')
Explanation: (2c) Automated creation of an OHE dictionary
Now use the code from Parts (2a) and (2b) to write a function that takes an input dataset and outputs an OHE dictionary. Then use this function to create an OHE dictionary for the sample dataset, and verify that it matches the dictionary from Part (2b).
End of explanation
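One possible implementation, combining Parts (2a) and (2b) (a sketch, not necessarily the reference solution):
def createOneHotDict(inputData):
    """Sketch: distinct (featureID, value) pairs, each zipped with a unique index."""
    return (inputData
            .flatMap(lambda feats: feats)
            .distinct()
            .zipWithIndex()
            .collectAsMap())

sampleOHEDictAuto = createOneHotDict(sampleDataRDD)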
# Run this code to view Criteo's agreement
from IPython.lib.display import IFrame
IFrame("http://labs.criteo.com/downloads/2014-kaggle-display-advertising-challenge-dataset/",
600, 350)
# TODO: Replace <FILL IN> with appropriate code
# Just replace <FILL IN> with the url for dac_sample.tar.gz
import glob
import os.path
import tarfile
import urllib
import urlparse
# Paste url, url should end with: dac_sample.tar.gz
url = '<FILL IN>'
url = url.strip()
baseDir = os.path.join('data')
inputPath = os.path.join('w261', 'dac_sample.txt')
fileName = os.path.join(baseDir, inputPath)
inputDir = os.path.split(fileName)[0]
def extractTar(check = False):
# Find the zipped archive and extract the dataset
tars = glob.glob('dac_sample*.tar.gz*')
if check and len(tars) == 0:
return False
if len(tars) > 0:
try:
tarFile = tarfile.open(tars[0])
except tarfile.ReadError:
if not check:
print 'Unable to open tar.gz file. Check your URL.'
return False
tarFile.extract('dac_sample.txt', path=inputDir)
print 'Successfully extracted: dac_sample.txt'
return True
else:
print 'You need to retry the download with the correct url.'
print ('Alternatively, you can upload the dac_sample.tar.gz file to your Jupyter root ' +
'directory')
return False
if os.path.isfile(fileName):
print 'File is already available. Nothing to do.'
elif extractTar(check = True):
print 'tar.gz file was already available.'
elif not url.endswith('dac_sample.tar.gz'):
print 'Check your download url. Are you downloading the Sample dataset?'
else:
# Download the file and store it in the same directory as this notebook
try:
urllib.urlretrieve(url, os.path.basename(urlparse.urlsplit(url).path))
except IOError:
print 'Unable to download and store: {0}'.format(url)
extractTar()
import os.path
baseDir = os.path.join('data')
inputPath = os.path.join('w261', 'dac_sample.txt')
fileName = os.path.join(baseDir, inputPath)
if os.path.isfile(fileName):
rawData = (sc
.textFile(fileName, 2)
.map(lambda x: x.replace('\t', ','))) # work with either ',' or '\t' separated data
print rawData.take(1)
Explanation: Part 3: Parse CTR data and generate OHE features
Before we can proceed, you'll first need to obtain the data from Criteo. If you have already completed this step in the setup lab, just run the cells below and the data will be loaded into the rawData variable.
Below is Criteo's data sharing agreement. After you accept the agreement, you can obtain the download URL by right-clicking on the "Download Sample" button and clicking "Copy link address" or "Copy Link Location", depending on your browser. Paste the URL into the # TODO cell below. The file is 8.4 MB compressed. The script below will download the file to the virtual machine (VM) and then extract the data.
If running the cell below does not render a webpage, open the Criteo agreement in a separate browser tab. After you accept the agreement, you can obtain the download URL by right-clicking on the "Download Sample" button and clicking "Copy link address" or "Copy Link Location", depending on your browser. Paste the URL into the # TODO cell below.
Note that the download could take a few minutes, depending upon your connection speed.
The Criteo CTR data for HW12.1 is available here (24.3 Meg, 100,000 Rows):
https://www.dropbox.com/s/m4jlnv6rdbqzzhu/dac_sample.txt?dl=0
Alternatively you can download the sample data directly by following the instructions contained in the cell below (8M compressed).
End of explanation
# TODO: Replace <FILL IN> with appropriate code
weights = [.8, .1, .1]
seed = 42
# Use randomSplit with weights and seed
rawTrainData, rawValidationData, rawTestData = rawData.<FILL IN>
# Cache the data
<FILL IN>
nTrain = <FILL IN>
nVal = <FILL IN>
nTest = <FILL IN>
print nTrain, nVal, nTest, nTrain + nVal + nTest
print rawData.take(1)
# TEST Loading and splitting the data (3a)
Test.assertTrue(all([rawTrainData.is_cached, rawValidationData.is_cached, rawTestData.is_cached]),
'you must cache the split data')
Test.assertEquals(nTrain, 79911, 'incorrect value for nTrain')
Test.assertEquals(nVal, 10075, 'incorrect value for nVal')
Test.assertEquals(nTest, 10014, 'incorrect value for nTest')
Explanation: (3a) Loading and splitting the data
We are now ready to start working with the actual CTR data, and our first task involves splitting it into training, validation, and test sets. Use the randomSplit method with the specified weights and seed to create RDDs storing each of these datasets, and then cache each of these RDDs, as we will be accessing them multiple times in the remainder of this lab. Finally, compute the size of each dataset.
End of explanation
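A sketch of how the split could be written (one possible completion; the weights and seed variables are defined in the template cell above):
rawTrainData, rawValidationData, rawTestData = rawData.randomSplit(weights, seed)
rawTrainData.cache()
rawValidationData.cache()
rawTestData.cache()
nTrain = rawTrainData.count()
nVal = rawValidationData.count()
nTest = rawTestData.count()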
# TODO: Replace <FILL IN> with appropriate code
def parsePoint(point):
    """Converts a comma separated string into a list of (featureID, value) tuples.
Note:
featureIDs should start at 0 and increase to the number of features - 1.
Args:
point (str): A comma separated string where the first value is the label and the rest
are features.
Returns:
        list: A list of (featureID, value) tuples.
    """
<FILL IN>
parsedTrainFeat = rawTrainData.map(parsePoint)
numCategories = (parsedTrainFeat
.flatMap(lambda x: x)
.distinct()
.map(lambda x: (x[0], 1))
.reduceByKey(lambda x, y: x + y)
.sortByKey()
.collect())
print numCategories[2][1]
# TEST Extract features (3b)
Test.assertEquals(numCategories[2][1], 855, 'incorrect implementation of parsePoint')
Test.assertEquals(numCategories[32][1], 4, 'incorrect implementation of parsePoint')
Explanation: (3b) Extract features
We will now parse the raw training data to create an RDD that we can subsequently use to create an OHE dictionary. Note from the take() command in Part (3a) that each raw data point is a string containing several fields separated by some delimiter. For now, we will ignore the first field (which is the 0-1 label), and parse the remaining fields (or raw features). To do this, complete the implementation of the parsePoint function.
End of explanation
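One possible parsePoint implementation (a sketch, not necessarily the reference solution):
def parsePoint(point):
    """Sketch: drop the label (first field) and pair each remaining field with its position."""
    fields = point.split(',')
    return [(i, value) for i, value in enumerate(fields[1:])]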
# TODO: Replace <FILL IN> with appropriate code
ctrOHEDict = <FILL IN>
numCtrOHEFeats = len(ctrOHEDict.keys())
print numCtrOHEFeats
print ctrOHEDict[(0, '')]
# TEST Create an OHE dictionary from the dataset (3c)
Test.assertEquals(numCtrOHEFeats, 233286, 'incorrect number of features in ctrOHEDict')
Test.assertTrue((0, '') in ctrOHEDict, 'incorrect features in ctrOHEDict')
Explanation: (3c) Create an OHE dictionary from the dataset
Note that parsePoint returns a data point as a list of (featureID, category) tuples, which is the same format as the sample dataset studied in Parts 1 and 2 of this lab. Using this observation, create an OHE dictionary using the function implemented in Part (2c). Note that we will assume for simplicity that all features in our CTR dataset are categorical.
End of explanation
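A one-line sketch, reusing the helper from Part (2c):
ctrOHEDict = createOneHotDict(parsedTrainFeat)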
from pyspark.mllib.regression import LabeledPoint
# TODO: Replace <FILL IN> with appropriate code
def parseOHEPoint(point, OHEDict, numOHEFeats):
    """Obtain the label and feature vector for this raw observation.
Note:
You must use the function `oneHotEncoding` in this implementation or later portions
of this lab may not function as expected.
Args:
point (str): A comma separated string where the first value is the label and the rest
are features.
OHEDict (dict of (int, str) to int): Mapping of (featureID, value) to unique integer.
numOHEFeats (int): The number of unique features in the training dataset.
Returns:
LabeledPoint: Contains the label for the observation and the one-hot-encoding of the
            raw features based on the provided OHE dictionary.
    """
<FILL IN>
OHETrainData = rawTrainData.map(lambda point: parseOHEPoint(point, ctrOHEDict, numCtrOHEFeats))
OHETrainData.cache()
print OHETrainData.take(1)
# Check that oneHotEncoding function was used in parseOHEPoint
backupOneHot = oneHotEncoding
oneHotEncoding = None
withOneHot = False
try: parseOHEPoint(rawTrainData.take(1)[0], ctrOHEDict, numCtrOHEFeats)
except TypeError: withOneHot = True
oneHotEncoding = backupOneHot
# TEST Apply OHE to the dataset (3d)
numNZ = sum(parsedTrainFeat.map(lambda x: len(x)).take(5))
numNZAlt = sum(OHETrainData.map(lambda lp: len(lp.features.indices)).take(5))
Test.assertEquals(numNZ, numNZAlt, 'incorrect implementation of parseOHEPoint')
Test.assertTrue(withOneHot, 'oneHotEncoding not present in parseOHEPoint')
Explanation: (3d) Apply OHE to the dataset
Now let's use this OHE dictionary by starting with the raw training data and creating an RDD of LabeledPoint objects using OHE features. To do this, complete the implementation of the parseOHEPoint function. Hint: parseOHEPoint is an extension of the parsePoint function from Part (3b) and it uses the oneHotEncoding function from Part (1d).
End of explanation
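One possible parseOHEPoint implementation (a sketch that reuses the parsing logic from Part (3b) and the oneHotEncoding function; not necessarily the reference solution):
def parseOHEPoint(point, OHEDict, numOHEFeats):
    """Sketch: parse the label and raw features, then one-hot-encode the features."""
    fields = point.split(',')
    label = float(fields[0])
    rawFeats = [(i, value) for i, value in enumerate(fields[1:])]
    return LabeledPoint(label, oneHotEncoding(rawFeats, OHEDict, numOHEFeats))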
def bucketFeatByCount(featCount):
    """Bucket the counts by powers of two."""
for i in range(11):
size = 2 ** i
if featCount <= size:
return size
return -1
featCounts = (OHETrainData
.flatMap(lambda lp: lp.features.indices)
.map(lambda x: (x, 1))
.reduceByKey(lambda x, y: x + y))
featCountsBuckets = (featCounts
.map(lambda x: (bucketFeatByCount(x[1]), 1))
.filter(lambda (k, v): k != -1)
.reduceByKey(lambda x, y: x + y)
.collect())
print featCountsBuckets
import matplotlib.pyplot as plt
x, y = zip(*featCountsBuckets)
x, y = np.log(x), np.log(y)
def preparePlot(xticks, yticks, figsize=(10.5, 6), hideLabels=False, gridColor='#999999',
gridWidth=1.0):
    """Template for generating the plot layout."""
plt.close()
fig, ax = plt.subplots(figsize=figsize, facecolor='white', edgecolor='white')
ax.axes.tick_params(labelcolor='#999999', labelsize='10')
for axis, ticks in [(ax.get_xaxis(), xticks), (ax.get_yaxis(), yticks)]:
axis.set_ticks_position('none')
axis.set_ticks(ticks)
axis.label.set_color('#999999')
if hideLabels: axis.set_ticklabels([])
plt.grid(color=gridColor, linewidth=gridWidth, linestyle='-')
map(lambda position: ax.spines[position].set_visible(False), ['bottom', 'top', 'left', 'right'])
return fig, ax
# generate layout and plot data
fig, ax = preparePlot(np.arange(0, 10, 1), np.arange(4, 14, 2))
ax.set_xlabel(r'$\log_e(bucketSize)$'), ax.set_ylabel(r'$\log_e(countInBucket)$')
plt.scatter(x, y, s=14**2, c='#d6ebf2', edgecolors='#8cbfd0', alpha=0.75)
pass
Explanation: Visualization 1: Feature frequency
We will now visualize the number of times each of the 233,286 OHE features appears in the training data. We first compute the number of times each feature appears, then bucket the features by these counts. The buckets are sized by powers of 2, so the first bucket corresponds to features that appear exactly once ( $ \scriptsize 2^0 $ ), the second to features that appear twice ( $ \scriptsize 2^1 $ ), the third to features that occur between three and four ( $ \scriptsize 2^2 $ ) times, the fifth bucket is five to eight ( $ \scriptsize 2^3 $ ) times and so on. The scatter plot below shows the logarithm of the bucket thresholds versus the logarithm of the number of features that have counts that fall in the buckets.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
def oneHotEncoding(rawFeats, OHEDict, numOHEFeats):
    """Produce a one-hot-encoding from a list of features and an OHE dictionary.
Note:
If a (featureID, value) tuple doesn't have a corresponding key in OHEDict it should be
ignored.
Args:
rawFeats (list of (int, str)): The features corresponding to a single observation. Each
feature consists of a tuple of featureID and the feature's value. (e.g. sampleOne)
OHEDict (dict): A mapping of (featureID, value) to unique integer.
numOHEFeats (int): The total number of unique OHE features (combinations of featureID and
value).
Returns:
SparseVector: A SparseVector of length numOHEFeats with indicies equal to the unique
identifiers for the (featureID, value) combinations that occur in the observation and
        with values equal to 1.0.
    """
<FILL IN>
OHEValidationData = rawValidationData.map(lambda point: parseOHEPoint(point, ctrOHEDict, numCtrOHEFeats))
OHEValidationData.cache()
print OHEValidationData.take(1)
# TEST Handling unseen features (3e)
numNZVal = (OHEValidationData
.map(lambda lp: len(lp.features.indices))
.sum())
Test.assertEquals(numNZVal, 372080, 'incorrect number of features')
Explanation: (3e) Handling unseen features
We naturally would like to repeat the process from Part (3d), e.g., to compute OHE features for the validation and test datasets. However, we must be careful, as some categorical values will likely appear in new data that did not exist in the training data. To deal with this situation, update the oneHotEncoding() function from Part (1d) to ignore previously unseen categories, and then compute OHE features for the validation data.
End of explanation
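A sketch of the updated encoder (identical to Part (1d) except that unseen (featureID, value) pairs are skipped; one possible completion, not the official solution):
def oneHotEncoding(rawFeats, OHEDict, numOHEFeats):
    """Sketch: same as Part (1d), but silently skip pairs that are not in OHEDict."""
    indices = sorted(OHEDict[feat] for feat in rawFeats if feat in OHEDict)
    return SparseVector(numOHEFeats, indices, [1.0] * len(indices))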
from pyspark.mllib.classification import LogisticRegressionWithSGD
# fixed hyperparameters
numIters = 50
stepSize = 10.
regParam = 1e-6
regType = 'l2'
includeIntercept = True
# TODO: Replace <FILL IN> with appropriate code
model0 = <FILL IN>
sortedWeights = sorted(model0.weights)
print sortedWeights[:5], model0.intercept
# TEST Logistic regression (4a)
Test.assertTrue(np.allclose(model0.intercept, 0.56455084025), 'incorrect value for model0.intercept')
Test.assertTrue(np.allclose(sortedWeights[0:5],
[-0.45899236853575609, -0.37973707648623956, -0.36996558266753304,
-0.36934962879928263, -0.32697945415010637]), 'incorrect value for model0.weights')
Explanation: Part 4: CTR prediction and logloss evaluation
(4a) Logistic regression
We are now ready to train our first CTR classifier. A natural classifier to use in this setting is logistic regression, since it models the probability of a click-through event rather than returning a binary response, and when working with rare events, probabilistic predictions are useful. First use LogisticRegressionWithSGD to train a model using OHETrainData with the given hyperparameter configuration. LogisticRegressionWithSGD returns a LogisticRegressionModel. Next, use the LogisticRegressionModel.weights and LogisticRegressionModel.intercept attributes to print out the model's parameters. Note that these are the names of the object's attributes and should be called using a syntax like model.weights for a given model.
End of explanation
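A sketch of the training call (the keyword arguments mirror the grid-search call that appears later in this lab; the hyperparameter names come from the cell above):
model0 = LogisticRegressionWithSGD.train(OHETrainData, numIters, stepSize,
                                         regParam=regParam, regType=regType,
                                         intercept=includeIntercept)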
# TODO: Replace <FILL IN> with appropriate code
from math import log
def computeLogLoss(p, y):
    """Calculates the value of log loss for a given probability and label.
Note:
log(0) is undefined, so when p is 0 we need to add a small value (epsilon) to it
and when p is 1 we need to subtract a small value (epsilon) from it.
Args:
        p (float): A probability between 0 and 1.
y (int): A label. Takes on the values 0 and 1.
Returns:
        float: The log loss value.
    """
epsilon = 10e-12
<FILL IN>
print computeLogLoss(.5, 1)
print computeLogLoss(.5, 0)
print computeLogLoss(.99, 1)
print computeLogLoss(.99, 0)
print computeLogLoss(.01, 1)
print computeLogLoss(.01, 0)
print computeLogLoss(0, 1)
print computeLogLoss(1, 1)
print computeLogLoss(1, 0)
# TEST Log loss (4b)
Test.assertTrue(np.allclose([computeLogLoss(.5, 1), computeLogLoss(.01, 0), computeLogLoss(.01, 1)],
[0.69314718056, 0.0100503358535, 4.60517018599]),
'computeLogLoss is not correct')
Test.assertTrue(np.allclose([computeLogLoss(0, 1), computeLogLoss(1, 1), computeLogLoss(1, 0)],
[25.3284360229, 1.00000008275e-11, 25.3284360229]),
'computeLogLoss needs to bound p away from 0 and 1 by epsilon')
Explanation: (4b) Log loss
Throughout this lab, we will use log loss to evaluate the quality of models. Log loss is defined as: $$ \begin{align} \scriptsize \ell_{log}(p, y) = \begin{cases} -\log (p) & \text{if } y = 1 \\ -\log(1-p) & \text{if } y = 0 \end{cases} \end{align} $$ where $ \scriptsize p$ is a probability between 0 and 1 and $ \scriptsize y$ is a label of either 0 or 1. Log loss is a standard evaluation criterion when predicting rare-events such as click-through rate prediction (it is also the criterion used in the Criteo Kaggle competition). Write a function to compute log loss, and evaluate it on some sample inputs.
End of explanation
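One possible computeLogLoss implementation (a sketch that bounds p away from 0 and 1 by epsilon, as the template's note requires):
def computeLogLoss(p, y):
    """Sketch: bound p away from 0 and 1 by epsilon, then apply the piecewise definition."""
    epsilon = 10e-12
    if p == 0:
        p += epsilon
    elif p == 1:
        p -= epsilon
    return -log(p) if y == 1 else -log(1 - p)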
# TODO: Replace <FILL IN> with appropriate code
# Note that our dataset has a very high click-through rate by design
# In practice click-through rate can be one to two orders of magnitude lower
classOneFracTrain = <FILL IN>
print classOneFracTrain
logLossTrBase = <FILL IN>
print 'Baseline Train Logloss = {0:.3f}\n'.format(logLossTrBase)
# TEST Baseline log loss (4c)
Test.assertTrue(np.allclose(classOneFracTrain, 0.22717773523), 'incorrect value for classOneFracTrain')
Test.assertTrue(np.allclose(logLossTrBase, 0.535844), 'incorrect value for logLossTrBase')
Explanation: (4c) Baseline log loss
Next we will use the function we wrote in Part (4b) to compute the baseline log loss on the training data. A very simple yet natural baseline model is one where we always make the same prediction independent of the given datapoint, setting the predicted value equal to the fraction of training points that correspond to click-through events (i.e., where the label is one). Compute this value (which is simply the mean of the training labels), and then use it to compute the training log loss for the baseline model. The log loss for multiple observations is the mean of the individual log loss values.
End of explanation
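A sketch of the baseline computation (the fraction of positive training labels used as a constant prediction):
classOneFracTrain = OHETrainData.map(lambda lp: lp.label).mean()
logLossTrBase = (OHETrainData
                 .map(lambda lp: computeLogLoss(classOneFracTrain, lp.label))
                 .mean())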
# TODO: Replace <FILL IN> with appropriate code
from math import exp # exp(-t) = e^-t
def getP(x, w, intercept):
    """Calculate the probability for an observation given a set of weights and intercept.
Note:
We'll bound our raw prediction between 20 and -20 for numerical purposes.
Args:
x (SparseVector): A vector with values of 1.0 for features that exist in this
observation and 0.0 otherwise.
w (DenseVector): A vector of weights (betas) for the model.
intercept (float): The model's intercept.
Returns:
        float: A probability between 0 and 1.
    """
rawPrediction = <FILL IN>
# Bound the raw prediction value
rawPrediction = min(rawPrediction, 20)
rawPrediction = max(rawPrediction, -20)
return <FILL IN>
trainingPredictions = <FILL IN>
print trainingPredictions.take(5)
# TEST Predicted probability (4d)
Test.assertTrue(np.allclose(trainingPredictions.sum(), 18135.4834348),
'incorrect value for trainingPredictions')
Explanation: (4d) Predicted probability
In order to compute the log loss for the model we trained in Part (4a), we need to write code to generate predictions from this model. Write a function that computes the raw linear prediction from this logistic regression model and then passes it through a sigmoid function $ \scriptsize \sigma(t) = (1+ e^{-t})^{-1} $ to return the model's probabilistic prediction. Then compute probabilistic predictions on the training data.
Note that when incorporating an intercept into our predictions, we simply add the intercept to the value of the prediction obtained from the weights and features. Alternatively, if the intercept was included as the first weight, we would need to add a corresponding feature to our data where the feature has the value one. This is not the case here.
End of explanation
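One possible getP implementation (a sketch: the raw linear prediction plus the intercept, bounded as in the template, then passed through the sigmoid):
def getP(x, w, intercept):
    """Sketch: raw linear prediction plus intercept, bounded to [-20, 20], then the sigmoid."""
    rawPrediction = x.dot(w) + intercept
    rawPrediction = min(rawPrediction, 20)
    rawPrediction = max(rawPrediction, -20)
    return 1.0 / (1.0 + exp(-rawPrediction))

trainingPredictions = OHETrainData.map(lambda lp: getP(lp.features, model0.weights, model0.intercept))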
# TODO: Replace <FILL IN> with appropriate code
def evaluateResults(model, data):
    """Calculates the log loss for the data given the model.
Args:
model (LogisticRegressionModel): A trained logistic regression model.
data (RDD of LabeledPoint): Labels and features for each observation.
Returns:
        float: Log loss for the data.
    """
<FILL IN>
logLossTrLR0 = evaluateResults(model0, OHETrainData)
print ('OHE Features Train Logloss:\n\tBaseline = {0:.3f}\n\tLogReg = {1:.3f}'
.format(logLossTrBase, logLossTrLR0))
# TEST Evaluate the model (4e)
Test.assertTrue(np.allclose(logLossTrLR0, 0.456903), 'incorrect value for logLossTrLR0')
Explanation: (4e) Evaluate the model
We are now ready to evaluate the quality of the model we trained in Part (4a). To do this, first write a general function that takes as input a model and data, and outputs the log loss. Then run this function on the OHE training data, and compare the result with the baseline log loss.
End of explanation
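A sketch of evaluateResults (one possible completion: the mean log loss of the model's probabilistic predictions over the data):
def evaluateResults(model, data):
    """Sketch: mean log loss of the model's probabilistic predictions."""
    return (data
            .map(lambda lp: computeLogLoss(getP(lp.features, model.weights, model.intercept), lp.label))
            .mean())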
# TODO: Replace <FILL IN> with appropriate code
logLossValBase = <FILL IN>
logLossValLR0 = <FILL IN>
print ('OHE Features Validation Logloss:\n\tBaseline = {0:.3f}\n\tLogReg = {1:.3f}'
.format(logLossValBase, logLossValLR0))
# TEST Validation log loss (4f)
Test.assertTrue(np.allclose(logLossValBase, 0.527603), 'incorrect value for logLossValBase')
Test.assertTrue(np.allclose(logLossValLR0, 0.456957), 'incorrect value for logLossValLR0')
Explanation: (4f) Validation log loss
Next, following the same logic as in Parts (4c) and 4(e), compute the validation log loss for both the baseline and logistic regression models. Notably, the baseline model for the validation data should still be based on the label fraction from the training dataset.
End of explanation
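A sketch of the validation-set computation (the baseline still uses the training label fraction, as the instructions require):
logLossValBase = (OHEValidationData
                  .map(lambda lp: computeLogLoss(classOneFracTrain, lp.label))
                  .mean())
logLossValLR0 = evaluateResults(model0, OHEValidationData)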
labelsAndScores = OHEValidationData.map(lambda lp:
(lp.label, getP(lp.features, model0.weights, model0.intercept)))
labelsAndWeights = labelsAndScores.collect()
labelsAndWeights.sort(key=lambda (k, v): v, reverse=True)
labelsByWeight = np.array([k for (k, v) in labelsAndWeights])
length = labelsByWeight.size
truePositives = labelsByWeight.cumsum()
numPositive = truePositives[-1]
falsePositives = np.arange(1.0, length + 1, 1.) - truePositives
truePositiveRate = truePositives / numPositive
falsePositiveRate = falsePositives / (length - numPositive)
# Generate layout and plot data
fig, ax = preparePlot(np.arange(0., 1.1, 0.1), np.arange(0., 1.1, 0.1))
ax.set_xlim(-.05, 1.05), ax.set_ylim(-.05, 1.05)
ax.set_ylabel('True Positive Rate (Sensitivity)')
ax.set_xlabel('False Positive Rate (1 - Specificity)')
plt.plot(falsePositiveRate, truePositiveRate, color='#8cbfd0', linestyle='-', linewidth=3.)
plt.plot((0., 1.), (0., 1.), linestyle='--', color='#d6ebf2', linewidth=2.) # Baseline model
pass
Explanation: Visualization 2: ROC curve
We will now visualize how well the model predicts our target. To do this we generate a plot of the ROC curve. The ROC curve shows us the trade-off between the false positive rate and true positive rate, as we liberalize the threshold required to predict a positive outcome. A random model is represented by the dashed line.
End of explanation
from collections import defaultdict
import hashlib
def hashFunction(numBuckets, rawFeats, printMapping=False):
    """Calculate a feature dictionary for an observation's features based on hashing.
Note:
Use printMapping=True for debug purposes and to better understand how the hashing works.
Args:
numBuckets (int): Number of buckets to use as features.
rawFeats (list of (int, str)): A list of features for an observation. Represented as
(featureID, value) tuples.
printMapping (bool, optional): If true, the mappings of featureString to index will be
printed.
Returns:
dict of int to float: The keys will be integers which represent the buckets that the
features have been hashed to. The value for a given key will contain the count of the
            (featureID, value) tuples that have hashed to that key.
    """
mapping = {}
for ind, category in rawFeats:
featureString = category + str(ind)
mapping[featureString] = int(int(hashlib.md5(featureString).hexdigest(), 16) % numBuckets)
if(printMapping): print mapping
sparseFeatures = defaultdict(float)
for bucket in mapping.values():
sparseFeatures[bucket] += 1.0
return dict(sparseFeatures)
# Reminder of the sample values:
# sampleOne = [(0, 'mouse'), (1, 'black')]
# sampleTwo = [(0, 'cat'), (1, 'tabby'), (2, 'mouse')]
# sampleThree = [(0, 'bear'), (1, 'black'), (2, 'salmon')]
# TODO: Replace <FILL IN> with appropriate code
# Use four buckets
sampOneFourBuckets = hashFunction(<FILL IN>, sampleOne, True)
sampTwoFourBuckets = hashFunction(<FILL IN>, sampleTwo, True)
sampThreeFourBuckets = hashFunction(<FILL IN>, sampleThree, True)
# Use one hundred buckets
sampOneHundredBuckets = hashFunction(<FILL IN>, sampleOne, True)
sampTwoHundredBuckets = hashFunction(<FILL IN>, sampleTwo, True)
sampThreeHundredBuckets = hashFunction(<FILL IN>, sampleThree, True)
print '\t\t 4 Buckets \t\t\t 100 Buckets'
print 'SampleOne:\t {0}\t\t {1}'.format(sampOneFourBuckets, sampOneHundredBuckets)
print 'SampleTwo:\t {0}\t\t {1}'.format(sampTwoFourBuckets, sampTwoHundredBuckets)
print 'SampleThree:\t {0}\t {1}'.format(sampThreeFourBuckets, sampThreeHundredBuckets)
# TEST Hash function (5a)
Test.assertEquals(sampOneFourBuckets, {2: 1.0, 3: 1.0}, 'incorrect value for sampOneFourBuckets')
Test.assertEquals(sampThreeHundredBuckets, {72: 1.0, 5: 1.0, 14: 1.0},
'incorrect value for sampThreeHundredBuckets')
Explanation: Part 5: Reduce feature dimension via feature hashing
(5a) Hash function
As we just saw, using a one-hot-encoding featurization can yield a model with good statistical accuracy. However, the number of distinct categories across all features is quite large -- recall that we observed 233K categories in the training data in Part (3c). Moreover, the full Kaggle training dataset includes more than 33M distinct categories, and the Kaggle dataset itself is just a small subset of Criteo's labeled data. Hence, featurizing via a one-hot-encoding representation would lead to a very large feature vector. To reduce the dimensionality of the feature space, we will use feature hashing.
Below is the hash function that we will use for this part of the lab. We will first use this hash function with the three sample data points from Part (1a) to gain some intuition. Specifically, run code to hash the three sample points using two different values for numBuckets and observe the resulting hashed feature dictionaries.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
def parseHashPoint(point, numBuckets):
    """Create a LabeledPoint for this observation using hashing.
Args:
point (str): A comma separated string where the first value is the label and the rest are
features.
numBuckets: The number of buckets to hash to.
Returns:
LabeledPoint: A LabeledPoint with a label (0.0 or 1.0) and a SparseVector of hashed
            features.
    """
<FILL IN>
numBucketsCTR = 2 ** 15
hashTrainData = <FILL IN>
hashTrainData.cache()
hashValidationData = <FILL IN>
hashValidationData.cache()
hashTestData = <FILL IN>
hashTestData.cache()
print hashTrainData.take(1)
# TEST Creating hashed features (5b)
hashTrainDataFeatureSum = sum(hashTrainData
.map(lambda lp: len(lp.features.indices))
.take(20))
hashTrainDataLabelSum = sum(hashTrainData
.map(lambda lp: lp.label)
.take(100))
hashValidationDataFeatureSum = sum(hashValidationData
.map(lambda lp: len(lp.features.indices))
.take(20))
hashValidationDataLabelSum = sum(hashValidationData
.map(lambda lp: lp.label)
.take(100))
hashTestDataFeatureSum = sum(hashTestData
.map(lambda lp: len(lp.features.indices))
.take(20))
hashTestDataLabelSum = sum(hashTestData
.map(lambda lp: lp.label)
.take(100))
Test.assertEquals(hashTrainDataFeatureSum, 772, 'incorrect number of features in hashTrainData')
Test.assertEquals(hashTrainDataLabelSum, 24.0, 'incorrect labels in hashTrainData')
Test.assertEquals(hashValidationDataFeatureSum, 776,
'incorrect number of features in hashValidationData')
Test.assertEquals(hashValidationDataLabelSum, 16.0, 'incorrect labels in hashValidationData')
Test.assertEquals(hashTestDataFeatureSum, 774, 'incorrect number of features in hashTestData')
Test.assertEquals(hashTestDataLabelSum, 23.0, 'incorrect labels in hashTestData')
Explanation: (5b) Creating hashed features
Next we will use this hash function to create hashed features for our CTR datasets. First write a function that uses the hash function from Part (5a) with numBuckets = $ \scriptsize 2^{15} \approx 33K $ to create a LabeledPoint with hashed features stored as a SparseVector. Then use this function to create new training, validation and test datasets with hashed features. Hint: parsedHashPoint is similar to parseOHEPoint from Part (3d).
End of explanation
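One possible parseHashPoint implementation (a sketch: like parseOHEPoint, but the raw features are hashed with hashFunction and the resulting bucket-count dictionary is passed to SparseVector):
def parseHashPoint(point, numBuckets):
    """Sketch: parse label and raw features, then hash the features into numBuckets buckets."""
    fields = point.split(',')
    label = float(fields[0])
    rawFeats = [(i, value) for i, value in enumerate(fields[1:])]
    return LabeledPoint(label, SparseVector(numBuckets, hashFunction(numBuckets, rawFeats)))

hashTrainData = rawTrainData.map(lambda point: parseHashPoint(point, numBucketsCTR))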
# TODO: Replace <FILL IN> with appropriate code
def computeSparsity(data, d, n):
    """Calculates the average sparsity for the features in an RDD of LabeledPoints.
Args:
data (RDD of LabeledPoint): The LabeledPoints to use in the sparsity calculation.
d (int): The total number of features.
n (int): The number of observations in the RDD.
Returns:
        float: The average of the ratio of features in a point to total features.
    """
<FILL IN>
averageSparsityHash = computeSparsity(hashTrainData, numBucketsCTR, nTrain)
averageSparsityOHE = computeSparsity(OHETrainData, numCtrOHEFeats, nTrain)
print 'Average OHE Sparsity: {0:.7e}'.format(averageSparsityOHE)
print 'Average Hash Sparsity: {0:.7e}'.format(averageSparsityHash)
# TEST Sparsity (5c)
Test.assertTrue(np.allclose(averageSparsityOHE, 1.6717677e-04),
'incorrect value for averageSparsityOHE')
Test.assertTrue(np.allclose(averageSparsityHash, 1.1805561e-03),
'incorrect value for averageSparsityHash')
Explanation: (5c) Sparsity
Since we have 33K hashed features versus 233K OHE features, we should expect OHE features to be sparser. Verify this hypothesis by computing the average sparsity of the OHE and the hashed training datasets.
Note that if you have a SparseVector named sparse, calling len(sparse) returns the total number of features, not the number features with entries. SparseVector objects have the attributes indices and values that contain information about which features are nonzero. Continuing with our example, these can be accessed using sparse.indices and sparse.values, respectively.
End of explanation
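A sketch of computeSparsity (one possible completion: the average fraction of non-zero entries per observation):
def computeSparsity(data, d, n):
    """Sketch: mean over observations of (number of non-zero features) / (total features)."""
    return (data
            .map(lambda lp: len(lp.features.indices) / float(d))
            .sum()) / n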
numIters = 500
regType = 'l2'
includeIntercept = True
# Initialize variables using values from initial model training
bestModel = None
bestLogLoss = 1e10
# TODO: Replace <FILL IN> with appropriate code
stepSizes = <FILL IN>
regParams = <FILL IN>
for stepSize in stepSizes:
for regParam in regParams:
model = (LogisticRegressionWithSGD
.train(hashTrainData, numIters, stepSize, regParam=regParam, regType=regType,
intercept=includeIntercept))
logLossVa = evaluateResults(model, hashValidationData)
print ('\tstepSize = {0:.1f}, regParam = {1:.0e}: logloss = {2:.3f}'
.format(stepSize, regParam, logLossVa))
if (logLossVa < bestLogLoss):
bestModel = model
bestLogLoss = logLossVa
print ('Hashed Features Validation Logloss:\n\tBaseline = {0:.3f}\n\tLogReg = {1:.3f}'
.format(logLossValBase, bestLogLoss))
# TEST Logistic model with hashed features (5d)
Test.assertTrue(np.allclose(bestLogLoss, 0.4481683608), 'incorrect value for bestLogLoss')
Explanation: (5d) Logistic model with hashed features
Now let's train a logistic regression model using the hashed features. Run a grid search to find suitable hyperparameters for the hashed features, evaluating via log loss on the validation data. Note: This may take a few minutes to run. Use 1 and 10 for stepSizes and 1e-6 and 1e-3 for regParams.
End of explanation
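The two <FILL IN> lines in the code cell above only need the grid values named in the instructions; a minimal completion consistent with the text of Part (5d) is:
stepSizes = [1, 10]
regParams = [1e-6, 1e-3]
With two values per hyperparameter the loop trains four models, keeping whichever one achieves the lowest validation log loss in bestModel; evaluateResults and logLossValBase come from earlier parts of the lab and are used here unchanged.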
from matplotlib.colors import LinearSegmentedColormap
# Saved parameters and results. Eliminate the time required to run 36 models
stepSizes = [3, 6, 9, 12, 15, 18]
regParams = [1e-7, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2]
logLoss = np.array([[ 0.45808431, 0.45808493, 0.45809113, 0.45815333, 0.45879221, 0.46556321],
[ 0.45188196, 0.45188306, 0.4518941, 0.4520051, 0.45316284, 0.46396068],
[ 0.44886478, 0.44886613, 0.44887974, 0.44902096, 0.4505614, 0.46371153],
[ 0.44706645, 0.4470698, 0.44708102, 0.44724251, 0.44905525, 0.46366507],
[ 0.44588848, 0.44589365, 0.44590568, 0.44606631, 0.44807106, 0.46365589],
[ 0.44508948, 0.44509474, 0.44510274, 0.44525007, 0.44738317, 0.46365405]])
numRows, numCols = len(stepSizes), len(regParams)
logLoss = np.array(logLoss)
logLoss.shape = (numRows, numCols)
fig, ax = preparePlot(np.arange(0, numCols, 1), np.arange(0, numRows, 1), figsize=(8, 7),
hideLabels=True, gridWidth=0.)
ax.set_xticklabels(regParams), ax.set_yticklabels(stepSizes)
ax.set_xlabel('Regularization Parameter'), ax.set_ylabel('Step Size')
colors = LinearSegmentedColormap.from_list('blue', ['#0022ff', '#000055'], gamma=.2)
image = plt.imshow(logLoss,interpolation='nearest', aspect='auto',
cmap = colors)
pass
Explanation: Visualization 3: Hyperparameter heat map
We will now perform a visualization of an extensive hyperparameter search. Specifically, we will create a heat map where the brighter colors correspond to lower values of logLoss.
The search was run using six step sizes and six values for regularization, which required the training of thirty-six separate models. We have included the results below, but omitted the actual search to save time.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
# Log loss for the best model from (5d)
logLossTest = <FILL IN>
# Log loss for the baseline model
logLossTestBaseline = <FILL IN>
print ('Hashed Features Test Log Loss:\n\tBaseline = {0:.3f}\n\tLogReg = {1:.3f}'
.format(logLossTestBaseline, logLossTest))
# TEST Evaluate on the test set (5e)
Test.assertTrue(np.allclose(logLossTestBaseline, 0.537438),
'incorrect value for logLossTestBaseline')
Test.assertTrue(np.allclose(logLossTest, 0.455616931), 'incorrect value for logLossTest')
Explanation: (5e) Evaluate on the test set
Finally, evaluate the best model from Part (5d) on the test set. Compare the resulting log loss with the baseline log loss on the test set, which can be computed in the same way that the validation log loss was computed in Part (4f).
End of explanation |
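As a closing note on Part (5e): one way its two <FILL IN> lines could be completed is sketched below. It reuses evaluateResults exactly as in Part (5d), and it assumes the Part (4f) helpers keep the names they were given earlier in the lab (written here as computeLogLoss and classOneFracTrain -- adjust them to whatever your notebook actually defines).
logLossTest = evaluateResults(bestModel, hashTestData)
logLossTestBaseline = (hashTestData
                       .map(lambda lp: computeLogLoss(classOneFracTrain, lp.label))
                       .mean())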
5,553 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cnrm-cerfacs', 'cnrm-esm2-1-hr', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: CNRM-CERFACS
Source ID: CNRM-ESM2-1-HR
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:52
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
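A purely illustrative call, with placeholder details rather than the real document authors, would look like:
DOC.set_author("Jane Doe", "jane.doe@example.org")  # placeholder values -- replace before publishing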
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
End of explanation
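For a 0.N enumeration such as this one, completed ES-DOC notebooks typically record each selected choice with its own DOC.set_value call. The two lines below illustrate that pattern only, using choices taken from the valid list above; they are not a statement of this model's actual flux exchanges.
DOC.set_value("water")   # illustrative selection only
DOC.set_value("energy")  # illustrative selection only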
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
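Boolean properties take an unquoted True or False rather than a quoted string; for example (a placeholder, not the documented model's actual setting):
DOC.set_value(True)  # illustrative placeholder value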
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependancies on snow free albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil hydrology in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
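For a 1.1 enumeration, a single DOC.set_value call with one of the listed choices is expected; for example (an illustrative selection only, not necessarily the scheme this model uses):
DOC.set_value("Explicit diffusion")  # illustrative choice from the valid list above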
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
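For a BOOLEAN property the value is likewise passed unquoted, as the valid choices above indicate; True is used here only as an example.
# Illustrative example: a hypothetical model that does include permafrost carbon.
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
DOC.set_value(True)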
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
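A free-text STRING property takes a short prose description; the sentence below is a placeholder showing the pattern, not a statement about any real model.
# Illustrative example: a hypothetical one-sentence overview for 29.1.
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
DOC.set_value("Prognostic plant and soil nitrogen pools coupled to the carbon cycle through fixed C:N ratios.")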
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled with rivers, which quantities are exchanged between the lakes and rivers?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated (e.g. horizontal, vertical)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are basins that do not flow to the ocean (endorheic basins) included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation |
5,554 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
E2E ML on GCP
Step1: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
Step2: Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI, Compute Engine and Cloud Storage APIs.
If you are running this notebook locally, you need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step3: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas
Step4: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
Step5: Authenticate your Google Cloud account
If you are using Vertex AI Workbench Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step6: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
Step7: Only if your bucket doesn't already exist
Step8: Finally, validate access to your Cloud Storage bucket by examining its contents
Step9: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Step10: Initialize Vertex AI SDK for Python
Initialize the Vertex AI SDK for Python for your project and corresponding bucket.
Step11: Set hardware accelerators
You can set hardware accelerators for training and prediction.
Set the variables DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify
Step12: Set pre-built containers
Set the pre-built Docker container image for prediction.
For the latest list, see Pre-built containers for prediction.
Step13: Set machine type
Next, set the machine type to use for prediction.
Set the variable DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for prediction.
machine type
n1-standard
Step14: Get pretrained model from TensorFlow Hub
For demonstration purposes, this tutorial uses a pretrained model from TensorFlow Hub (TFHub), which is then uploaded to a Vertex AI Model resource. Once you have a Vertex AI Model resource, the model can be deployed to a Vertex AI Endpoint resource.
Download the pretrained model
First, you download the pretrained model from TensorFlow Hub. The model gets downloaded as a TF.Keras layer. To finalize the model, in this example, you create a Sequential() model with the downloaded TFHub model as a layer, and specify the input shape to the model.
Step15: Save the model artifacts
At this point, the model is in memory. Next, you save the model artifacts to a Cloud Storage location.
Step16: Upload the TensorFlow Hub model to a Vertex AI Model resource
Finally, you upload the model artifacts from the TFHub model into a Vertex AI Model resource.
Note
Step17: Creating an Endpoint resource
You create an Endpoint resource using the Endpoint.create() method. At a minimum, you specify the display name for the endpoint. Optionally, you can specify the project and location (region); otherwise the settings are inherited from the values you set when you initialized the Vertex AI SDK with the init() method.
In this example, the following parameters are specified
Step18: Deploying Model resources to an Endpoint resource.
You can deploy one or more Vertex AI Model resource instances to the same endpoint. Each Vertex AI Model resource that is deployed will have its own deployment container for the serving binary.
Note
Step19: Display scaling configuration
Once your model is deployed, you can query the Endpoint resource to retrieve the scaling configuration for your deployed model with the property endpoint.gca_resource.deployed_models.
Since an Endpoint resource may have multiple deployed models, the deployed_models property returns a list, with one entry per deployed model. In this example, there is a single deployed model and you retrieve the scaling configuration as the first entry in the list
Step20: Undeploy the model
When you are done doing predictions, you undeploy the model from the Endpoint resource. This deprovisions all compute resources and ends billing for the deployed model.
Step21: Manual scaling
In the next example, you deploy the Vertex AI Model resource to a Vertex AI Endpoint resource for manual scaling -- a fixed number (greater than one) of VM instances. In other words, when the model is deployed, the fixed number of VM instances is provisioned and stays provisioned until the model is undeployed.
In this example, you deploy the model with the minimal amount of specified parameters, as follows
Step22: Display scaling configuration
In this example, there is a single deployed model and you retrieve the scaling configuration as the first entry in the list
Step23: Undeploy the model
When you are done doing predictions, you undeploy the model from the Endpoint resource. This deprovisions all compute resources and ends billing for the deployed model.
Step24: Auto scaling
In the next example, you deploy the Vertex AI Model resource to a Vertex AI Endpoint resource for auto scaling -- a variable number (greater than one) of VM instances. In other words, when the model is deployed, the minimum number of VM instances is provisioned. As the load varies, the number of provisioned instances may dynamically increase up to the maximum number of VM instances, and deprovision back down to the minimum number of VM instances. The number of provisioned VM instances will never be less than the minimum or more than the maximum.
In this example, you deploy the model with the minimal amount of specified parameters, as follows
Step25: Display scaling configuration
In this example, there is a single deployed model and you retrieve the scaling configuration as the first entry in the list
Step26: Undeploy the model
When you are done doing predictions, you undeploy the model from the Endpoint resource. This deprovisions all compute resources and ends billing for the deployed model.
Step27: Setting scaling thresholds
An Endpoint resource supports auto-scaling based on two metrics
Step28: Display scaling configuration
In this example, there is a single deployed model and you retrieve the scaling configuration as the first entry in the list
Step29: Undeploy the model
When you are done doing predictions, you undeploy the model from the Endpoint resource. This deprovisions all compute resources and ends billing for the deployed model.
Step30: Upload TensorFlow Hub model for GPU deployment image
Next, you upload a second instance of your TensorFlow Hub model as a Model resource -- but where the corresponding serving container supports GPUs.
Step31: GPU thresholds
In this example, the deployment VM instances are configured to use hardware accelerators -- i.e., GPUs, by specifying the following parameters
Step32: Display scaling configuration
In this example, there is a single deployed model and you retrieve the scaling configuration as the first entry in the list
Step33: Deploy multiple models to Endpoint resource
Next, you deploy two models to the same Endpoint resource and split the prediction request traffic between them. One model will use GPUs, with 80% of the traffic, and the other the CPU, with 20% of the traffic.
You already have the GPU version of the model deployed to the Endpoint resource. In this example, you add a second model instance -- the CPU version -- to the same Endpoint resource, and specify the traffic split between the models. In this example, the traffic_split parameter is specified as follows
Step34: Display scaling configuration
In this example, there are two deployed models, the CPU and GPU versions.
Step35: Undeploy the models
When you are done doing predictions, you undeploy all the models from the Endpoint resource. This deprovisions all compute resources and ends billing for the deployed models.
Step36: Delete the model instances
The method 'delete()' will delete the model.
Step37: Delete the endpoint
The method 'delete()' will delete the endpoint.
Step38: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial. | Python Code:
import os
# The Vertex AI Workbench Notebook product has specific requirements
IS_WORKBENCH_NOTEBOOK = os.getenv("DL_ANACONDA_HOME")
IS_USER_MANAGED_WORKBENCH_NOTEBOOK = os.path.exists(
"/opt/deeplearning/metadata/env_version"
)
# Vertex AI Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_WORKBENCH_NOTEBOOK:
USER_FLAG = "--user"
# Install the packages
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG -q
! pip3 install --upgrade google-cloud-storage $USER_FLAG -q
! pip3 install tensorflow-hub $USER_FLAG -q
Explanation: E2E ML on GCP: MLOps stage 5 : deployment: get started with configuring autoscaling for deployment
<table align="left">
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/ml_ops/stage4/get_started_with_autoscaling.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/ml_ops/stage4/get_started_with_autoscaling.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/vertex-ai-samples/main/notebooks/community/ml_ops/stage4/get_started_with_autoscaling.ipynb">
<img src="https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32" alt="Vertex AI logo">
Open in Vertex AI Workbench
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use Vertex AI for E2E MLOps on Google Cloud in production. This tutorial covers stage 5 : deployment: get started with autoscaling for deployment.
Dataset
This tutorial uses a pre-trained image classification model from TensorFlow Hub, which is trained on ImageNet dataset.
Learn more about the ResNet V2 pretrained model.
Objective
In this tutorial, you learn how to fine-tune the auto-scaling configuration when deploying a Model resource to an Endpoint resource.
This tutorial uses the following Google Cloud ML services:
Vertex ML Prediction
The steps performed include:
Download a pretrained image classification model from TensorFlow Hub.
Upload the pretrained model as a Model resource.
Create an Endpoint resource.
Deploy Model resource for no-scaling (single node).
Deploy Model resource for manual scaling.
Deploy Model resource for auto-scaling.
Fine-tune scaling thresholds for CPU utilization.
Fine-tune scaling thresholds for GPU utilization.
Deploy mix of CPU and GPU model instances with auto-scaling to an Endpoint resource.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI pricing and Cloud Storage pricing and use the Pricing Calculator to generate a cost estimate based on your projected usage.
Installations
Install the packages required for executing this notebook.
End of explanation
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI, Compute Engine and Cloud Storage APIs.
If you are running this notebook locally, you need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
Set your project ID
If you don't know your project ID, you may be able to get your project ID using gcloud.
End of explanation
REGION = "[your-region]" # @param {type: "string"}
if REGION == "[your-region]":
REGION = "us-central1"
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about Vertex AI regions.
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
End of explanation
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Vertex AI Workbench, then don't execute this code
IS_COLAB = "google.colab" in sys.modules
if not os.path.exists("/opt/deeplearning/metadata/env_version") and not os.getenv(
"DL_ANACONDA_HOME"
):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Vertex AI Workbench Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex AI" into the filter box, and select Vertex AI Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
BUCKET_NAME = "[your-bucket-name]" # @param {type:"string"}
BUCKET_URI = f"gs://{BUCKET_NAME}"
if BUCKET_URI == "" or BUCKET_URI is None or BUCKET_URI == "gs://[your-bucket-name]":
BUCKET_NAME = PROJECT_ID + "aip-" + TIMESTAMP
BUCKET_URI = "gs://" + BUCKET_NAME
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
! gsutil mb -l $REGION $BUCKET_URI
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_URI
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
import google.cloud.aiplatform as aiplatform
import tensorflow as tf
import tensorflow_hub as hub
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
End of explanation
aiplatform.init(project=PROJECT_ID, staging_bucket=BUCKET_URI)
Explanation: Initialize Vertex AI SDK for Python
Initialize the Vertex AI SDK for Python for your project and corresponding bucket.
End of explanation
if os.getenv("IS_TESTING_DEPLOY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aiplatform.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPLOY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (aiplatform.gapic.AcceleratorType.NVIDIA_TESLA_K80, 1)
Explanation: Set hardware accelerators
You can set hardware accelerators for training and prediction.
Set the variables DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify:
(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)
Otherwise specify (None, None) to use a container image to run on a CPU.
Learn more about hardware accelerator support for your region.
Learn more about GPU compatibility by Machine Type.
End of explanation
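If GPU quota is not available in your region, the accelerator settings above can be overridden with the (None, None) convention described in the explanation; this is an optional, illustrative override rather than a required step.
# Optional override (illustrative): force a CPU-only deployment.
DEPLOY_GPU, DEPLOY_NGPU = (None, None)
print("Deploying without hardware accelerators")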
if os.getenv("IS_TESTING_TF"):
TF = os.getenv("IS_TESTING_TF")
else:
TF = "2.5".replace(".", "-")
GPU_VERSION = "tf2-gpu.{}".format(TF)
CPU_VERSION = "tf2-cpu.{}".format(TF)
DEPLOY_IMAGE_GPU = "{}-docker.pkg.dev/vertex-ai/prediction/{}:latest".format(
REGION.split("-")[0], GPU_VERSION
)
DEPLOY_IMAGE_CPU = "{}-docker.pkg.dev/vertex-ai/prediction/{}:latest".format(
REGION.split("-")[0], CPU_VERSION
)
print("Deployment:", DEPLOY_IMAGE_GPU, DEPLOY_IMAGE_CPU, DEPLOY_GPU, DEPLOY_NGPU)
Explanation: Set pre-built containers
Set the pre-built Docker container image for prediction.
For the latest list, see Pre-built containers for prediction.
End of explanation
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", DEPLOY_COMPUTE)
Explanation: Set machine type
Next, set the machine type to use for prediction.
Set the variable DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for prediction.
machine type
n1-standard: 3.75GB of memory per vCPU.
n1-highmem: 6.5GB of memory per vCPU
n1-highcpu: 0.9 GB of memory per vCPU
vCPUs: number of [2, 4, 8, 16, 32, 64, 96 ]
Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.
End of explanation
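As a sketch of how the naming convention above composes, a memory-heavy serving VM could be selected instead; n1-highmem-8 is only an example and should be checked against availability in your region before use.
# Illustrative alternative: an n1-highmem machine with 8 vCPUs for memory-hungry models.
MACHINE_TYPE = "n1-highmem"
VCPU = "8"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)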
tfhub_model = tf.keras.Sequential(
[hub.KerasLayer("https://tfhub.dev/google/imagenet/resnet_v2_101/classification/5")]
)
tfhub_model.build([None, 224, 224, 3])
tfhub_model.summary()
Explanation: Get pretrained model from TensorFlow Hub
For demonstration purposes, this tutorial uses a pretrained model from TensorFlow Hub (TFHub), which is then uploaded to a Vertex AI Model resource. Once you have a Vertex AI Model resource, the model can be deployed to a Vertex AI Endpoint resource.
Download the pretrained model
First, you download the pretrained model from TensorFlow Hub. The model gets downloaded as a TF.Keras layer. To finalize the model, in this example, you create a Sequential() model with the downloaded TFHub model as a layer, and specify the input shape to the model.
End of explanation
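Before saving the model, a quick local sanity check can confirm the input shape wiring; the sketch below pushes one blank 224x224 RGB image through the model and only relies on the Keras predict() interface of the Sequential model built above.
import numpy as np

# Local smoke test (illustrative): one blank image in, one logit vector out.
dummy = np.zeros((1, 224, 224, 3), dtype=np.float32)
logits = tfhub_model.predict(dummy)
print("Output shape:", logits.shape)  # (1, num_classes) for this classification head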
MODEL_DIR = BUCKET_URI + "/model"
tfhub_model.save(MODEL_DIR)
Explanation: Save the model artifacts
At this point, the model is in memory. Next, you save the model artifacts to a Cloud Storage location.
End of explanation
model = aiplatform.Model.upload(
display_name="example_" + TIMESTAMP,
artifact_uri=MODEL_DIR,
serving_container_image_uri=DEPLOY_IMAGE_CPU,
)
print(model)
Explanation: Upload the TensorFlow Hub model to a Vertex AI Model resource
Finally, you upload the model artifacts from the TFHub model into a Vertex AI Model resource.
Note: When you upload the model artifacts to a Vertex AI Model resource, you specify the corresponding deployment container image. In this example, you are using a CPU only deployment container.
End of explanation
endpoint = aiplatform.Endpoint.create(
display_name="example_" + TIMESTAMP,
project=PROJECT_ID,
location=REGION,
labels={"your_key": "your_value"},
)
print(endpoint)
Explanation: Creating an Endpoint resource
You create an Endpoint resource using the Endpoint.create() method. At a minimum, you specify the display name for the endpoint. Optionally, you can specify the project and location (region); otherwise the settings are inherited from the values you set when you initialized the Vertex AI SDK with the init() method.
In this example, the following parameters are specified:
display_name: A human readable name for the Endpoint resource.
project: Your project ID.
location: Your region.
labels: (optional) User defined metadata for the Endpoint in the form of key/value pairs.
This method returns an Endpoint object.
Learn more about Vertex AI Endpoints.
End of explanation
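When re-running the notebook it can be useful to see which endpoints already exist before creating another one; a minimal, read-only sketch using the SDK's list() method is shown below.
# Illustrative: list endpoints already present in the project/region set via init().
for existing_endpoint in aiplatform.Endpoint.list():
    print(existing_endpoint.display_name, existing_endpoint.resource_name)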
response = endpoint.deploy(
model=model,
deployed_model_display_name="example_" + TIMESTAMP,
machine_type=DEPLOY_COMPUTE,
)
Explanation: Deploying Model resources to an Endpoint resource.
You can deploy one or more Vertex AI Model resource instances to the same endpoint. Each Vertex AI Model resource that is deployed will have its own deployment container for the serving binary.
Note: For this example, you specified the deployment container for the TFHub model in the previous step of uploading the model artifacts to a Vertex AI Model resource.
Scaling
A Vertex AI Endpoint resource supports three types of scaling:
No Scaling: The serving binary is deployed to a single VM instance.
Manual Scaling: The serving binary is deployed to a fixed number of multiple VM instances.
Auto Scaling: The number of VM instances that the serving binary is deployed to varies depending on load.
No Scaling
In the next example, you deploy the Vertex AI Model resource to a Vertex AI Endpoint resource without any scaling -- i.e., a single VM (node) instance. In other words, when the model is deployed, a single VM instance is provisioned and stays provisioned until the model is undeployed.
In this example, you deploy the model with the minimal amount of specified parameters, as follows:
model: The Model resource to deploy.
machine_type: The machine type for each VM instance.
deployed_model_display_name: The human-readable name for the deployed model instance.
For no-scaling, the single VM instance is provisioned during the deployment of the model. Due to the requirements to provision the resource, this may take up to a few minutes.
End of explanation
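The tutorial focuses on scaling rather than inference, but a quick smoke test of the deployed model might look like the sketch below; it assumes the default serving signature of this TFHub model accepts a batch of raw 224x224x3 float values, which is worth confirming before relying on it.
# Illustrative smoke test (assumes raw image tensors are accepted by the serving signature).
zero_image = [[[0.0, 0.0, 0.0] for _ in range(224)] for _ in range(224)]
prediction = endpoint.predict(instances=[zero_image])
print("Received", len(prediction.predictions), "prediction(s)")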
print(endpoint.gca_resource.deployed_models[0].dedicated_resources)
deployed_model_id = endpoint.gca_resource.deployed_models[0].id
Explanation: Display scaling configuration
Once your model is deployed, you can query the Endpoint resource to retrieve the scaling configuration for your deployed model with the property endpoint.gca_resource.deployed_models.
Since an Endpoint resource may have multiple deployed models, the deployed_models property returns a list, with one entry per deployed model. In this example, there is a single deployed model and you retrieve the scaling configuration as the first entry in the list: deployed_models[0]. You then display the property dedicated_resources, which will return the machine type and min/max number of nodes to scale. For no-scaling, the min/max nodes will be set to one.
Note: The deployed model identifier refers to the deployed instance of the model and not the model resource identifier.
End of explanation
endpoint.undeploy(deployed_model_id)
Explanation: Undeploy the model
When you are done doing predictions, you undeploy the model from the Endpoint resource. This deprovisions all compute resources and ends billing for the deployed model.
End of explanation
MIN_NODES = MAX_NODES = 2
response = endpoint.deploy(
model=model,
deployed_model_display_name="example_" + TIMESTAMP,
machine_type=DEPLOY_COMPUTE,
min_replica_count=MIN_NODES,
max_replica_count=MAX_NODES
)
Explanation: Manual scaling
In the next example, you deploy the Vertex AI Model resource to a Vertex AI Endpoint resource for manual scaling -- a fixed number (greater than one) of VM instances. In other words, when the model is deployed, the fixed number of VM instances is provisioned and stays provisioned until the model is undeployed.
In this example, you deploy the model with the minimal amount of specified parameters, as follows:
model: The Model resource to deploy.
machine_type: The machine type for each VM instance.
deployed_model_display_name: The human-readable name for the deployed model instance.
min_replica_count: The minimum number of VM instances (nodes) to provision.
max_replica_count: The maximum number of VM instances (nodes) to provision.
For manual-scaling, the fixed number of VM instances are provisioned during the deployment of the model.
Note: For manual scaling, the minimum and maximum number of nodes are set to the same value.
End of explanation
print(endpoint.gca_resource.deployed_models[0].dedicated_resources)
deployed_model_id = endpoint.gca_resource.deployed_models[0].id
Explanation: Display scaling configuration
In this example, there is a single deployed model and you retrieve the scaling configuration as the first entry in the list: deployed_models[0]. You then display the property dedicated_resources, which will return the machine type and min/max number of nodes to scale. For manual scaling, the min/max nodes will be set to the same value, greater than one.
End of explanation
endpoint.undeploy(deployed_model_id)
Explanation: Undeploy the model
When you are done doing predictions, you undeploy the model from the Endpoint resource. This deprovisions all compute resources and ends billing for the deployed model.
End of explanation
MIN_NODES = 1
MAX_NODES = 2
response = endpoint.deploy(
model=model,
deployed_model_display_name="example_" + TIMESTAMP,
machine_type=DEPLOY_COMPUTE,
min_replica_count=MIN_NODES,
max_replica_count=MAX_NODES
)
Explanation: Auto scaling
In the next example, you deploy the Vertex AI Model resource to a Vertex AI Endpoint resource for auto scaling -- a variable number (greater than one) of VM instances. In other words, when the model is deployed, the minimum number of VM instances is provisioned. As the load varies, the number of provisioned instances may dynamically increase up to the maximum number of VM instances, and deprovision back down to the minimum number of VM instances. The number of provisioned VM instances will never be less than the minimum or more than the maximum.
In this example, you deploy the model with the minimal amount of specified parameters, as follows:
model: The Model resource to deploy.
machine_type: The machine type for each VM instance.
deployed_model_display_name: The human-readable name for the deployed model instance.
min_replica_count: The minimum number of VM instances (nodes) to provision.
max_replica_count: The maximum number of VM instances (nodes) to provision.
For auto-scaling, the minimum number of VM instances are provisioned during the deployment of the model.
Note: For auto scaling, the minimum number of nodes must be set to a value greater than zero. In other words, there will always be at least one VM instance provisioned.
End of explanation
print(endpoint.gca_resource.deployed_models[0].dedicated_resources)
deployed_model_id = endpoint.gca_resource.deployed_models[0].id
Explanation: Display scaling configuration
In this example, there is a single deployed model and you retrieve the scaling configuration as the first entry in the list: deployed_models[0]. You then display the property dedicated_resources, which will return the machine type and min/max number of nodes to scale. For auto scaling, the max nodes will be set to a value greater than the min.
End of explanation
endpoint.undeploy(deployed_model_id)
Explanation: Undeploy the model
When you are done doing predictions, you undeploy the model from the Endpoint resouce. This deprovisions all compute resources and ends billing for the deployed model.
End of explanation
MIN_NODES = 1
MAX_NODES = 4
response = endpoint.deploy(
model=model,
deployed_model_display_name="example_" + TIMESTAMP,
machine_type=DEPLOY_COMPUTE,
min_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
autoscaling_target_cpu_utilization=50
)
Explanation: Setting scaling thresholds
An Endpoint resource supports auto-scaling based on two metrics: CPU utilization and GPU duty cycle. Both metrics are measured by taking the average utilization of each deployed model. Once the utilization metric exceeds its threshold for a certain amount of time, the number of VM instances (nodes) adjusts up or down accordingly.
CPU thresholds
In the previous examples, the VM instances deployed were CPU-only -- i.e., no hardware accelerators. By default (in auto-scaling), the CPU utilization metric is set to 60%. When deploying the model, specify the parameter autoscaling_target_cpu_utilization to set a non-default value.
End of explanation
print(endpoint.gca_resource.deployed_models[0].dedicated_resources)
deployed_model_id = endpoint.gca_resource.deployed_models[0].id
Explanation: Display scaling configuration
In this example, there is a single deployed model and you retrieve the scaling configuration as the first entry in the list: deployed_models[0]. You then display the property dedicated_resources, which will return the machine type and min/max number of nodes to scale, and the target value for the CPU utilization: autoscaling_metric_specs.
End of explanation
endpoint.undeploy(deployed_model_id)
Explanation: Undeploy the model
When you are done doing predictions, you undeploy the model from the Endpoint resouce. This deprovisions all compute resources and ends billing for the deployed model.
End of explanation
model_gpu = aiplatform.Model.upload(
display_name="example_" + TIMESTAMP,
artifact_uri=MODEL_DIR,
serving_container_image_uri=DEPLOY_IMAGE_GPU,
)
print(model)
Explanation: Upload TensorFlow Hub model for GPU deployment image
Next, you upload a second instance of your TensorFlow Hub model as a Model resource -- but where the corresponding serving container supports GPUs.
End of explanation
MIN_NODES = 1
MAX_NODES = 2
response = endpoint.deploy(
model=model_gpu,
deployed_model_display_name="example_" + TIMESTAMP,
machine_type=DEPLOY_COMPUTE,
accelerator_type=DEPLOY_GPU.name,
accelerator_count=DEPLOY_NGPU,
min_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
autoscaling_target_accelerator_duty_cycle=50
)
Explanation: GPU thresholds
In this example, the deployment VM instances are configured to use hardware accelerators -- i.e., GPUs, by specifying the following parameters:
accelerator_type: The type of hardware (e.g., GPU) accelerator.
accelerator_count: The number of hardware accelerators per provisioned VM instance.
The type and number of GPUs supported is specific to machine type and region.
Learn more about GPU types and number per machine type.
Learn more about GPU types available per region.
By default (in auto-scaling), the GPU utilization metric is set to 60%. When deploying the model, specify the parameter autoscaling_target_accelerator_duty_cycle to set a non-default value.
When serving, if either the CPU utilization or GPU duty cycle exceed or fall below the threshold for a certain amount of time, then auto-scaling is triggered.
End of explanation
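Both targets can be supplied in the same deploy() call when a GPU-backed model should also react to CPU pressure. The sketch below is a standalone variant of the deployment cell above, not an additional step in this notebook's flow; the 60/70 values are arbitrary illustrations, and running it here would deploy a second copy of the model.
# Variant of the deployment above (illustrative values only); not an extra step in this sequence.
response = endpoint.deploy(
    model=model_gpu,
    deployed_model_display_name="example_" + TIMESTAMP,
    machine_type=DEPLOY_COMPUTE,
    accelerator_type=DEPLOY_GPU.name,
    accelerator_count=DEPLOY_NGPU,
    min_replica_count=MIN_NODES,
    max_replica_count=MAX_NODES,
    autoscaling_target_cpu_utilization=60,
    autoscaling_target_accelerator_duty_cycle=70,
)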
print(endpoint.gca_resource.deployed_models[0].dedicated_resources)
deployed_model_id = endpoint.gca_resource.deployed_models[0].id
Explanation: Display scaling configuration
In this example, there is a single deployed model and you retrieve the scaling configuration as the first entry in the list: deployed_models[0]. You then display the property dedicated_resources, which will return the machine type and min/max number of nodes to scale, and the target value for the GPU duty cycle: autoscaling_metric_specs.
End of explanation
response = endpoint.deploy(
model=model,
deployed_model_display_name="example_" + TIMESTAMP,
machine_type=DEPLOY_COMPUTE,
min_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
autoscaling_target_cpu_utilization=50,
traffic_split={"0": 20, deployed_model_id: 80 }
)
Explanation: Deploy multiple models to Endpoint resource
Next, you deploy two models to the same Endpoint resource and split the prediction request traffic between them. One model will use GPUs, with 80% of the traffic, and the other the CPU, with 20% of the traffic.
You already have the GPU version of the model deployed to the Endpoint resource. In this example, you add a second model instance -- the CPU version -- to the same Endpoint resource, and specify the traffic split between the models. In this example, the traffic_split parameter is specified as follows:
"0": 20: The model being deployed (default ID is 0) will receive 20% of the traffic.
deployed_model_id: 80: The existing deployed model (specified by its deployed model ID) will receive 80% of the traffic.
End of explanation
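To confirm the split took effect, the endpoint's traffic_split field can be read back; this is a read-only check that uses the same gca_resource access pattern as the scaling checks above.
# Read back the live traffic split: maps each deployed model ID to its share of requests.
print(endpoint.gca_resource.traffic_split)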
print(endpoint.gca_resource.deployed_models)
Explanation: Display scaling configuration
In this example, there are two deployed models, the CPU and GPU versions.
End of explanation
endpoint.undeploy_all()
Explanation: Undeploy the models
When you are done doing predictions, you undeploy all the models from the Endpoint resource. This deprovisions all compute resources and ends billing for the deployed models.
End of explanation
model.delete()
model_gpu.delete()
Explanation: Delete the model instances
The method 'delete()' will delete the model.
End of explanation
endpoint.delete()
Explanation: Delete the endpoint
The method 'delete()' will delete the endpoint.
End of explanation
# Set this to true only if you'd like to delete your bucket
delete_bucket = True
if delete_bucket or os.getenv("IS_TESTING"):
! gsutil rm -r $BUCKET_URI
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial.
End of explanation |
5,555 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Creating a simple model based on Gender
Step2: Load training data
Step3: Take a look at the training data
Step4: Playing w/ the data
I have an array of 12 columns and 891 rows.
I can access any element I want, so the entire first column would be data[0
Step5: Proportion of Survivors
Step6: Stats of all women on board
Find the stats of all the women on board, by making an array that lists True/False whether each row is female
Step7: Filter the whole data, to find statistics for just women, by just placing women_only_stats as a "mask" on my full data -- Use it in place of the '0
Step8: Derive some statistics about them
Step9: Now that I have my indicator that women were much more likely to survive, I am done with the training set.
First Basic Model
Step10: Open a new file so I can write to it.
Call it something descriptive.
Finally, loop through each row in the train file, and look in column index [3] (which is 'Sex').
Write out the PassengerId, and my prediction.
Step11: Model algorithm | Python Code:
This simple code is designed to teach a basic user to read in the files in python, simply find what proportion of males and females survived and make a predictive model based on this
Author : AstroDave
Date : 18 September 2012
Revised: 28 March 2014
import csv as csv
import numpy as np
Explanation: Creating a simple model based on Gender
End of explanation
csv_file_object = csv.reader(open('./data/train.csv', 'rb')) # Load in the csv file
header = csv_file_object.next() # Skip the first line as it is a header
data =[] # Create a variable to hold the data
for row in csv_file_object: # Loop through each row in the csv file,
data.append(row[0:]) # adding each row to the data variable
data = np.array(data) # Then convert from a list to an array.
Explanation: Load training data
End of explanation
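The loader above is written for Python 2 (binary-mode csv reading and the .next() method). A rough Python 3 equivalent, should you be following along with a modern interpreter, is sketched below.
# Python 3 equivalent (illustrative): text mode with newline='' and next() instead of .next().
import csv
import numpy as np

with open('./data/train.csv', newline='') as f:
    reader = csv.reader(f)
    header = next(reader)                 # skip the header row
    data = np.array([row for row in reader])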
type(data)
data.shape
data
Explanation: Take a look at the training data
End of explanation
data[0::,0].astype(np.float)
Explanation: Playing w/ the data
I have an array of 12 columns and 891 rows.
I can access any element I want, so the entire first column would be data[0::,0].astype(np.float)
-- This means all of the rows (from start to end), in column 0 I have to add the .astype() command, because when appending the rows, python thought it was a string - so needed to convert
End of explanation
number_passengers = np.size(data[0::,1].astype(np.float))
number_survived = np.sum(data[0::,1].astype(np.float))
proportion_survivors = number_survived / number_passengers
print 'Proportion of Survivors: %s/%s = %s' % (number_passengers \
, number_survived \
, proportion_survivors)
Explanation: Proportion of Survivors
End of explanation
women_only_stats = data[0::,4] == "female" # This finds where all the women are
men_only_stats = data[0::,4] != "female" # This finds where all the men are (note != means 'not equal')
Explanation: Stats of all women on board
Find the stats of all the women on board, by making an array that lists True/False whether each row is female
End of explanation
women_onboard = data[women_only_stats, 1].astype(np.float)
men_onboard = data[men_only_stats, 1].astype(np.float)
print 'Women onboard: %s' % women_onboard.size
print 'Men onboard: %s' % men_onboard.size
print 'Total onboard: %s' % (women_onboard.size+men_onboard.size)
Explanation: Filter the whole data, to find statistics for just women, by just placing women_only_stats as a "mask" on my full data -- Use it in place of the '0::' part of the array index.
You can test it by placing it there, and requesting column index [4], and the output should all read 'female' e.g. try typing this: data[women_only_stats,4]
End of explanation
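The check suggested in the note above can be run directly; every value returned should be 'female' if the mask lines up with the 'Sex' column.
# Sanity check the boolean mask: column index 4 ('Sex') of the masked rows.
data[women_only_stats, 4]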
proportion_women_survived = np.sum(women_onboard) / np.size(women_onboard)
proportion_men_survived = np.sum(men_onboard) / np.size(men_onboard)
print 'Proportion of women who survived is %s' % proportion_women_survived
print 'Proportion of men who survived is %s' % proportion_men_survived
Explanation: Derive some statistics about them
End of explanation
test_file = open('./data/test.csv', 'rb') # First, read in test.csv
test_file_object = csv.reader(test_file)
header = test_file_object.next()
header
Explanation: Now that I have my indicator that women were much more likely to survive, I am done with the training set.
First Basic Model: Women Survive
Let's read in the test file and write out a simple prediction:
- if female, then model that she survived (1)
- if male, then model that he did not survive (0)
End of explanation
predictions_file = open("./models/jfaPythonBasicGenderModel.csv", "wb")
predictions_file_object = csv.writer(predictions_file)
predictions_file_object.writerow(["PassengerId", "Survived"]) # write the column headers
Explanation: Open a new file so I can write to it.
Call it something descriptive.
Finally, loop through each row in the train file, and look in column index [3] (which is 'Sex').
Write out the PassengerId, and my prediction.
End of explanation
for row in test_file_object: # For each row in test file,
if row[3] == 'female': # is it a female, if yes then
predictions_file_object.writerow([row[0], "1"]) # write the PassengerId, and predict 1
else: # or else if male,
predictions_file_object.writerow([row[0], "0"]) # write the PassengerId, and predict 0.
test_file.close() # Close out the files.
predictions_file.close()
Explanation: Model algorithm: Females survive, Males die
Loop through each passenger in the train file:
- if passenger is female, set as survived
- if passenger is male, set as died
End of explanation |
5,556 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href='http
Step1: The Data
Read the yelp.csv file and set it as a dataframe called yelp.
Step2: Check the head, info , and describe methods on yelp.
Step3: Create a new column called "text length" which is the number of words in the text column.
Step4: EDA
Let's explore the data
Imports
Import the data visualization libraries if you haven't done so already.
Step5: Use FacetGrid from the seaborn library to create a grid of 5 histograms of text length based off of the star ratings. Reference the seaborn documentation for hints on this
Step6: Create a boxplot of text length for each star category.
Step7: Create a countplot of the number of occurrences for each type of star rating.
Step8: Use groupby to get the mean values of the numerical columns, you should be able to create this dataframe with the operation
Step9: Use the corr() method on that groupby dataframe to produce this dataframe
Step10: Then use seaborn to create a heatmap based off that .corr() dataframe
Step11: NLP Classification Task
Let's move on to the actual task. To make things a little easier, go ahead and only grab reviews that were either 1 star or 5 stars.
Create a dataframe called yelp_class that contains the columns of yelp dataframe but for only the 1 or 5 star reviews.
Step12: Create two objects X and y. X will be the 'text' column of yelp_class and y will be the 'stars' column of yelp_class. (Your features and target/labels)
Step13: Import CountVectorizer and create a CountVectorizer object.
Step14: Use the fit_transform method on the CountVectorizer object and pass in X (the 'text' column). Save this result by overwriting X.
Step15: Train Test Split
Let's split our data into training and testing data.
Use train_test_split to split up the data into X_train, X_test, y_train, y_test. Use test_size=0.3 and random_state=101
Step16: Training a Model
Time to train a model!
Import MultinomialNB and create an instance of the estimator and call it nb
Step17: Now fit nb using the training data.
Step18: Predictions and Evaluations
Time to see how our model did!
Use the predict method off of nb to predict labels from X_test.
Step19: Create a confusion matrix and classification report using these predictions and y_test
Step20: Great! Let's see what happens if we try to include TF-IDF to this process using a pipeline.
Using Text Processing
Import TfidfTransformer from sklearn.
Step21: Import Pipeline from sklearn.
Step22: Now create a pipeline with the following steps
Step23: Using the Pipeline
Time to use the pipeline! Remember this pipeline has all your pre-process steps in it already, meaning we'll need to re-split the original data (Remember that we overwrote X as the CountVectorized version. What we need is just the text
Train Test Split
Redo the train test split on the yelp_class object.
Step24: Now fit the pipeline to the training data. Remember you can't use the same training data as last time because that data has already been vectorized. We need to pass in just the text and labels
Step25: Predictions and Evaluation
Now use the pipeline to predict from the X_test and create a classification report and confusion matrix. You should notice strange results. | Python Code:
import numpy as np
import pandas as pd
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
Natural Language Processing Project
Welcome to the NLP Project for this section of the course. In this NLP project you will be attempting to classify Yelp Reviews into 1 star or 5 star categories based off the text content in the reviews. This will be a simpler procedure than the lecture, since we will utilize the pipeline methods for more complex tasks.
We will use the Yelp Review Data Set from Kaggle.
Each observation in this dataset is a review of a particular business by a particular user.
The "stars" column is the number of stars (1 through 5) assigned by the reviewer to the business. (Higher stars is better.) In other words, it is the rating of the business by the person who wrote the review.
The "cool" column is the number of "cool" votes this review received from other Yelp users.
All reviews start with 0 "cool" votes, and there is no limit to how many "cool" votes a review can receive. In other words, it is a rating of the review itself, not a rating of the business.
The "useful" and "funny" columns are similar to the "cool" column.
Let's get started! Just follow the directions below!
Imports
Import the usual suspects. :)
End of explanation
yelp = pd.read_csv('yelp.csv')
Explanation: The Data
Read the yelp.csv file and set it as a dataframe called yelp.
End of explanation
yelp.head()
yelp.info()
yelp.describe()
Explanation: Check the head, info, and describe methods on yelp.
End of explanation
yelp['text length'] = yelp['text'].apply(len)
Explanation: Create a new column called "text length" which is the number of words in the text column.
End of explanation
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('white')
%matplotlib inline
Explanation: EDA
Let's explore the data
Imports
Import the data visualization libraries if you haven't done so already.
End of explanation
g = sns.FacetGrid(yelp,col='stars')
g.map(plt.hist,'text length')
Explanation: Use FacetGrid from the seaborn library to create a grid of 5 histograms of text length based off of the star ratings. Reference the seaborn documentation for hints on this
End of explanation
sns.boxplot(x='stars',y='text length',data=yelp,palette='rainbow')
Explanation: Create a boxplot of text length for each star category.
End of explanation
sns.countplot(x='stars',data=yelp,palette='rainbow')
Explanation: Create a countplot of the number of occurrences for each type of star rating.
End of explanation
stars = yelp.groupby('stars').mean()
stars
Explanation: Use groupby to get the mean values of the numerical columns, you should be able to create this dataframe with the operation:
End of explanation
stars.corr()
Explanation: Use the corr() method on that groupby dataframe to produce this dataframe:
End of explanation
sns.heatmap(stars.corr(),cmap='coolwarm',annot=True)
Explanation: Then use seaborn to create a heatmap based off that .corr() dataframe:
End of explanation
yelp_class = yelp[(yelp.stars==1) | (yelp.stars==5)]
Explanation: NLP Classification Task
Let's move on to the actual task. To make things a little easier, go ahead and only grab reviews that were either 1 star or 5 stars.
Create a dataframe called yelp_class that contains the columns of yelp dataframe but for only the 1 or 5 star reviews.
End of explanation
X = yelp_class['text']
y = yelp_class['stars']
Explanation: Create two objects X and y. X will be the 'text' column of yelp_class and y will be the 'stars' column of yelp_class. (Your features and target/labels)
End of explanation
from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer()
Explanation: Import CountVectorizer and create a CountVectorizer object.
End of explanation
X = cv.fit_transform(X)
Explanation: Use the fit_transform method on the CountVectorizer object and pass in X (the 'text' column). Save this result by overwriting X.
End of explanation
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y,test_size=0.3,random_state=101)
Explanation: Train Test Split
Let's split our data into training and testing data.
Use train_test_split to split up the data into X_train, X_test, y_train, y_test. Use test_size=0.3 and random_state=101
End of explanation
from sklearn.naive_bayes import MultinomialNB
nb = MultinomialNB()
Explanation: Training a Model
Time to train a model!
Import MultinomialNB and create an instance of the estimator and call it nb
End of explanation
nb.fit(X_train,y_train)
Explanation: Now fit nb using the training data.
End of explanation
predictions = nb.predict(X_test)
Explanation: Predictions and Evaluations
Time to see how our model did!
Use the predict method off of nb to predict labels from X_test.
End of explanation
from sklearn.metrics import confusion_matrix,classification_report
print(confusion_matrix(y_test,predictions))
print('\n')
print(classification_report(y_test,predictions))
Explanation: Create a confusion matrix and classification report using these predictions and y_test
End of explanation
from sklearn.feature_extraction.text import TfidfTransformer
Explanation: Great! Let's see what happens if we try to include TF-IDF to this process using a pipeline.
Using Text Processing
Import TfidfTransformer from sklearn.
End of explanation
from sklearn.pipeline import Pipeline
Explanation: Import Pipeline from sklearn.
End of explanation
pipeline = Pipeline([
('bow', CountVectorizer()), # strings to token integer counts
('tfidf', TfidfTransformer()), # integer counts to weighted TF-IDF scores
('classifier', MultinomialNB()), # train on TF-IDF vectors w/ Naive Bayes classifier
])
Explanation: Now create a pipeline with the following steps: CountVectorizer(), TfidfTransformer(), MultinomialNB()
End of explanation
X = yelp_class['text']
y = yelp_class['stars']
X_train, X_test, y_train, y_test = train_test_split(X, y,test_size=0.3,random_state=101)
Explanation: Using the Pipeline
Time to use the pipeline! Remember this pipeline has all your pre-process steps in it already, meaning we'll need to re-split the original data (Remember that we overwrote X as the CountVectorized version. What we need is just the text
Train Test Split
Redo the train test split on the yelp_class object.
End of explanation
# May take some time
pipeline.fit(X_train,y_train)
Explanation: Now fit the pipeline to the training data. Remember you can't use the same training data as last time because that data has already been vectorized. We need to pass in just the text and labels
End of explanation
predictions = pipeline.predict(X_test)
print(confusion_matrix(y_test,predictions))
print(classification_report(y_test,predictions))
Explanation: Predictions and Evaluation
Now use the pipeline to predict from the X_test and create a classification report and confusion matrix. You should notice strange results.
End of explanation |
5,557 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
finance4py plotting example
In this example I'm going to evaluate different indicators and plot them, one per subplot, using pyplot subplots.
Step1: Problems
There are some problems here | Python Code:
# reading data from google finance
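# NOTE: the notebook's import cell is not shown in this excerpt; the names used below (plt, DataReader, fp)
# are assumed to come from something like the following (exact module paths may differ by version):
# import matplotlib.pyplot as plt
# from pandas_datareader.data import DataReader
# import finance4py as fp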
stock = DataReader('NFLX', 'google')
# extract bollinger bands
boll_bands = fp.bbands(stock.Close)
boll_bands.tail()
# extract average true range
atr = fp.average_true_range(stock.High, stock.Low, stock.Close)
atr.tail()
# extract RSI
rsi = fp.rsi(stock.Close).to_frame()
rsi.head(10)
# extract MACD
macd = fp.macd(stock.Close)
macd.tail()
Explanation: finance4py plotting example
In this example I'm going to evaluate different indicators and plot them, one per subplot, using pyplot subplots.
End of explanation
plt.figure(1)
plt.subplot(411)
plt.plot(boll_bands.join(stock.Close))
plt.subplot(412)
plt.plot(macd)
plt.subplot(413)
plt.plot(rsi)
plt.subplot(414)
plt.plot(atr['ATR'])
Explanation: Problems
There are some problems here:
The price plot must be bigger than the other ones.
The MACD plot must have a barplot inside
I had a problem plotting the volume as a bar chart, although I didn't try hard to solve this one.
One possible solution could be, and I think this is how I'm going to implement it, to use the class StockDataFrame, in which every possible indicator should have the capability to plot itself in the appropriate way.
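A minimal sketch of one way to tackle the first problem (making the price panel taller than the indicator panels), assuming the stock, boll_bands, macd, rsi and atr objects computed above:
import matplotlib.gridspec as gridspec
fig = plt.figure(figsize=(10, 8))
gs = gridspec.GridSpec(4, 1, height_ratios=[3, 1, 1, 1])  # price panel gets 3x the height
ax_price = fig.add_subplot(gs[0])
ax_price.plot(boll_bands.join(stock.Close))
for i, series in enumerate([macd, rsi, atr['ATR']], start=1):
    fig.add_subplot(gs[i], sharex=ax_price).plot(series)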
End of explanation |
5,558 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Mission capabilities
Maximum acceleration and delta-v limit the missions which can be conducted by a space ship. A typical mission is composed of two parts:
1. accelerated motion with active thrust - during this phase the traveled distance grows quadratically with time as $s = 0.5 at^2$ (where $a=F/m$ is the acceleration determined by the engine thrust $F$ and the mass of the ship $m$)
2. passive inertial coasting - during this phase the traveled distance grows linearly with time as $s = v t$, where $v$ is the velocity achieved at the end of the thruster burn.
This simple analysis is exact only in empty space without gravity (or other forces) from planets and stars. It also holds quite well when the delta-v of the space ship is much higher than the escape velocity of any planets or stars. The analysis is derived just for fly-by missions where the space ship does not slow down at the target (resp. does not match its velocity). However, generalization to a full transfer including velocity matching is straightforward, since the slow-down is exactly symmetric to the acceleration - therefore the time required for acceleration is just doubled.
Step1: As you can see from the previous log-log plot, the distance traveled by the ship exhibits a kink between the accelerated phase and the inertial coasting phase. You can also notice a minor curvature just before the kink, which is due to the decreasing mass of the ship as fuel is spent - but this effect is rather small unless the ship has a very large propellant mass ratio.
Further simplification
The analysis can be therefore further symplified. Exploiting the fact that the curve is composed of two (almost) linear parts we use slope the the curves (which is 1.acceleration $a$ and 2.delta-v $\Delta v$) as new axes. This means afine transformation of log-log axes. At the end we obtain new log-log plot where we can for any combination of ($a$,$\Delta v$) read time required to reach particuler distance, and corresponding power density required. | Python Code:
from py import timeToDistance
timeToDistance.main()
Explanation: Mission capabilities
Maximum acceleration and delta-v limit the missions which can be conducted by a space ship. A typical mission is composed of two parts:
1. accelerated motion with active thrust - during this phase the traveled distance grows quadratically with time as $s = 0.5 at^2$ (where $a=F/m$ is the acceleration determined by the engine thrust $F$ and the mass of the ship $m$)
2. passive inertial coasting - during this phase the traveled distance grows linearly with time as $s = v t$, where $v$ is the velocity achieved at the end of the thruster burn.
This simple analysis is exact only in empty space without gravity (or other forces) from planets and stars. It also holds quite well when the delta-v of the space ship is much higher than the escape velocity of any planets or stars. The analysis is derived just for fly-by missions where the space ship does not slow down at the target (resp. does not match its velocity). However, generalization to a full transfer including velocity matching is straightforward, since the slow-down is exactly symmetric to the acceleration - therefore the time required for acceleration is just doubled.
End of explanation
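To make the two-phase model concrete, here is a minimal hedged sketch (plain Python, independent of the py.timeToDistance helper used above) that evaluates the fly-by time to reach a distance s for a chosen acceleration a and delta-v dv:
def flyby_time(s, a, dv):
    t_burn = dv / a                       # burn phase lasts until the delta-v budget is spent
    s_burn = 0.5 * a * t_burn ** 2        # distance covered under thrust
    if s <= s_burn:
        return (2.0 * s / a) ** 0.5       # target reached while still thrusting
    return t_burn + (s - s_burn) / dv     # coast the remaining distance at v = dv

print(flyby_time(1.496e11, 0.01, 5.0e4))  # e.g. a = 0.01 m/s^2, delta-v = 50 km/s, target 1 AU away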
from py import maxDeltaV_approx
maxDeltaV_approx.main()
Explanation: As you can see from the previous log-log plot, the distance traveled by the ship exhibits a kink between the accelerated phase and the inertial coasting phase. You can also notice a minor curvature just before the kink, which is due to the decreasing mass of the ship as fuel is spent - but this effect is rather small unless the ship has a very large propellant mass ratio.
Further simplification
The analysis can therefore be simplified further. Exploiting the fact that the curve is composed of two (almost) linear parts, we use the slopes of the curves (which are 1. the acceleration $a$ and 2. the delta-v $\Delta v$) as new axes. This amounts to an affine transformation of the log-log axes. At the end we obtain a new log-log plot where, for any combination of ($a$, $\Delta v$), we can read the time required to reach a particular distance and the corresponding power density required.
End of explanation |
5,559 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Understanding sequential ensembles
The SequentialEnsemble object is one of the more powerful, but also more difficult, tools in the OpenPathSampling toolkit.
At first, it looks deceptively simple
Step1: 1. Simple example
First let's consider an easy example. We'll define a single state. The trajectory we're interested in will begin in the state, then exit the state, then again return to the state. So we have the sequence [AllInXEnsemble(state), AllOutXEnsemble(state), AllInXEnsemble(state)].
Step2: 2. Slightly less simple
Step3: You try it
Step4: Exercises
How would you implement a sequential ensemble which only has the length-1 cap at the beginning of the trajectory? Which of the given trajectories would satisfy that ensemble?
How could you extend trajectories that do satisfy that ensemble so that they do not satisfy it?
Implement that ensemble. Test your predictions from question 1. Create trajectories and test your predictions from question 2.
3. Can-append in the sequential ensemble
The can-append function checks whether the given trajectory could be a subtrajectory of the given ensemble -- or, more correctly, it checks whether there is any possibility that after adding another frame, the resulting trajectory could be in the ensemble or could be a subtrajectory of the ensemble.
For the sequential ensemble, the logic behind this is quite complicated, and has a worst-case scaling of $O(n^2)$ in the length of the trajectory. However, for common sequential ensembles, the scaling is typically linear.
Let's consider the same trajectories and ensembles as before. Since the first sequential ensemble (seq_ens_01) does not have a limit on the number of frames in either its beginning or ending ensemble, the all_in_traj could be a subtrajectory of it.
Step5: You try it
Step6: Questions | Python Code:
xval = paths.FunctionCV(name="x", f=lambda snap : snap.coordinates[0][0])
state = paths.CVDefinedVolume(xval, lambda_min=float("-inf"), lambda_max=0.0)
# building example trajectories
delta = 0.1
x_out = 0.9; x_in = -1.1
traj1 = trajectory_1D([x_in, x_out, x_in])
x_in += delta; x_out += delta
traj2 = trajectory_1D([x_in, x_in, x_out, x_out, x_in, x_in])
x_in += delta; x_out += delta
traj3 = trajectory_1D([x_in, x_out, x_in, x_in])
x_in += delta; x_out += delta
traj4 = trajectory_1D([x_out, x_out, x_out, x_out, x_out])
all_in_traj = trajectory_1D([x_in, x_in, x_in, x_in, x_in])
# plotting
plt.plot(xval(traj1), 'bo-', label='traj1')
plt.plot(xval(traj2), 'ro-', label='traj2')
plt.plot(xval(traj3), 'go-', label='traj3')
plt.plot(xval(traj4), 'mo-', label='traj4')
plt.plot(xval(all_in_traj), 'ko-', label='all-in')
plt.plot([0]*10, 'k-.', label='State boundary')
plt.ylim(-1.5, 1.5)
plt.xlim(-0.1, 8)
plt.legend();
Explanation: Understanding sequential ensembles
The SequentialEnsemble object is one of the more powerful, but also more difficult, tools in the OpenPathSampling toolkit.
At first, it looks deceptively simple: it is just a list of path ensembles which must be applied in order. However, in practice there are several subtle points to pay attention to.
To understand all of this, we'll consider one dimensional trajectories: the time will be plotted along the horizontal axis, with the value along the vertical axis.
End of explanation
seq_ens_01 = paths.SequentialEnsemble([
paths.AllInXEnsemble(state),
paths.AllOutXEnsemble(state),
paths.AllInXEnsemble(state)
])
seq_ens_01(all_in_traj)
seq_ens_01(traj1)
seq_ens_01(traj2)
Explanation: 1. Simple example
First let's consider an easy example. We'll define a single state. The trajectory we're interested in will begin in the state, then exit the state, then again return to the state. So we have the sequence [AllInXEnsemble(state), AllOutXEnsemble(state), AllInXEnsemble(state)].
End of explanation
seq_ens_02 = paths.SequentialEnsemble([
paths.AllInXEnsemble(state) & paths.LengthEnsemble(1),
paths.AllOutXEnsemble(state),
paths.AllInXEnsemble(state) & paths.LengthEnsemble(1)
])
seq_ens_02(all_in_traj)
seq_ens_02(traj1)
Explanation: 2. Slightly less simple: cap ends with LengthEnsemble(1)
End of explanation
# write a line to check whether traj2 satisfies seq_ens_02 (but predict the result first!)
Explanation: You try it: traj2 satisfied seq_ens_01. Will it satisfy seq_ens_02? Try it yourself below.
End of explanation
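One way to run the check (a one-line sketch; as in the cells above, calling the ensemble on a trajectory returns whether the trajectory satisfies it):
seq_ens_02(traj2)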
seq_ens_01.can_append(all_in_traj)
Explanation: Exercises
How would you implement a sequential ensemble which only has the length-1 cap at the beginning of the trajectory? Which of the given trajectories would satisfy that ensemble?
How could you extend trajectories that do satisfy that ensemble so that they do not satisfy it?
Implement that ensemble. Test your predictions from question 1. Create trajectories and test your predictions from question 2.
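A hedged sketch of one way to build that ensemble (the same pattern as seq_ens_02 above, but capping only the first ensemble), which you can then test against the trajectories yourself:
seq_ens_03 = paths.SequentialEnsemble([
    paths.AllInXEnsemble(state) & paths.LengthEnsemble(1),
    paths.AllOutXEnsemble(state),
    paths.AllInXEnsemble(state)
])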
3. Can-append in the sequential ensemble
The can-append function checks whether the given trajectory could be a subtrajectory of the given ensemble -- or, more correctly, it checks whether there is any possibility that after adding another frame, the resulting trajectory could be in the ensemble or could be a subtrajectory of the ensemble.
For the sequential ensemble, the logic behind this is quite complicated, and has a worst-case scaling of $O(n^2)$ in the length of the trajectory. However, for common sequential ensembles, the scaling is typically linear.
Let's consider the same trajectories and ensembles as before. Since the first sequential ensemble (seq_ens_01) does not have a limit on the number of frames in either its beginning or ending ensemble, the all_in_traj could be a subtrajectory of it.
End of explanation
# write a line to check whether can_append is true for all_in_traj in seq_ens_02
Explanation: You try it: Will all_in_traj also give a can_append of True for seq_ens_02?
End of explanation
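A possible check (a one-line sketch mirroring the seq_ens_01 call above):
seq_ens_02.can_append(all_in_traj)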
seq_ens_01.can_append(traj4)
seq_ens_02.can_append(traj4)
Explanation: Questions: What's the shortest trajectory that would extend all_in_traj until seq_ens_01.can_append returned False? Would that trajectory satisfy seq_ens_01? Would any shorter subtrajectory of it satisfy seq_ens_01?
Note that a trajectory doesn't have to start in the first ensemble of the sequential ensemble in order to satisfy can_append. Any subtrajectory, beginning from any ensemble of the sequence, can satisfy can_append. (There is a separate method, strict_can_append, which requires that the trajectory begin in the initial state.)
Consider traj4, which has all its frames outside the state:
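If the installed OpenPathSampling version also exposes the strict_can_append method mentioned above, the contrast can be probed directly (a sketch; the exact behaviour depends on the OPS version):
seq_ens_01.strict_can_append(traj4)  # expected False, since traj4 never starts inside the state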
End of explanation |
5,560 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: 2016 US Bike Share Activity Snapshot
Table of Contents
Introduction
Posing Questions
Data Collection and Wrangling
Condensing the Trip Data
Exploratory Data Analysis
Statistics
Visualizations
Performing Your Own Analysis
Conclusions
<a id='intro'></a>
Introduction
Tip
Step5: If everything has been filled out correctly, you should see below the printout of each city name (which has been parsed from the data file name) that the first trip has been parsed in the form of a dictionary. When you set up a DictReader object, the first row of the data file is normally interpreted as column names. Every other row in the data file will use those column names as keys, as a dictionary is generated for each row.
This will be useful since we can refer to quantities by an easily-understandable label instead of just a numeric index. For example, if we have a trip stored in the variable row, then we would rather get the trip duration from row['duration'] instead of row[0].
<a id='condensing'></a>
Condensing the Trip Data
It should also be observable from the above printout that each city provides different information. Even where the information is the same, the column names and formats are sometimes different. To make things as simple as possible when we get to the actual exploration, we should trim and clean the data. Cleaning the data makes sure that the data formats across the cities are consistent, while trimming focuses only on the parts of the data we are most interested in to make the exploration easier to work with.
You will generate new data files with five values of interest for each trip
Step7: Question 3b
Step9: Tip
Step10: Tip
Step11: Question 4c
Step12: <a id='visualizations'></a>
Visualizations
The last set of values that you computed should have pulled up an interesting result. While the mean trip time for Subscribers is well under 30 minutes, the mean trip time for Customers is actually above 30 minutes! It will be interesting for us to look at how the trip times are distributed. In order to do this, a new library will be introduced here, matplotlib. Run the cell below to load the library and to generate an example plot.
Step13: In the above cell, we collected fifty trip times in a list, and passed this list as the first argument to the .hist() function. This function performs the computations and creates plotting objects for generating a histogram, but the plot is actually not rendered until the .show() function is executed. The .title() and .xlabel() functions provide some labeling for plot context.
You will now use these functions to create a histogram of the trip times for the city you selected in question 4c. Don't separate the Subscribers and Customers for now
Step14: If you followed the use of the .hist() and .show() functions exactly like in the example, you're probably looking at a plot that's completely unexpected. The plot consists of one extremely tall bar on the left, maybe a very short second bar, and a whole lot of empty space in the center and right. Take a look at the duration values on the x-axis. This suggests that there are some highly infrequent outliers in the data. Instead of reprocessing the data, you will use additional parameters with the .hist() function to limit the range of data that is plotted. Documentation for the function can be found [here].
Question 5
Step15: <a id='eda_continued'></a>
Performing Your Own Analysis
So far, you've performed an initial exploration into the data available. You have compared the relative volume of trips made between three U.S. cities and the ratio of trips made by Subscribers and Customers. For one of these cities, you have investigated differences between Subscribers and Customers in terms of how long a typical trip lasts. Now it is your turn to continue the exploration in a direction that you choose. Here are a few suggestions for questions to explore | Python Code:
## import all necessary packages and functions.
import csv # read and write csv files
from datetime import datetime # operations to parse dates
from datetime import time
from datetime import date
import pprint # use to print data structures like dictionaries in
# a nicer way than the base print function.
def print_first_point(filename):
This function prints and returns the first data point (second row) from
a csv file that includes a header row.
# print city name for reference
city = filename.split('-')[0].split('/')[-1]
print('\nCity: {}'.format(city))
with open(filename, 'r') as f_in:
## TODO: Use the csv library to set up a DictReader object. ##
## see https://docs.python.org/3/library/csv.html ##
trip_reader = csv.DictReader(f_in)
## TODO: Use a function on the DictReader object to read the ##
## first trip from the data file and store it in a variable. ##
## see https://docs.python.org/3/library/csv.html#reader-objects##
first_trip = trip_reader.__next__()
## TODO: Use the pprint library to print the first trip. ##
## see https://docs.python.org/3/library/pprint.html ##
pp = pprint.PrettyPrinter(indent=4)
pp.pprint(first_trip)
# output city name and first trip for later testing
return (city, first_trip)
# list of files for each city
data_files = ['./data/NYC-CitiBike-2016.csv',
'./data/Chicago-Divvy-2016.csv',
'./data/Washington-CapitalBikeshare-2016.csv',]
# print the first trip from each file, store in dictionary
example_trips = {}
for data_file in data_files:
city, first_trip = print_first_point(data_file)
example_trips[city] = first_trip
Explanation: 2016 US Bike Share Activity Snapshot
Table of Contents
Introduction
Posing Questions
Data Collection and Wrangling
Condensing the Trip Data
Exploratory Data Analysis
Statistics
Visualizations
Performing Your Own Analysis
Conclusions
<a id='intro'></a>
Introduction
Tip: Quoted sections like this will provide helpful instructions on how to navigate and use a Jupyter notebook.
Over the past decade, bicycle-sharing systems have been growing in number and popularity in cities across the world. Bicycle-sharing systems allow users to rent bicycles for short trips, typically 30 minutes or less. Thanks to the rise in information technologies, it is easy for a user of the system to access a dock within the system to unlock or return bicycles. These technologies also provide a wealth of data that can be used to explore how these bike-sharing systems are used.
In this project, you will perform an exploratory analysis on data provided by Motivate, a bike-share system provider for many major cities in the United States. You will compare the system usage between three large cities: New York City, Chicago, and Washington, DC. You will also see if there are any differences within each system for those users that are registered, regular users and those users that are short-term, casual users.
<a id='pose_questions'></a>
Posing Questions
Before looking at the bike sharing data, you should start by asking questions you might want to understand about the bike share data. Consider, for example, if you were working for Motivate. What kinds of information would you want to know about in order to make smarter business decisions? If you were a user of the bike-share service, what factors might influence how you would want to use the service?
Question 1: Write at least two questions related to bike sharing that you think could be answered by data.
Answer: To inform business decisions I would consider:
- usage distribution as a function of:
- time of day
- day of year
- season
- weather patterns
- Customer demographics
- Customer segments
- Whether or not any service point is running out of bikes and when (time of day, day of year)
The main general questions come down to the classic:
Who uses the bike share?
How do they use it?
Why do they use it?
When do they use it?
Where do they use it (where do they pick up and return)?
Finally, some specific questions, like:
What is the most common day of usage for subscribers and customers?
What is the most common time of usage for subscribers and customers?
What is the most common trip duration?
Tip: If you double click on this cell, you will see the text change so that all of the formatting is removed. This allows you to edit this block of text. This block of text is written using Markdown, which is a way to format text using headers, links, italics, and many other options using a plain-text syntax. You will also use Markdown later in the Nanodegree program. Use Shift + Enter or Shift + Return to run the cell and show its rendered form.
<a id='wrangling'></a>
Data Collection and Wrangling
Now it's time to collect and explore our data. In this project, we will focus on the record of individual trips taken in 2016 from our selected cities: New York City, Chicago, and Washington, DC. Each of these cities has a page where we can freely download the trip data.:
New York City (Citi Bike): Link
Chicago (Divvy): Link
Washington, DC (Capital Bikeshare): Link
If you visit these pages, you will notice that each city has a different way of delivering its data. Chicago updates with new data twice a year, Washington DC is quarterly, and New York City is monthly. However, you do not need to download the data yourself. The data has already been collected for you in the /data/ folder of the project files. While the original data for 2016 is spread among multiple files for each city, the files in the /data/ folder collect all of the trip data for the year into one file per city. Some data wrangling of inconsistencies in timestamp format within each city has already been performed for you. In addition, a random 2% sample of the original data is taken to make the exploration more manageable.
Question 2: However, there is still a lot of data for us to investigate, so it's a good idea to start off by looking at one entry from each of the cities we're going to analyze. Run the first code cell below to load some packages and functions that you'll be using in your analysis. Then, complete the second code cell to print out the first trip recorded from each of the cities (the second line of each data file).
Tip: You can run a code cell like you formatted Markdown cells above by clicking on the cell and using the keyboard shortcut Shift + Enter or Shift + Return. Alternatively, a code cell can be executed using the Play button in the toolbar after selecting it. While the cell is running, you will see an asterisk in the message to the left of the cell, i.e. In [*]:. The asterisk will change into a number to show that execution has completed, e.g. In [1]. If there is output, it will show up as Out [1]:, with an appropriate number to match the "In" number.
End of explanation
def duration_in_mins(datum, city):
Takes as input a dictionary containing info about a single trip (datum) and
its origin city (city) and returns the trip duration in units of minutes.
Remember that Washington is in terms of milliseconds while Chicago and NYC
are in terms of seconds.
HINT: The csv module reads in all of the data as strings, including numeric
values. You will need a function to convert the strings into an appropriate
numeric type when making your transformations.
see https://docs.python.org/3/library/functions.html
# YOUR CODE HERE
if city == 'NYC' or city == 'Chicago':
duration = int(datum['tripduration'])/60
elif city == 'BayArea':
duration = float(datum['duration'])
else:
duration = int(datum['Duration (ms)'])/60000
return duration
# Some tests to check that your code works. There should be no output if all of
# the assertions pass. The `example_trips` dictionary was obtained from when
# you printed the first trip from each of the original data files.
tests = {'NYC': 13.9833,
'Chicago': 15.4333,
'Washington': 7.1231}
for city in tests:
assert abs(duration_in_mins(example_trips[city], city) - tests[city]) < .001
def time_of_trip(datum, city):
Takes as input a dictionary containing info about a single trip (datum) and
its origin city (city) and returns the month, hour, and day of the week in
which the trip was made.
Remember that NYC includes seconds, while Washington and Chicago do not.
HINT: You should use the datetime module to parse the original date
strings into a format that is useful for extracting the desired information.
see https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior
# YOUR CODE HERE
days_dict = {0: 'Monday', 1: 'Tuesday', 2: 'Wednesday', 3: 'Thursday', 4: 'Friday', 5: 'Saturday', 6: 'Sunday'}
if city == 'NYC':
trip_date = datetime.strptime(datum['starttime'],'%m/%d/%Y %H:%M:%S')
month = int(trip_date.strftime('%m'))  # int() already drops any leading zero; taking [-1] would break for months 10-12
hour = int(trip_date.strftime('%H'))   # same fix for two-digit hours (10-23)
days_of_week = days_dict[datetime.weekday(datetime.date(trip_date))]
elif city == 'Chicago':
trip_date = datetime.strptime(datum['starttime'],'%m/%d/%Y %H:%M')
month = int(trip_date.strftime('%m'))
hour = int(trip_date.strftime('%H'))
days_of_week = days_dict[datetime.weekday(datetime.date(trip_date))]
elif city == 'Washington':
trip_date = datetime.strptime(datum['Start date'],'%m/%d/%Y %H:%M')
month = int(trip_date.strftime('%m'))
hour = int(trip_date.strftime('%H'))
days_of_week = days_dict[datetime.weekday(datetime.date(trip_date))]
return ( month, hour, days_of_week )
# Some tests to check that your code works. There should be no output if all of
# the assertions pass. The `example_trips` dictionary was obtained from when
# you printed the first trip from each of the original data files.
#'NYC': (1, 0, 'Friday'),
tests = {'NYC': (1, 0, 'Friday'),
'Chicago': (3, 23, 'Thursday'),
'Washington': (3, 22, 'Thursday')}
for city in tests:
assert time_of_trip(example_trips[city], city) == tests[city]
def type_of_user(datum, city):
Takes as input a dictionary containing info about a single trip (datum) and
its origin city (city) and returns the type of system user that made the
trip.
Remember that Washington has different category names compared to Chicago
and NYC.
# YOUR CODE HERE
if city == 'NYC' or city == 'Chicago':
user_type = datum['usertype']
elif city == 'BayArea':
user_type = datum['user_type']
else:
user_type = datum['Member Type']
if user_type == 'Registered':
user_type = 'Subscriber'
else:
user_type = 'Customer'
return user_type
# Some tests to check that your code works. There should be no output if all of
# the assertions pass. The `example_trips` dictionary was obtained from when
# you printed the first trip from each of the original data files.
tests = {'NYC': 'Customer',
'Chicago': 'Subscriber',
'Washington': 'Subscriber'}
for city in tests:
assert type_of_user(example_trips[city], city) == tests[city]
Explanation: If everything has been filled out correctly, you should see below the printout of each city name (which has been parsed from the data file name) that the first trip has been parsed in the form of a dictionary. When you set up a DictReader object, the first row of the data file is normally interpreted as column names. Every other row in the data file will use those column names as keys, as a dictionary is generated for each row.
This will be useful since we can refer to quantities by an easily-understandable label instead of just a numeric index. For example, if we have a trip stored in the variable row, then we would rather get the trip duration from row['duration'] instead of row[0].
<a id='condensing'></a>
Condensing the Trip Data
It should also be observable from the above printout that each city provides different information. Even where the information is the same, the column names and formats are sometimes different. To make things as simple as possible when we get to the actual exploration, we should trim and clean the data. Cleaning the data makes sure that the data formats across the cities are consistent, while trimming focuses only on the parts of the data we are most interested in to make the exploration easier to work with.
You will generate new data files with five values of interest for each trip: trip duration, starting month, starting hour, day of the week, and user type. Each of these may require additional wrangling depending on the city:
Duration: This has been given to us in seconds (New York, Chicago) or milliseconds (Washington). A more natural unit of analysis will be if all the trip durations are given in terms of minutes.
Month, Hour, Day of Week: Ridership volume is likely to change based on the season, time of day, and whether it is a weekday or weekend. Use the start time of the trip to obtain these values. The New York City data includes the seconds in their timestamps, while Washington and Chicago do not. The datetime package will be very useful here to make the needed conversions.
User Type: It is possible that users who are subscribed to a bike-share system will have different patterns of use compared to users who only have temporary passes. Washington divides its users into two types: 'Registered' for users with annual, monthly, and other longer-term subscriptions, and 'Casual', for users with 24-hour, 3-day, and other short-term passes. The New York and Chicago data uses 'Subscriber' and 'Customer' for these groups, respectively. For consistency, you will convert the Washington labels to match the other two.
Question 3a: Complete the helper functions in the code cells below to address each of the cleaning tasks described above.
End of explanation
def condense_data(in_file, out_file, city):
This function takes full data from the specified input file
and writes the condensed data to a specified output file. The city
argument determines how the input file will be parsed.
HINT: See the cell below to see how the arguments are structured!
with open(out_file, 'w') as f_out, open(in_file, 'r') as f_in:
# set up csv DictWriter object - writer requires column names for the
# first row as the "fieldnames" argument
out_colnames = ['duration', 'month', 'hour', 'day_of_week', 'user_type']
trip_writer = csv.DictWriter(f_out, fieldnames = out_colnames)
print (trip_writer)
trip_writer.writeheader()
## TODO: set up csv DictReader object ##
trip_reader = csv.DictReader(f_in)
# collect data from and process each row
for row in trip_reader:
# set up a dictionary to hold the values for the cleaned and trimmed
# data point
new_point = {}
## TODO: use the helper functions to get the cleaned data from ##
## the original data dictionaries. ##
## Note that the keys for the new_point dictionary should match ##
## the column names set in the DictWriter object above. ##
new_point['duration'] = duration_in_mins(row, city)
new_point['month'] = time_of_trip(row,city)[0]
new_point['hour'] = time_of_trip(row,city)[1]
new_point['day_of_week'] = time_of_trip(row,city)[2]
new_point['user_type'] = type_of_user(row, city)
## TODO: write the processed information to the output file. ##
## see https://docs.python.org/3/library/csv.html#writer-objects ##
trip_writer.writerow(new_point)
# Run this cell to check your work
city_info = {'Washington': {'in_file': './data/Washington-CapitalBikeshare-2016.csv',
'out_file': './data/Washington-2016-Summary.csv'},
'Chicago': {'in_file': './data/Chicago-Divvy-2016.csv',
'out_file': './data/Chicago-2016-Summary.csv'},
'NYC': {'in_file': './data/NYC-CitiBike-2016.csv',
'out_file': './data/NYC-2016-Summary.csv'}}
for city, filenames in city_info.items():
condense_data(filenames['in_file'], filenames['out_file'], city)
print_first_point(filenames['out_file'])
Explanation: Question 3b: Now, use the helper functions you wrote above to create a condensed data file for each city consisting only of the data fields indicated above. In the /examples/ folder, you will see an example datafile from the Bay Area Bike Share before and after conversion. Make sure that your output is formatted to be consistent with the example file.
End of explanation
def number_of_trips(filename):
This function reads in a file with trip data and reports the number of
trips made by subscribers, customers, and total overall.
city = filename.split('-')[0].split('/')[-1]
with open(filename, 'r') as f_in:
# set up csv reader object
reader = csv.DictReader(f_in)
# initialize count variables
n_subscribers = 0
n_customers = 0
# tally up ride types
for row in reader:
if city == 'NYC' or city == 'Chicago':
if row['usertype'] == 'Subscriber':
n_subscribers += 1
else:
n_customers += 1
else:
if row['Member Type'] == 'Registered':
n_subscribers += 1
else:
n_customers += 1
# compute total number of rides
n_trips = n_subscribers + n_customers
# return tallies as a tuple
return city, n_trips, n_subscribers, n_customers
## Modify this and the previous cell to answer Question 4a. Remember to run ##
## the function on the cleaned data files you created from Question 3. ##
data_file = ['./data/NYC-CitiBike-2016.csv',
'./data/Chicago-Divvy-2016.csv',
'./data/Washington-CapitalBikeshare-2016.csv']
output =[]
for file in data_file:
data = number_of_trips(file)
output.append(data)
for item in output:
print (item[0],":",item[1],"=>'TotalTrips' ",item[2],"=>'TotalSubscriber' ",item[3],"=>'TotalCustomer'")
Explanation: Tip: If you save a jupyter Notebook, the output from running code blocks will also be saved. However, the state of your workspace will be reset once a new session is started. Make sure that you run all of the necessary code blocks from your previous session to reestablish variables and functions before picking up where you last left off.
<a id='eda'></a>
Exploratory Data Analysis
Now that you have the data collected and wrangled, you're ready to start exploring the data. In this section you will write some code to compute descriptive statistics from the data. You will also be introduced to the matplotlib library to create some basic histograms of the data.
<a id='statistics'></a>
Statistics
First, let's compute some basic counts. The first cell below contains a function that uses the csv module to iterate through a provided data file, returning the number of trips made by subscribers and customers. The second cell runs this function on the example Bay Area data in the /examples/ folder. Modify the cells to answer the question below.
Question 4a: Which city has the highest number of trips? Which city has the highest proportion of trips made by subscribers? Which city has the highest proportion of trips made by short-term customers?
Answer: NYC has the highest number of trips, the highest number of Subscriber trips, and the highest number of short-term Customer trips.
End of explanation
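Since Question 4a asks about proportions as well as totals, here is a small follow-up sketch using the output list built above:
for city_name, n_trips, n_subs, n_cust in output:
    print(city_name, 'subscriber share: %.3f' % (n_subs / n_trips), 'customer share: %.3f' % (n_cust / n_trips))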
## Use this and additional cells to answer Question 4b. ##
## ##
## HINT: The csv module reads in all of the data as strings, including ##
## numeric values. You will need a function to convert the strings ##
## into an appropriate numeric type before you aggregate data. ##
## TIP: For the Bay Area example, the average trip length is 14 minutes ##
## and 3.5% of trips are longer than 30 minutes. ##
def trip_avg(filename):
city = filename.split('-')[0].split('/')[-1]
with open(filename,'r') as f_in:
trip = csv.DictReader(f_in)
trips = 0
trip_time = 0
trip_exceed = 0
trip_duration = 0
for row in trip:
trips += 1
trip_time = duration_in_mins(row, city)
if trip_time > 30:
trip_exceed += 1
else:
None
trip_duration += trip_time
trip_exceed_percent = (float(trip_exceed/trips))*100
trip_avg = float(trip_duration/trips)
return (city,trip_avg,trip_exceed_percent)
data_file = [ './data/NYC-CitiBike-2016.csv',
'./data/Chicago-Divvy-2016.csv',
'./data/Washington-CapitalBikeshare-2016.csv']
for file in data_file:
print (trip_avg(file))
Explanation: Tip: In order to add additional cells to a notebook, you can use the "Insert Cell Above" and "Insert Cell Below" options from the menu bar above. There is also an icon in the toolbar for adding new cells, with additional icons for moving the cells up and down the document. By default, new cells are of the code type; you can also specify the cell type (e.g. Code or Markdown) of selected cells from the Cell menu or the dropdown in the toolbar.
Now, you will write your own code to continue investigating properties of the data.
Question 4b: Bike-share systems are designed for riders to take short trips. Most of the time, users are allowed to take trips of 30 minutes or less with no additional charges, with overage charges made for trips of longer than that duration. What is the average trip length for each city? What proportion of rides made in each city are longer than 30 minutes?
Answer:
- NYC : The average Trip Length = 15.81, Propotion of Trips Longer than 30 Minutes = 7.30%
- Chicago: The average Trip Length = 16.56, Propotion of Trips Longer than 30 Minutes = 8.33%
- Washington: The average Trip Length = 18.93, Propotion of Trips Longer than 30 Minutes = 10.83%
End of explanation
## Use this and additional cells to answer Question 4c. If you have ##
## not done so yet, consider revising some of your previous code to ##
## make use of functions for reusability. ##
## ##
## TIP: For the Bay Area example data, you should find the average ##
## Subscriber trip duration to be 9.5 minutes and the average Customer ##
## trip duration to be 54.6 minutes. Do the other cities have this ##
## level of difference? ##
def avg_user_type(filename):
city = filename.split('-')[0].split('/')[-1]
with open(filename,'r') as f_in:
data = csv.DictReader(f_in)
n_subscribers = 0
n_customers = 0
subscriber = 0
customer = 0
for row in data:
if type_of_user(row,city) == 'Subscriber':
n_subscribers += 1
subscriber += duration_in_mins(row, city)
else:
n_customers += 1
customer += duration_in_mins(row, city)
# average per user type: divide each total by that type's trip count (not by all trips)
subscriber_avg = subscriber / n_subscribers
customer_avg = customer / n_customers
return (city,subscriber_avg,customer_avg)
data_file = ['./data/NYC-CitiBike-2016.csv',
'./data/Chicago-Divvy-2016.csv',
'./data/Washington-CapitalBikeshare-2016.csv']
for file in data_file:
print(avg_user_type(file))
Explanation: Question 4c: Dig deeper into the question of trip duration based on ridership. Choose one city. Within that city, which type of user takes longer rides on average: Subscribers or Customers?
Answer: Choosing NYC as the city: once each total duration is divided by the number of trips of that user type, Customers take longer rides on average than Subscribers.
End of explanation
# load library
import matplotlib.pyplot as plt
# this is a 'magic word' that allows for plots to be displayed
# inline with the notebook. If you want to know more, see:
# http://ipython.readthedocs.io/en/stable/interactive/magics.html
%matplotlib inline
# example histogram, data taken from bay area sample
data = [ 7.65, 8.92, 7.42, 5.50, 16.17, 4.20, 8.98, 9.62, 11.48, 14.33,
19.02, 21.53, 3.90, 7.97, 2.62, 2.67, 3.08, 14.40, 12.90, 7.83,
25.12, 8.30, 4.93, 12.43, 10.60, 6.17, 10.88, 4.78, 15.15, 3.53,
9.43, 13.32, 11.72, 9.85, 5.22, 15.10, 3.95, 3.17, 8.78, 1.88,
4.55, 12.68, 12.38, 9.78, 7.63, 6.45, 17.38, 11.90, 11.52, 8.63,]
plt.hist(data)
plt.title('Distribution of Trip Durations')
plt.xlabel('Duration (m)')
plt.show()
Explanation: <a id='visualizations'></a>
Visualizations
The last set of values that you computed should have pulled up an interesting result. While the mean trip time for Subscribers is well under 30 minutes, the mean trip time for Customers is actually above 30 minutes! It will be interesting for us to look at how the trip times are distributed. In order to do this, a new library will be introduced here, matplotlib. Run the cell below to load the library and to generate an example plot.
End of explanation
## Use this and additional cells to collect all of the trip times as a list ##
## and then use pyplot functions to generate a histogram of trip times. ##
def trip_time(filename):
city = filename.split('-')[0].split('/')[-1]
with open(filename,'r') as f_in:
reader = csv.DictReader(f_in)
data = []
for row in reader:
duration_data = duration_in_mins(row,city)
data.append(duration_data)
return data
file = './data/NYC-CitiBike-2016.csv'
duration_plot = trip_time(file)
plt.hist(duration_plot,bins=30)
plt.xlim(0,3000)
plt.title('Trip Duration Of NYC')
plt.xlabel('Duration (m)')
plt.show()
Explanation: In the above cell, we collected fifty trip times in a list, and passed this list as the first argument to the .hist() function. This function performs the computations and creates plotting objects for generating a histogram, but the plot is actually not rendered until the .show() function is executed. The .title() and .xlabel() functions provide some labeling for plot context.
You will now use these functions to create a histogram of the trip times for the city you selected in question 4c. Don't separate the Subscribers and Customers for now: just collect all of the trip times and plot them.
End of explanation
## Use this and additional cells to answer Question 5. ##
def trip_time(filename):
city = filename.split('-')[0].split('/')[-1]
with open(filename,'r') as f_in:
reader = csv.DictReader(f_in)
subscriber = []
customer = []
for row in reader:
if type_of_user(row, city) == 'Subscriber':
duration_data = duration_in_mins(row,city)
if duration_data < 75:
subscriber.append(duration_data)
else:
None
else:
duration_data = duration_in_mins(row,city)
if duration_data < 75:
customer.append(duration_data)
else:
None
return (subscriber,customer)
file = './data/NYC-CitiBike-2016.csv'
subscriber,customer = trip_time(file)
plt.hist(subscriber,bins=10)
plt.title('Trip Duration Of NYC Subscriber')
plt.xlabel('Duration (m)')
plt.show()
plt.hist(customer,bins=10)
plt.title('Trip Duration Of NYC Customer')
plt.xlabel('Duration (m)')
plt.show()
Explanation: If you followed the use of the .hist() and .show() functions exactly like in the example, you're probably looking at a plot that's completely unexpected. The plot consists of one extremely tall bar on the left, maybe a very short second bar, and a whole lot of empty space in the center and right. Take a look at the duration values on the x-axis. This suggests that there are some highly infrequent outliers in the data. Instead of reprocessing the data, you will use additional parameters with the .hist() function to limit the range of data that is plotted. Documentation for the function can be found [here].
Question 5: Use the parameters of the .hist() function to plot the distribution of trip times for the Subscribers in your selected city. Do the same thing for only the Customers. Add limits to the plots so that only trips of duration less than 75 minutes are plotted. As a bonus, set the plots up so that bars are in five-minute wide intervals. For each group, where is the peak of each distribution? How would you describe the shape of each distribution?
Answer:
SUBSCRIBER: The distribution peaks between roughly 1 and 10 minutes of duration (close to 10,000 trips in that range); from that peak on the left side of the graph the counts decrease steadily towards the right side, i.e. the distribution is right-skewed.
CUSTOMER: The distribution peaks somewhere between 15 and 24 minutes of duration (close to 8,000 trips);
its shape shows a moderate increase from the left side of the graph up to the peak, and then it gradually decreases towards the right side of the graph.
End of explanation
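As a hedged alternative to pre-filtering inside the function, the same 75-minute cap and the bonus five-minute-wide bars can be expressed directly through .hist() parameters (a sketch assuming the subscriber and customer lists built above):
plt.hist(subscriber, bins=15, range=(0, 75))  # 15 bins over 0-75 minutes = 5-minute wide bars
plt.title('Trip Duration Of NYC Subscriber (5-minute bins)')
plt.xlabel('Duration (m)')
plt.show()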
## Use this and additional cells to continue to explore the dataset. ##
## Once you have performed your exploration, document your findings ##
import numpy as np
## in the Markdown cell above.##
def type_user_analysis(filename):
city = filename.split('-')[0].split('/')[-1]
with open(filename,'r') as f_in:
reader = csv.DictReader(f_in)
sub_week_day = []
sub_weekend_days = []
cust_week_day = []
cust_weekend_days = []
trips = 0
for rows in reader:
trips += 1
if type_of_user(rows, city) == 'Subscriber':
week_days = time_of_trip(rows, city)[2]
if week_days =='Saturday' or week_days == 'Sunday':
sub_weekend_days.append(week_days)
else:
sub_week_day.append(week_days)
else:
week_days = time_of_trip(rows, city)[2]
if week_days == 'Saturday' or week_days == 'Sunday':
cust_weekend_days.append(week_days)
else:
cust_week_day.append(week_days)
return (sub_week_day,sub_weekend_days,cust_week_day,cust_weekend_days)
def dayofweek_avg(filename):
city = filename.split('-')[0].split('/')[-1]
with open(filename,'r') as f_in:
trip = csv.DictReader(f_in)
n_weekday = 0
n_weekend = 0
trip_weekday = 0
trip_weekend = 0
for row in trip:
day = time_of_trip(row, city)[2]
if day == 'Saturday' or day == 'Sunday':
n_weekend += 1
trip_weekend += duration_in_mins(row, city)
else:
n_weekday += 1
trip_weekday += duration_in_mins(row, city)
# average duration per ride for each group: divide by that group's own trip count
weekday_avg = trip_weekday / n_weekday
weekend_avg = trip_weekend / n_weekend
return (weekday_avg,weekend_avg)
file = './data/NYC-CitiBike-2016.csv'
sub_weekdays , sub_weekends, cust_weekdays, cust_weekends = type_user_analysis(file)
weekday_avg , weekend_avg = dayofweek_avg(file)
subscriber_weekdays = len(sub_weekdays)
subscriber_weekends = len(sub_weekends)
customer_weekdays = len(cust_weekdays)
customer_weekends = len(cust_weekends)
sub_object = ('weekdays','weekends')
subscriber = [subscriber_weekdays,subscriber_weekends]
y_pos1 = np.arange(len(sub_object))
plt.bar(y_pos1,subscriber)
plt.title("Subscriber Usage")
plt.xticks(y_pos1,sub_object)
plt.ylabel('Number of Trips')
plt.show()
cust_object = ('weekdays','weekends')
customer = [customer_weekdays,customer_weekends]
y_pos2 = np.arange(len(cust_object))
plt.bar(y_pos2,customer)
plt.title("Customer Usage")
plt.xticks(y_pos2,cust_object)
plt.ylabel('Number of Trips')
plt.show()
trip_avg = ('weekdays','weekend')
avg = [weekday_avg, weekend_avg]
y_pos3= np.arange(len(trip_avg))
plt.bar(y_pos3,avg)
plt.title("Average Usage")
plt.xticks(y_pos3,trip_avg)
plt.ylabel('Average trip duration (minutes)')
plt.show()
Explanation: <a id='eda_continued'></a>
Performing Your Own Analysis
So far, you've performed an initial exploration into the data available. You have compared the relative volume of trips made between three U.S. cities and the ratio of trips made by Subscribers and Customers. For one of these cities, you have investigated differences between Subscribers and Customers in terms of how long a typical trip lasts. Now it is your turn to continue the exploration in a direction that you choose. Here are a few suggestions for questions to explore:
How does ridership differ by month or season? Which month / season has the highest ridership? Does the ratio of Subscriber trips to Customer trips change depending on the month or season?
Is the pattern of ridership different on the weekends versus weekdays? On what days are Subscribers most likely to use the system? What about Customers? Does the average duration of rides change depending on the day of the week?
During what time of day is the system used the most? Is there a difference in usage patterns for Subscribers and Customers?
If any of the questions you posed in your answer to question 1 align with the bullet points above, this is a good opportunity to investigate one of them. As part of your investigation, you will need to create a visualization. If you want to create something other than a histogram, then you might want to consult the Pyplot documentation. In particular, if you are plotting values across a categorical variable (e.g. city, user type), a bar chart will be useful. The documentation page for .bar() includes links at the bottom of the page with examples for you to build off of for your own use.
Question 6: Continue the investigation by exploring another question that could be answered by the data available. Document the question you want to explore below. Your investigation should involve at least two variables and should compare at least two groups. You should also use at least one visualization as part of your explorations.
Answer:
QUESTION: Is the pattern of ridership different on the weekends versus weekdays? On what days are Subscribers most likely to use the system? What about Customers? Does the average duration of rides change depending on the day of the week?
ANSWER ANALYSIS: I analysed weekday versus weekend usage for each user type (Subscribers and Customers). For the first part of the question, the ridership pattern does differ between weekdays and weekends. For the second part, Subscribers take more trips on weekdays than on weekends, and Customers likewise record more weekday trips than weekend trips. For the third part, yes, the average trip duration changes depending on the day of the week.
In conclusion, this visual analysis shows that weekday riders outnumber weekend riders, so allocating more bikes and maintenance capacity to weekdays should improve service and benefit the business.
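As a possible extension of this answer (a sketch only; it reuses the csv module and the time_of_trip and duration_in_mins helpers defined in earlier cells), the average duration for each individual day of the week could be computed like this:
def duration_by_day(filename):
    city = filename.split('-')[0].split('/')[-1]
    totals, counts = {}, {}
    with open(filename, 'r') as f_in:
        for row in csv.DictReader(f_in):
            day = time_of_trip(row, city)[2]
            totals[day] = totals.get(day, 0) + duration_in_mins(row, city)
            counts[day] = counts.get(day, 0) + 1
    # mean duration (minutes) per day of week
    return {day: totals[day] / counts[day] for day in totals}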
End of explanation |
5,561 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
RF to B-Mode Generation
Ultrasound imaging employs acoustic pressure waves to gather information describing changes in tissue permeability. A transducer emits an ultrasound beam at discrete time intervals and translates the pressure wave echo into radio frequency waves. These electromagnetic waves are then readable with an analog-to-digital converter for machine input.
The following example notebook showcases how the ITK Ultrasound module may be used to generate a human-readable brightness-mode (B-Mode) image from RF-Mode input collected from a transducer. The input image MouseLiverRF.mha describes one ultrasound capture representing axial, lateral, and time dimensions. The itk.b_mode_image_filter function performs envelope detection on the RF signal along the axial direction and applies a logarithmic intensity transform to the image output for visibility.
Step1: Read RF Image
The example RF image represents raw transducer output describing a mouse liver sample. Taken alone, the RF sample fails to deliver good insight into the mouse liver image. We must perform envelope detection and brightness filtering on the image in order to generate a BMode image for analysis.
Step2: The RF image represents transducer results over a certain time period with regards to the axial direction (along the beam) and the lateral direction (normal to the beam). Data along the axial direction describes how the ultrasound wave echo propagated through the image medium. Here we use matplotlib to visualize the waveform at a fixed lateral position at T=0.
Step3: Perform Envelope Detection
To understand how the pressure wave echo travelled through the imaging medium we apply amplitude demodulation through envelope detection.
Step4: Apply Logarithmic Transform
To get data on a more useful scale for analysis we apply a log10 transform over the intensity image. Other transforms such as smoothing and clamping may also be applied to help translate the intensity image into a human-interpretable format.
Step5: Use BModeImageFilter for RF-to-BMode image generation
itk.BModeImageFilter combines envelope detection and logarithmic transformation to produce a straightforward pipeline for BMode image generation. Note that MouseLiverB.mha is produced using a similar method but with additional transformations for visibility and is intended only for general comparison here.
Step6: Beamline Over Time
We can plot pixel values along the axial direction over time to investigate how the BMode filter transforms data. | Python Code:
# Install notebook dependencies
import sys
#!{sys.executable} -m pip install itk itk-ultrasound numpy matplotlib itkwidgets
import itk
from matplotlib import pyplot as plt
from itkwidgets import view, compare
Explanation: RF to B-Mode Generation
Ultrasound imaging employs acoustic pressure waves to gather information describing changes in tissue permeability. A transducer emits an ultrasound beam at discrete time intervals and translates the pressure wave echo into radio frequency waves. These electromagnetic waves are then readable with an analog-to-digital converter for machine input.
The following example notebook showcases how the ITK Ultrasound module may be used to generate a human-readable brightness-mode (B-Mode) image from RF-Mode input collected from a transducer. The input image MouseLiverRF.mha describes one ultrasound capture representing axial, lateral, and time dimensions. The itk.b_mode_image_filter function performs envelope detection on the RF signal along the axial direction and applies a logarithmic intensity transform to the image output for visibility.
End of explanation
SAMPLING_FREQUENCY = 40e6 # Hz
SAMPLING_PERIOD = SAMPLING_FREQUENCY ** -1
# Image dimensions: [axial, lateral, frame]
rf_image = itk.imread('Input/MouseLiverRF.mha', itk.F)
print(itk.size(rf_image))
view(rf_image)
Explanation: Read RF Image
The example RF image represents raw transducer output describing a mouse liver sample. Taken alone, the RF sample fails to deliver good insight into the mouse liver image. We must perform envelope detection and brightness filtering on the image in order to generate a BMode image for analysis.
End of explanation
def visualize_beam_waveform(image, lateral_idx=0, time_idx=0):
    arr = itk.array_view_from_image(image)
    # convert the sample index to time in microseconds so the values match the axis label
    x_labels = [idx * SAMPLING_PERIOD * 1e6 for idx in range(arr.shape[2])]
    plt.plot(x_labels, arr[time_idx, lateral_idx, :])
    plt.xlabel('Sampling Time ($\mu$s)')
    plt.ylabel('Response')
    plt.show()
# Visualize one RF waveform
visualize_beam_waveform(rf_image)
Explanation: The RF image represents transducer results over a certain time period with regards to the axial direction (along the beam) and the lateral direction (normal to the beam). Data along the axial direction describes how the ultrasound wave echo propagated through the image medium. Here we use matplotlib to visualize the waveform at a fixed lateral position at T=0.
End of explanation
# Note that dimension in direction of analytic signal filter must be
# a multiple of 2,3,5 for the FFT to be valid
padded_image = itk.fft_pad_image_filter(rf_image)
complex_image = itk.analytic_signal_image_filter(padded_image, direction=0)
modulus_image = itk.complex_to_modulus_image_filter(complex_image)
visualize_beam_waveform(modulus_image)
view(modulus_image)
Explanation: Perform Envelope Detection
To understand how the pressure wave echo travelled through the imaging medium we apply amplitude demodulation through envelope detection.
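For intuition, the same envelope can also be sketched outside ITK with SciPy's Hilbert transform on the raw array. This is an illustrative alternative only, not the ITK pipeline used in this notebook, and it assumes NumPy and SciPy are installed.
import numpy as np
from scipy.signal import hilbert

rf_arr = itk.array_view_from_image(rf_image)   # NumPy view, shape [frame, lateral, axial]
envelope = np.abs(hilbert(rf_arr, axis=-1))    # magnitude of the analytic signal along the axial axis
plt.plot(envelope[0, 0, :])
plt.title("Envelope of one beam line (SciPy sketch)")
plt.show()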
End of explanation
log_image = itk.log10_image_filter(modulus_image)
visualize_beam_waveform(log_image)
view(log_image)
Explanation: Apply Logarithmic Transform
To get data on a more useful scale for analysis we apply a log10 transform over the intensity image. Other transforms such as smoothing and clamping may also be applied to help translate the intensity image into a human-interpretable format.
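As one possible example of such a transform (a sketch that is not part of the original pipeline and assumes NumPy is available), the dynamic range of the log image computed above can be clipped to a percentile window before display:
import numpy as np

log_arr = itk.array_view_from_image(log_image)
low, high = np.percentile(log_arr, [1, 99])    # illustrative window bounds
clipped = np.clip(log_arr, low, high)
plt.imshow(clipped[0], cmap="gray", aspect="auto")
plt.title("Log-compressed frame with clipped dynamic range (sketch)")
plt.show()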
End of explanation
bmode_image = itk.imread('Input/MouseLiverB.mha', itk.F)
filtered_image = itk.b_mode_image_filter(rf_image,direction=1)
compare(filtered_image, bmode_image)
Explanation: Use BModeImageFilter for RF-to-BMode image generation
itk.BModeImageFilter combines envelope detection and logarithmic transformation to produce a straightforward pipeline for BMode image generation. Note that MouseLiverB.mha is produced using a similar method but with additional transformations for visibility and is intended only for general comparison here.
End of explanation
# BMode Image (calculated)
visualize_beam_waveform(filtered_image)
# BMode Image (expected)
arr = itk.array_view_from_image(bmode_image)
plt.plot(arr[0,:,0])
plt.show()
Explanation: Beamline Over Time
We can plot pixel values along the axial direction over time to investigate how the BMode filter transforms data.
End of explanation |
5,562 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Writing Keras Models With TensorFlow NumPy
Author
Step1: Optionally, you can call tnp.experimental_enable_numpy_behavior() to enable type promotion in TensorFlow.
This allows TNP to more closely follow the NumPy standard.
Step2: To test our models we will use the Boston housing prices regression dataset.
Step3: Subclassing keras.Model with TNP
The most flexible way to make use of the Keras API is to subclass the
keras.Model class. Subclassing the Model class
gives you the ability to fully customize what occurs in the training loop. This makes
subclassing Model a popular option for researchers.
In this example, we will implement a Model subclass that performs regression over the
boston housing dataset using the TNP API. Note that differentiation and gradient
descent is handled automatically when using the TNP API alongside keras.
First let's define a simple TNPForwardFeedRegressionNetwork class.
Step4: Just like with any other Keras model we can utilize any supported optimizer, loss,
metrics or callbacks that we want.
Let's see how the model performs!
Step5: Great! Our model seems to be effectively learning to solve the problem at hand.
We can also write our own custom loss function using TNP.
Step6: Implementing a Keras Layer Based Model with TNP
If desired, TNP can also be used in layer oriented Keras code structure. Let's
implement the same model, but using a layered approach!
Step7: You can also seamlessly switch between TNP layers and native Keras layers!
Step8: The Keras API offers a wide variety of layers. The ability to use them alongside NumPy
code can be a huge time saver in projects.
Distribution Strategy
TensorFlow NumPy and Keras integrate with
TensorFlow Distribution Strategies.
This makes it simple to perform distributed training across multiple GPUs,
or even an entire TPU Pod.
Step9: TensorBoard Integration
One of the many benefits of using the Keras API is the ability to monitor training
through TensorBoard. Using the TensorFlow NumPy API alongside Keras allows you to easily
leverage TensorBoard.
Step10: To load the TensorBoard from a Jupyter notebook, you can run the following magic | Python Code:
import tensorflow as tf
import tensorflow.experimental.numpy as tnp
import keras
import keras.layers as layers
import numpy as np
Explanation: Writing Keras Models With TensorFlow NumPy
Author: lukewood<br>
Date created: 2021/08/28<br>
Last modified: 2021/08/28<br>
Description: Overview of how to use the TensorFlow NumPy API to write Keras models.
Introduction
NumPy is a hugely successful Python linear algebra library.
TensorFlow recently launched tf_numpy, a
TensorFlow implementation of a large subset of the NumPy API.
Thanks to tf_numpy, you can write Keras layers or models in the NumPy style!
The TensorFlow NumPy API has full integration with the TensorFlow ecosystem.
Features such as automatic differentiation, TensorBoard, Keras model callbacks,
TPU distribution and model exporting are all supported.
Let's run through a few examples.
Setup
TensorFlow NumPy requires TensorFlow 2.5 or later.
End of explanation
tnp.experimental_enable_numpy_behavior()
Explanation: Optionally, you can call tnp.experimental_enable_numpy_behavior() to enable type promotion in TensorFlow.
This allows TNP to more closely follow the NumPy standard.
End of explanation
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.boston_housing.load_data(
path="boston_housing.npz", test_split=0.2, seed=113
)
def evaluate_model(model: keras.Model):
[loss, percent_error] = model.evaluate(x_test, y_test, verbose=0)
print("Mean absolute percent error before training: ", percent_error)
model.fit(x_train, y_train, epochs=200, verbose=0)
[loss, percent_error] = model.evaluate(x_test, y_test, verbose=0)
print("Mean absolute percent error after training:", percent_error)
Explanation: To test our models we will use the Boston housing prices regression dataset.
End of explanation
class TNPForwardFeedRegressionNetwork(keras.Model):
def __init__(self, blocks=None, **kwargs):
super(TNPForwardFeedRegressionNetwork, self).__init__(**kwargs)
if not isinstance(blocks, list):
raise ValueError(f"blocks must be a list, got blocks={blocks}")
self.blocks = blocks
self.block_weights = None
self.biases = None
def build(self, input_shape):
current_shape = input_shape[1]
self.block_weights = []
self.biases = []
for i, block in enumerate(self.blocks):
self.block_weights.append(
self.add_weight(
shape=(current_shape, block), trainable=True, name=f"block-{i}"
)
)
self.biases.append(
self.add_weight(shape=(block,), trainable=True, name=f"bias-{i}")
)
current_shape = block
self.linear_layer = self.add_weight(
shape=(current_shape, 1), name="linear_projector", trainable=True
)
def call(self, inputs):
activations = inputs
for w, b in zip(self.block_weights, self.biases):
activations = tnp.matmul(activations, w) + b
# ReLu activation function
activations = tnp.maximum(activations, 0.0)
return tnp.matmul(activations, self.linear_layer)
Explanation: Subclassing keras.Model with TNP
The most flexible way to make use of the Keras API is to subclass the
keras.Model class. Subclassing the Model class
gives you the ability to fully customize what occurs in the training loop. This makes
subclassing Model a popular option for researchers.
In this example, we will implement a Model subclass that performs regression over the
boston housing dataset using the TNP API. Note that differentiation and gradient
descent is handled automatically when using the TNP API alongside keras.
First let's define a simple TNPForwardFeedRegressionNetwork class.
End of explanation
model = TNPForwardFeedRegressionNetwork(blocks=[3, 3])
model.compile(
optimizer="adam",
loss="mean_squared_error",
metrics=[keras.metrics.MeanAbsolutePercentageError()],
)
evaluate_model(model)
Explanation: Just like with any other Keras model we can utilize any supported optimizer, loss,
metrics or callbacks that we want.
Let's see how the model performs!
End of explanation
def tnp_mse(y_true, y_pred):
return tnp.mean(tnp.square(y_true - y_pred), axis=0)
keras.backend.clear_session()
model = TNPForwardFeedRegressionNetwork(blocks=[3, 3])
model.compile(
optimizer="adam",
loss=tnp_mse,
metrics=[keras.metrics.MeanAbsolutePercentageError()],
)
evaluate_model(model)
Explanation: Great! Our model seems to be effectively learning to solve the problem at hand.
We can also write our own custom loss function using TNP.
End of explanation
def tnp_relu(x):
return tnp.maximum(x, 0)
class TNPDense(keras.layers.Layer):
def __init__(self, units, activation=None):
super().__init__()
self.units = units
self.activation = activation
def build(self, input_shape):
self.w = self.add_weight(
name="weights",
shape=(input_shape[1], self.units),
initializer="random_normal",
trainable=True,
)
self.bias = self.add_weight(
name="bias",
shape=(self.units,),
initializer="random_normal",
trainable=True,
)
def call(self, inputs):
outputs = tnp.matmul(inputs, self.w) + self.bias
if self.activation:
return self.activation(outputs)
return outputs
def create_layered_tnp_model():
return keras.Sequential(
[
TNPDense(3, activation=tnp_relu),
TNPDense(3, activation=tnp_relu),
TNPDense(1),
]
)
model = create_layered_tnp_model()
model.compile(
optimizer="adam",
loss="mean_squared_error",
metrics=[keras.metrics.MeanAbsolutePercentageError()],
)
model.build((None, 13,))
model.summary()
evaluate_model(model)
Explanation: Implementing a Keras Layer Based Model with TNP
If desired, TNP can also be used in layer oriented Keras code structure. Let's
implement the same model, but using a layered approach!
End of explanation
def create_mixed_model():
return keras.Sequential(
[
TNPDense(3, activation=tnp_relu),
# The model will have no issue using a normal Dense layer
layers.Dense(3, activation="relu"),
# ... or switching back to tnp layers!
TNPDense(1),
]
)
model = create_mixed_model()
model.compile(
optimizer="adam",
loss="mean_squared_error",
metrics=[keras.metrics.MeanAbsolutePercentageError()],
)
model.build((None, 13,))
model.summary()
evaluate_model(model)
Explanation: You can also seamlessly switch between TNP layers and native Keras layers!
End of explanation
gpus = tf.config.list_logical_devices("GPU")
if gpus:
strategy = tf.distribute.MirroredStrategy(gpus)
else:
    # We can fall back to a no-op CPU strategy.
strategy = tf.distribute.get_strategy()
print("Running with strategy:", str(strategy.__class__.__name__))
with strategy.scope():
model = create_layered_tnp_model()
model.compile(
optimizer="adam",
loss="mean_squared_error",
metrics=[keras.metrics.MeanAbsolutePercentageError()],
)
model.build((None, 13,))
model.summary()
evaluate_model(model)
Explanation: The Keras API offers a wide variety of layers. The ability to use them alongside NumPy
code can be a huge time saver in projects.
Distribution Strategy
TensorFlow NumPy and Keras integrate with
TensorFlow Distribution Strategies.
This makes it simple to perform distributed training across multiple GPUs,
or even an entire TPU Pod.
End of explanation
keras.backend.clear_session()
Explanation: TensorBoard Integration
One of the many benefits of using the Keras API is the ability to monitor training
through TensorBoard. Using the TensorFlow NumPy API alongside Keras allows you to easily
leverage TensorBoard.
End of explanation
models = [
(TNPForwardFeedRegressionNetwork(blocks=[3, 3]), "TNPForwardFeedRegressionNetwork"),
(create_layered_tnp_model(), "layered_tnp_model"),
(create_mixed_model(), "mixed_model"),
]
for model, model_name in models:
model.compile(
optimizer="adam",
loss="mean_squared_error",
metrics=[keras.metrics.MeanAbsolutePercentageError()],
)
model.fit(
x_train,
y_train,
epochs=200,
verbose=0,
callbacks=[keras.callbacks.TensorBoard(log_dir=f"logs/{model_name}")],
)
Explanation: To load the TensorBoard from a Jupyter notebook, you can run the following magic:
%load_ext tensorboard
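and then point it at the log directory written by the callbacks above, for example:
%tensorboard --logdir logs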
End of explanation |
5,563 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Inverse Regression with Yelp reviews
In this note we'll use gensim to turn the Word2Vec machinery into a document classifier, as in Document Classification by Inversion of Distributed Language Representations from ACL 2015.
Data and prep
First, download to the same directory as this note the data from the Yelp recruiting contest on kaggle
Step1: First, we define a super simple parser
Step2: And put everything together in a review generator that provides tokenized sentences and the number of stars for every review.
Step3: For example
Step4: Now, since the files are small we'll just read everything into in-memory lists. It takes a minute ...
Step5: Finally, write a function to generate sentences -- ordered lists of words -- from reviews that have certain star ratings
Step6: Word2Vec modeling
We fit out-of-the-box Word2Vec
Step7: Build vocab from all sentences (you could also pre-train the base model from a neutral or un-labeled vocabulary)
Step8: Now, we will deep copy each base model and do star-specific training. This is where the big computations happen...
Step10: Inversion of the distributed representations
At this point, we have 5 different word2vec language representations. Each 'model' has been trained conditional (i.e., limited to) text from a specific star rating. We will apply Bayes rule to go from p(text|stars) to p(stars|text).
For any new sentence we can obtain its likelihood (lhd; actually, the composite likelihood approximation; see the paper) using the score function in the word2vec class. We get the likelihood for each sentence in the first test review, then convert to a probability over star ratings. Every sentence in the review is evaluated separately and the final star rating of the review is an average vote of all the sentences. This is all in the following handy wrapper.
Step11: Test set example
As an example, we apply the inversion on the full test set. | Python Code:
# ### uncomment below if you want...
# ## ... copious amounts of logging info
# import logging
# logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
# rootLogger = logging.getLogger()
# rootLogger.setLevel(logging.INFO)
# ## ... or auto-reload of gensim during development
# %load_ext autoreload
# %autoreload 2
Explanation: Deep Inverse Regression with Yelp reviews
In this note we'll use gensim to turn the Word2Vec machinery into a document classifier, as in Document Classification by Inversion of Distributed Language Representations from ACL 2015.
Data and prep
First, download to the same directory as this note the data from the Yelp recruiting contest on kaggle:
* https://www.kaggle.com/c/yelp-recruiting/download/yelp_training_set.zip
* https://www.kaggle.com/c/yelp-recruiting/download/yelp_test_set.zip
You'll need to sign-up for kaggle.
You can then unpack the data and grab the information we need.
End of explanation
import re
contractions = re.compile(r"'|-|\"")
# all non alphanumeric
symbols = re.compile(r'(\W+)', re.U)
# single character removal
singles = re.compile(r'(\s\S\s)', re.I|re.U)
# separators (any whitespace)
seps = re.compile(r'\s+')
# cleaner (order matters)
def clean(text):
text = text.lower()
text = contractions.sub('', text)
text = symbols.sub(r' \1 ', text)
text = singles.sub(' ', text)
text = seps.sub(' ', text)
return text
# sentence splitter
alteos = re.compile(r'([!\?])')
def sentences(l):
l = alteos.sub(r' \1 .', l).rstrip("(\.)*\n")
return l.split(".")
Explanation: First, we define a super simple parser
End of explanation
from zipfile import ZipFile
import json
def YelpReviews(label):
with ZipFile("yelp_%s_set.zip"%label, 'r') as zf:
with zf.open("yelp_%s_set/yelp_%s_set_review.json"%(label,label)) as f:
for line in f:
rev = json.loads(line)
yield {'y':rev['stars'],\
'x':[clean(s).split() for s in sentences(rev['text'])]}
Explanation: And put everything together in a review generator that provides tokenized sentences and the number of stars for every review.
End of explanation
YelpReviews("test").next()
Explanation: For example:
End of explanation
revtrain = list(YelpReviews("training"))
print(len(revtrain), "training reviews")
## and shuffle just in case they are ordered
import numpy as np
np.random.shuffle(revtrain)
Explanation: Now, since the files are small we'll just read everything into in-memory lists. It takes a minute ...
End of explanation
def StarSentences(reviews, stars=[1,2,3,4,5]):
for r in reviews:
if r['y'] in stars:
for s in r['x']:
yield s
Explanation: Finally, write a function to generate sentences -- ordered lists of words -- from reviews that have certain star ratings
End of explanation
from gensim.models import Word2Vec
import multiprocessing
## create a w2v learner
basemodel = Word2Vec(
workers=multiprocessing.cpu_count(), # use your cores
iter=3, # iter = sweeps of SGD through the data; more is better
hs=1, negative=0 # we only have scoring for the hierarchical softmax setup
)
print(basemodel)
Explanation: Word2Vec modeling
We fit out-of-the-box Word2Vec
End of explanation
basemodel.build_vocab(StarSentences(revtrain))
Explanation: Build vocab from all sentences (you could also pre-train the base model from a neutral or un-labeled vocabulary)
End of explanation
from copy import deepcopy
starmodels = [deepcopy(basemodel) for i in range(5)]
for i in range(5):
slist = list(StarSentences(revtrain, [i+1]))
    print(i+1, "stars (", len(slist), ")")
starmodels[i].train( slist, total_examples=len(slist) )
Explanation: Now, we will deep copy each base model and do star-specific training. This is where the big computations happen...
End of explanation
"""
docprob takes two lists
* docs: a list of documents, each of which is a list of sentences
* models: the candidate word2vec models (each potential class)
it returns the array of class probabilities. Everything is done in-memory.
"""
import pandas as pd # for quick summing within doc
def docprob(docs, mods):
# score() takes a list [s] of sentences here; could also be a sentence generator
sentlist = [s for d in docs for s in d]
# the log likelihood of each sentence in this review under each w2v representation
llhd = np.array( [ m.score(sentlist, len(sentlist)) for m in mods ] )
# now exponentiate to get likelihoods,
lhd = np.exp(llhd - llhd.max(axis=0)) # subtract row max to avoid numeric overload
# normalize across models (stars) to get sentence-star probabilities
prob = pd.DataFrame( (lhd/lhd.sum(axis=0)).transpose() )
# and finally average the sentence probabilities to get the review probability
prob["doc"] = [i for i,d in enumerate(docs) for s in d]
prob = prob.groupby("doc").mean()
return prob
Explanation: Inversion of the distributed representations
At this point, we have 5 different word2vec language representations. Each 'model' has been trained conditional (i.e., limited to) text from a specific star rating. We will apply Bayes rule to go from p(text|stars) to p(stars|text).
For any new sentence we can obtain its likelihood (lhd; actually, the composite likelihood approximation; see the paper) using the score function in the word2vec class. We get the likelihood for each sentence in the first test review, then convert to a probability over star ratings. Every sentence in the review is evaluated separately and the final star rating of the review is an average vote of all the sentences. This is all in the following handy wrapper.
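Concretely, assuming a uniform prior over the five star classes (which is what the wrapper below implicitly does), Bayes rule reduces to a softmax over the per-class log likelihoods: p(s | text) = exp(log p(text | s)) / sum_r exp(log p(text | r)). This is exactly the exponentiate-and-normalize step inside docprob.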
End of explanation
# read in the test set
revtest = list(YelpReviews("test"))
# get the probs (note we give docprob a list of lists of words, plus the models)
probs = docprob( [r['x'] for r in revtest], starmodels )
%matplotlib inline
probpos = pd.DataFrame({"out-of-sample prob positive":probs[[3,4]].sum(axis=1),
"true stars":[r['y'] for r in revtest]})
probpos.boxplot("out-of-sample prob positive",by="true stars", figsize=(12,5))
Explanation: Test set example
As an example, we apply the inversion on the full test set.
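If a single predicted rating is wanted rather than class probabilities, it could be recovered from probs like this sketch (column i corresponds to i+1 stars):
predicted_stars = probs.values.argmax(axis=1) + 1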
End of explanation |
5,564 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basic Pie Chart
Step1: Update Data
Step2: Display Values
Step3: Enable sort
Step4: Set different styles for selected slices
Step5: For more on piechart interactions, see the Mark Interactions notebook
Modify label styling
Step6: Update pie shape and style
Step7: Change pie dimensions
Step8: Move the pie around
x and y attributes control the position of the pie in the figure.
If no scales are passed for x and y, they are taken in absolute
figure coordinates, between 0 and 1.
Step9: Change slice styles
Pie slice colors cycle through the colors and opacities attribute, as the Lines Mark.
Step10: Represent an additional dimension using Color
The Pie allows for its colors to be determined by data, that is passed to the color attribute.
A ColorScale with the desired color scheme must also be passed. | Python Code:
# Imports for this example (assumed to come from the notebook's setup cell, which is not shown in this excerpt)
import string
import numpy as np
from bqplot import pyplot as plt

data = np.random.rand(3)
fig = plt.figure(animation_duration=1000)
pie = plt.pie(data, display_labels="outside", labels=list(string.ascii_uppercase))
fig
Explanation: Basic Pie Chart
End of explanation
n = np.random.randint(1, 10)
pie.sizes = np.random.rand(n)
Explanation: Update Data
End of explanation
with pie.hold_sync():
pie.display_values = True
pie.values_format = ".1f"
Explanation: Display Values
End of explanation
pie.sort = True
Explanation: Enable sort
End of explanation
pie.selected_style = {"opacity": 1, "stroke": "white", "stroke-width": 2}
pie.unselected_style = {"opacity": 0.2}
pie.selected = [1]
pie.selected = None
Explanation: Set different styles for selected slices
End of explanation
pie.label_color = "Red"
pie.font_size = "20px"
pie.font_weight = "bold"
Explanation: For more on piechart interactions, see the Mark Interactions notebook
Modify label styling
End of explanation
fig1 = plt.figure(animation_duration=1000)
pie1 = plt.pie(np.random.rand(6), inner_radius=0.05)
fig1
Explanation: Update pie shape and style
End of explanation
# As of now, the radius sizes are absolute, in pixels
with pie1.hold_sync():
pie1.radius = 150
pie1.inner_radius = 100
# Angles are in radians, 0 being the top vertical
with pie1.hold_sync():
pie1.start_angle = -90
pie1.end_angle = 90
Explanation: Change pie dimensions
End of explanation
pie1.y = 0.1
pie1.x = 0.6
pie1.radius = 180
Explanation: Move the pie around
x and y attributes control the position of the pie in the figure.
If no scales are passed for x and y, they are taken in absolute
figure coordinates, between 0 and 1.
End of explanation
pie1.stroke = "brown"
pie1.colors = ["orange", "darkviolet"]
pie1.opacities = [0.1, 1]
fig1
Explanation: Change slice styles
Pie slice colors cycle through the colors and opacities attribute, as the Lines Mark.
End of explanation
from bqplot import ColorScale, ColorAxis
n = 7
size_data = np.random.rand(n)
color_data = np.random.randn(n)
fig2 = plt.figure()
plt.scales(scales={"color": ColorScale(scheme="Reds")})
pie2 = plt.pie(size_data, color=color_data)
fig2
Explanation: Represent an additional dimension using Color
The Pie allows for its colors to be determined by data, that is passed to the color attribute.
A ColorScale with the desired color scheme must also be passed.
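Because color is a regular mark attribute, assigning new values re-colors the slices in place, for example:
pie2.color = np.random.randn(n)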
End of explanation |
5,565 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocean
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables
Is Required
Step9: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required
Step10: 2.2. Eos Functional Temp
Is Required
Step11: 2.3. Eos Functional Salt
Is Required
Step12: 2.4. Eos Functional Depth
Is Required
Step13: 2.5. Ocean Freezing Point
Is Required
Step14: 2.6. Ocean Specific Heat
Is Required
Step15: 2.7. Ocean Reference Density
Is Required
Step16: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required
Step17: 3.2. Type
Is Required
Step18: 3.3. Ocean Smoothing
Is Required
Step19: 3.4. Source
Is Required
Step20: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatement in ocean
4.1. Isolated Seas
Is Required
Step21: 4.2. River Mouth
Is Required
Step22: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required
Step23: 5.2. Code Version
Is Required
Step24: 5.3. Code Languages
Is Required
Step25: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required
Step26: 6.2. Canonical Horizontal Resolution
Is Required
Step27: 6.3. Range Horizontal Resolution
Is Required
Step28: 6.4. Number Of Horizontal Gridpoints
Is Required
Step29: 6.5. Number Of Vertical Levels
Is Required
Step30: 6.6. Is Adaptive Grid
Is Required
Step31: 6.7. Thickness Level 1
Is Required
Step32: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required
Step33: 7.2. Global Mean Metrics Used
Is Required
Step34: 7.3. Regional Metrics Used
Is Required
Step35: 7.4. Trend Metrics Used
Is Required
Step36: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required
Step37: 8.2. Scheme
Is Required
Step38: 8.3. Consistency Properties
Is Required
Step39: 8.4. Corrected Conserved Prognostic Variables
Is Required
Step40: 8.5. Was Flux Correction Used
Is Required
Step41: 9. Grid
Ocean grid
9.1. Overview
Is Required
Step42: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required
Step43: 10.2. Partial Steps
Is Required
Step44: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required
Step45: 11.2. Staggering
Is Required
Step46: 11.3. Scheme
Is Required
Step47: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required
Step48: 12.2. Diurnal Cycle
Is Required
Step49: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required
Step50: 13.2. Time Step
Is Required
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required
Step52: 14.2. Scheme
Is Required
Step53: 14.3. Time Step
Is Required
Step54: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required
Step55: 15.2. Time Step
Is Required
Step56: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required
Step57: 17. Advection
Ocean advection
17.1. Overview
Is Required
Step58: 18. Advection --> Momentum
Properties of lateral momemtum advection scheme in ocean
18.1. Type
Is Required
Step59: 18.2. Scheme Name
Is Required
Step60: 18.3. ALE
Is Required
Step61: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required
Step62: 19.2. Flux Limiter
Is Required
Step63: 19.3. Effective Order
Is Required
Step64: 19.4. Name
Is Required
Step65: 19.5. Passive Tracers
Is Required
Step66: 19.6. Passive Tracers Advection
Is Required
Step67: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required
Step68: 20.2. Flux Limiter
Is Required
Step69: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required
Step70: 21.2. Scheme
Is Required
Step71: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required
Step72: 22.2. Order
Is Required
Step73: 22.3. Discretisation
Is Required
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean
23.1. Type
Is Required
Step75: 23.2. Constant Coefficient
Is Required
Step76: 23.3. Variable Coefficient
Is Required
Step77: 23.4. Coeff Background
Is Required
Step78: 23.5. Coeff Backscatter
Is Required
Step79: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required
Step80: 24.2. Submesoscale Mixing
Is Required
Step81: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required
Step82: 25.2. Order
Is Required
Step83: 25.3. Discretisation
Is Required
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required
Step85: 26.2. Constant Coefficient
Is Required
Step86: 26.3. Variable Coefficient
Is Required
Step87: 26.4. Coeff Background
Is Required
Step88: 26.5. Coeff Backscatter
Is Required
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required
Step90: 27.2. Constant Val
Is Required
Step91: 27.3. Flux Type
Is Required
Step92: 27.4. Added Diffusivity
Is Required
Step93: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required
Step96: 30.2. Closure Order
Is Required
Step97: 30.3. Constant
Is Required
Step98: 30.4. Background
Is Required
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required
Step100: 31.2. Closure Order
Is Required
Step101: 31.3. Constant
Is Required
Step102: 31.4. Background
Is Required
Step103: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean *
32.1. Convection Type
Is Required
Step104: 32.2. Tide Induced Mixing
Is Required
Step105: 32.3. Double Diffusion
Is Required
Step106: 32.4. Shear Mixing
Is Required
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean *
33.1. Type
Is Required
Step108: 33.2. Constant
Is Required
Step109: 33.3. Profile
Is Required
Step110: 33.4. Background
Is Required
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required
Step112: 34.2. Constant
Is Required
Step113: 34.3. Profile
Is Required
Step114: 34.4. Background
Is Required
Step115: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required
Step116: 35.2. Scheme
Is Required
Step117: 35.3. Embeded Seaice
Is Required
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required
Step119: 36.2. Type Of Bbl
Is Required
Step120: 36.3. Lateral Mixing Coef
Is Required
Step121: 36.4. Sill Overflow
Is Required
Step122: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required
Step123: 37.2. Surface Pressure
Is Required
Step124: 37.3. Momentum Flux Correction
Is Required
Step125: 37.4. Tracers Flux Correction
Is Required
Step126: 37.5. Wave Effects
Is Required
Step127: 37.6. River Runoff Budget
Is Required
Step128: 37.7. Geothermal Heating
Is Required
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required
Step132: 40.2. Ocean Colour
Is Required
Step133: 40.3. Extinction Depth
Is Required
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required
Step135: 41.2. From Sea Ice
Is Required
Step136: 41.3. Forced Mode Restoring
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ipsl', 'sandbox-3', 'ocean')
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: IPSL
Source ID: SANDBOX-3
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:45
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
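For example, with purely hypothetical values: DOC.set_author("Jane Doe", "jane.doe@example.org")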
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
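For example, a model using TEOS-10 would record (illustrative only): DOC.set_value("TEOS 2010")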
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatement in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different than active ? If so, describe.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e. Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Properties of boundary layer (BL) mixing on tracers in the ocean
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Properties of boundary layer (BL) mixing on momentum in the ocean
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
Properties of interior mixing in the ocean
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
Properties of interior mixing on tracers in the ocean
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e. is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
Properties of interior mixing on momentum in the ocean
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e. is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinctions depths for sunlight penetration scheme (if applicable).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation |
5,566 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<div align="right">Python 3.6 Jupyter Notebook</div>
Privacy by design
Step1: 1.2 Calculate uniqueness
In order to calculate uniqueness, as defined earlier, you need to define a function that accepts an input data set and a list of features to be used to evaluate the data set. The output indicates the proportion of records in the data set that can be uniquely identified using the provided features.
Step2: The results indicate that about 20% of the individuals could potentially be identified using two features ("sex" and "birthday"), and 99% of the population could potentially be reidentified using three features ("sex", "birthday", and "zip").
1.3 K-anonymity
As per the earlier definition, k-anonymity is calculated as the size of the smallest group of records returned, based on the grouping parameters provided. The code cell below defines a function that takes an input data set and a list of features used to group the record set when performing the evaluation. The function returns the minimum count of records grouped by these features.
Step3: In this example, you will notice a value of one. This implies that the minimum number of individuals with a unique combination of the provided features is one, and that there is a significant risk of potential attackers being able to reidentify individuals in the data set.
Typically, the goal would be to not have any groups smaller than k, where k is defined by your organizational or industry standards. The typical target values observed range from six to eight.
Note
Step4: 1.4 Coarsening of data
In this section, you will coarsen the data using a number of different techniques. It should be noted that the granularity or accuracy of the data set is lost in order to preserve the privacy of the records for individuals in the data set.
1.4.1 Remove the zip code
As mentioned before, the district is contained in the first two characters of the zip code. In order to retain the district when coarsening the data, a simple programmatic transformation (shown below) can be applied. After applying this transformation, you can choose to expose the "zip_district" to end users, instead of the more granular "zip".
Step5: 1.4.2 Coarsen the data from birthday to birth year
Similar to the previous exercise, you can expose the birth year, instead of the birthday, as demonstrated in the code cell below.
Step6: 1.4.3 Coarsen the data from birthday to birth decade
You can reduce granularity to decade-level instead of year-level, as seen in Section 1.4.2, with the code demonstrated below.
Step7: 1.5 Suppression
This refers to the suppression of all of the groups that are smaller than the desired k. In many cases, you will reach a point where you will have to coarsen data to the point of destroying its utility. Removing records can be problematic because you may remove the records of interest to a particular question (such as 1% of data with a link to a particular feature).
Step8: 2. Privacy considerations for big data
Big data sets typically differ from traditional data sets in terms of the following
Step9: 2.1.1 Example
Step10: Compute the unicity for a single data point (with 3 iterations)
In this example, you will use one datapoint and three iterations. The result will vary, based on the selected sample, but will indicate that about 35% of the individuals in the sample could potentially be identified using a single datapoint.
Note
Step11: 2.2 Unicity levels in big data data sets and their consequences
de Montjoye et al. (2015) have shown that, for 1.5 million people (over the course of a year), four visits to places (location and timestamps) are enough to uniquely identify 95% of the users, while for another 1 million people (over 3 months), unicity reached 90% at four points (shop and day), or even 94% at only three points (shop, day, and approximate price). Such ease of identification means that anonymizing the big data of individuals effectively enough to prevent reidentification would also strip it of its utility.
Note
Step12: <br>
<div class="alert alert-info">
<b>Exercise 1 End.</b>
</div>
Exercise complete
Step13: <br>
<div class="alert alert-info">
<b>Exercise 2 Start.</b>
</div>
Instructions
Calculate the unicity of the coarsened mobility data set (samples_10) with the same number of datapoints (four) and iterations (five) as in Exercise 1. You need to execute the same function, and replace the input data set, "samples", with the newly-created, "samples_10", data set.
1. What is the difference between your answer, and the answer provided in the previous exercise, if any?
2. How much does it improve anonymity (if at all)?
3. Is the loss of spatial resolution worth the computational load and effort? | Python Code:
import pandas
# Load the data set.
df = pandas.read_csv('privacy/belgium_100k.csv')
df = df.where((pandas.notnull(df)), None)
df['birthday'] = df['birthday'].astype('datetime64[ns]')
df.head()
Explanation: <div align="right">Python 3.6 Jupyter Notebook</div>
Privacy by design: Big data and personal data protection
Your completion of the notebook exercises will be graded based on your ability to do the following:
Understand: Do your pseudo-code and comments show evidence that you recall and understand technical concepts?
Apply: Are you able to execute code (using the supplied examples) that performs the required functionality on supplied or generated data sets?
Notebook objectives:
By the end of this notebook, you will be expected to:
Understand the importance, challenges, and approaches to personal data protection;
Identify the dimensions across which big data differs from traditional data sets;
Understand the concept of unicity;
Use coarsening to anonymize personal information in data; and
Understand the limitations of anonymization in the context of big data.
List of exercises:
Exercise 1: Calculate the unicity of a raw data set.
Exercise 2: Calculate and interpret the unicity of a coarsened data set.
Exercise 3: Identify limitations of data anonymization in the context of big data, and suggest alternative data-protection mechanisms.
Notebook introduction
In the video content, Cameron Kerry indicated that the law lags too far behind technology to answer many of the hard questions around data protection. He then went on to elaborate that, in many cases, the question becomes not just what you must do, but rather, what you should do in order to establish and maintain a trust relationship.
Sharing data (collected about individuals) between entities poses a risk to privacy and trust, and is regulated in most parts of the world. The European Union recently passed the General Data Protection Regulation (GDPR), which addresses the treatment of personal information, as well as the rights of the individuals whose information has been collected. Penalties are based on a tiered approach, and some infringements can result in fines of up to 4% of annual worldwide turnover, and €20 million. It is often the case that the information to be shared needs to be anonymous. In some cases, ensuring anonymity removes the data from the jurisdiction of certain laws. The application of the laws is a complex task that needs to be carefully implemented to ensure compliance. Refer to Stefan Nerinckx’s article on the new EU data protection regime for additional context.
Pseudonymization – the removal of direct identifiers – is the first step to anonymize data. This is achieved by removing direct identifiers such as names, surnames, social insurance numbers, and phone numbers; or by replacing them with random or hashed (and salted – see the NYC taxi cab example) values.
However, cases like William Weld's show that pseudonymization is not sufficient to prevent the reidentification of individuals in pseudonymized data sets. In 1990, the Massachusetts Group Insurance Commission (GIC) released hospital data to researchers for the purpose of improving healthcare and controlling costs. At the time GIC released the data, William Weld, then Governor of Massachusetts, assured the public that GIC had protected patient privacy by deleting identifiers (Sweeney 2002).
Note:
Latanya Sweeney was a graduate student at MIT at that stage. She bought the data, reidentified Governor Weld's medical records, and sent these to him (Sweeney 2002).
Sweeney (2002) later demonstrated that 87% of Americans can be uniquely identified by their zip code, gender, and birth date.
This value (i.e., the percentage of members of the data set who are unique, and thus identifiable, given a couple of quasi-identifiers) has been conceptualized as uniqueness.
While the numerous available sources of data may reveal insights into human behavior, it is important to be sensitive to the legal and ethical considerations when dealing with them. These sources of data include census data, medical records, financial and transaction data, loyalty cards, mobility data, mobile phone data, browsing history and ratings, research-based or observational data, etc.
You can review the seven principles of privacy by design, for more information.
<div class="alert alert-warning">
<b>Note</b>:<br>
It is strongly recommended that you save and checkpoint after applying significant changes or completing exercises. This allows you to return the notebook to a previous state should you wish to do so. On the Jupyter menu, select "File", then "Save and Checkpoint" from the dropdown menu that appears.
</div>
1. Uniqueness and k-anonymity
Uniqueness refers to the fraction of unique records in a particular data set (i.e., the proportion of individuals who are identifiable, given the fields).
The available fields in your data set can typically contain the following:
Identifiers: Attributes that can be used to explicitly identify individuals. These are typically removed from data sets prior to release.
Quasi-identifiers: A subset of attributes that can uniquely identify most individuals in the data set. They are not unique identifiers themselves, but are sufficiently well-correlated with an individual that they can be combined with other quasi-identifiers to create a unique identifier.
Anonymization has been chosen as a strategy to protect personal privacy. K-anonymity is the measure used for anonymization, and is defined below according to Sweeney (2002).
K-anonymity of a data set (given one or more fields) is the size of the smallest group in the data set sharing the same value of the given field(s), or the number of persons having identical values of the fields, rendering them indistinguishable (Sweeney 2002).
For k-anonymity, the person anonymizing the data set needs to decide what the quasi-identifiers are, and what a potential attacker could extract from the provided data set.
Generalization and suppression are the core tools used to anonymize data, and make a data set k-anonymous (Samarati and Sweeney 1998). The privacy-securing methods employed in this paradigm trade off higher k-anonymity against the precision of the data. One of the biggest problems is that this optimization is use-case specific, and therefore depends on the application. Typical methods include the following:
Generalization (or coarsening): Reducing the resolution of the data. For example, date of birth -> year of birth -> decade of birth.
Suppression: Removing rows (from groups in which k is lower than desired) from the data set.
These heuristics typically come with trade-offs. Other techniques (such as noise addition and translation) exist, but provide similar results.
Technical examples of such methods are not of central importance in this course, therefore only the basic components will be repeated below to illustrate the fundamentals of the elements discussed above.
1.1 Load data set
This example uses a synthetic data set created for 100,000 fictional people from Belgium. The zip codes are random numbers adhering to the same standards observed in Belgium, with the first two characters indicating the district.
End of explanation
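# Illustrative sketch (not part of the original notebook): a tiny, made-up table
# showing how generalization raises k-anonymity. On the raw quasi-identifiers every
# row is unique (k = 1); after coarsening birthday to decade and zip to district,
# the smallest group has three records (k = 3).
toy = pandas.DataFrame({'sex': ['M', 'M', 'M', 'F', 'F', 'F'],
                        'birthday': ['1980-01-01', '1980-03-02', '1985-05-05',
                                     '1991-07-15', '1991-11-30', '1992-02-20'],
                        'zip': [1000, 1020, 1000, 2000, 2030, 2000]})
print(toy.groupby(['sex', 'birthday', 'zip']).size().min())
toy['birth_decade'] = toy['birthday'].str[:3] + '0s'
toy['zip_district'] = toy['zip'] // 1000
print(toy.groupby(['sex', 'birth_decade', 'zip_district']).size().min())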
# Define function to evaluate uniqueness of the provided dataset.
def uniqueness(dataframe, pseudo):
groups = list(dataframe.groupby(pseudo).groups.values())
return sum(1. for g in groups if len(g) == 1) / len(dataframe)
print((uniqueness(df, ['zip'])))
print((uniqueness(df, ['sex', 'birthday'])))
print((uniqueness(df, ['sex', 'birthday', 'zip'])))
Explanation: 1.2 Calculate uniqueness
In order to calculate uniqueness, as defined earlier, you need to define a function that accepts an input data set and a list of features to be used to evaluate the data set. The output indicates the proportion of records in the data set that can be uniquely identified using the provided features.
End of explanation
# Define function to evaluate k-anonymity of the provided data set.
def k_anonymity(dataframe, pseudo):
return dataframe.groupby(pseudo).count().min()[0]
print((k_anonymity(df, ['sex', 'birthday', 'zip'])))
Explanation: The results indicate that about 20% of the individuals could potentially be identified using two features ("sex" and "birthday"), and 99% of the population could potentially be reidentified using three features ("sex", "birthday", and "zip").
1.3 K-anonymity
As per the earlier definition of k-anonymity, you can calculate it as the size of the smallest group of records returned, based on the grouping parameters provided. The code cell below defines a function that takes an input data set, and a list of features, to group the recordset when performing the evaluation. The function provides the minimum count of records grouped by these features as the output.
End of explanation
# Use this code cell to review the k-anonymity function with different input parameters.
Explanation: In this example, you will notice a value of one. This implies that the minimum number of individuals with a unique combination of the provided features is one, and that there is a significant risk of potential attackers being able to reidentify individuals in the data set.
Typically, the goal is to have no groups smaller than the target k, as defined by your organizational or industry standards. The typical target values that are observed range from six to eight.
Note:
You can experiment with different combinations of features, or repeat the test with single features, to review the impact on the produced result.
Example: print(k_anonymity(df, ['sex']))
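As a further illustration (a sketch that reuses df and the quasi-identifiers from above; the target k of 6 is only an assumed example value), you could count how many quasi-identifier groups fall below that target:
group_sizes = df.groupby(['sex', 'birthday', 'zip']).size()  # size of every quasi-identifier group
print((group_sizes < 6).sum())  # number of groups smaller than the assumed target k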
End of explanation
# Reduce the zip code to zip district.
df['zip_district'] = [z // 1000 for z in df['zip']]
df[['zip', 'zip_district']].head(3)
Explanation: 1.4 Coarsening of data
In this section, you will coarsen the data using a number of different techniques. It should be noted that the granularity or accuracy of the data set is lost in order to preserve the privacy of the records for individuals in the data set.
1.4.1 Remove the zip code
As mentioned before, the district is contained in the first two characters of the zip code. In order to retain the district when coarsening the data, a simple programmatic transformation (shown below) can be applied. After applying this transformation, you can choose to expose the "zip_district" to end users, instead of the more granular "zip".
End of explanation
# From birthday to birth year.
df['birth_year'] = df['birthday'].map(lambda d: d.year)
df[['birthday', 'birth_year']].head(3)
Explanation: 1.4.2 Coarsen the data from birthday to birth year
Similar to the previous exercise, you can expose the birth year, instead of the birthday, as demonstrated in the code cell below.
End of explanation
# From birthday to birth decade.
df['birth_decade'] = df['birth_year'] // 10 * 10
df[['birthday', 'birth_year', 'birth_decade']].head()
Explanation: 1.4.3 Coarsen the data from birthday to birth decade
You can reduce granularity to decade-level instead of year-level, as seen in Section 1.4.2, with the code demonstrated below.
End of explanation
print((k_anonymity(df, ['sex', 'birth_year', 'zip_district'])))
grouped = df.groupby(['sex', 'birth_year', 'zip_district'])
df_filtered = grouped.filter(lambda x: len(x) > 5)
print(('Reducing size:', len(df), '> ', len(df_filtered)))
print(('K-anonymity after suppression:', k_anonymity(df_filtered, ['sex', 'birth_year', 'zip_district'])))
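# Sketch: quantify how much of the data the suppression step removed (uses df and df_filtered from above).
print(('Fraction of records suppressed:', 1 - len(df_filtered) / float(len(df))))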
Explanation: 1.5 Suppression
This refers to the suppression of all of the groups that are smaller than the desired k. In many cases, you will reach a point where you will have to coarsen data to the point of destroying its utility. Removing records can be problematic because you may remove the records of interest to a particular question (such as 1% of data with a link to a particular feature).
End of explanation
# Function implementing the unicity assessment algorithm.
def draw_points(user, points):
'''IN: a Series; int'''
user.dropna(inplace=True)
indices = np.random.choice(len(user), points, replace=False)
return user[indices]
def is_unique(user_name, points):
'''IN: str, int'''
drawn_p = draw_points(samples[user_name], points)
for other_user in samples.loc[drawn_p.index].drop(user_name, axis=1).as_matrix().T:
if np.equal(drawn_p.values, other_user).all():
return False
return True
def compute_unicity(samples, points):
'''IN:int, int'''
unique_count = .0
users = samples.columns
for user_name in users:
if is_unique(user_name, points):
unique_count += 1
return unique_count / len(samples.columns)
def iterate_unicity(samples, points=4, iterations=10):
'''IN:int, int, int'''
unicities = []
for _ in tqdm(list(range(iterations))):
unicities.append(compute_unicity(samples, points))
return np.mean(unicities)
Explanation: 2. Privacy considerations for big data
Big data sets typically differ from traditional data sets in terms of the following:
- Longitude: Data is typically collected for months, years, or even indefinitely. This is in contrast to snapshots or clearly-defined retention periods.
- Resolution: Datapoints are collected with frequencies that are down to single seconds.
- Features: Features have unprecedented width and detail for behavioral data, including location and mobility, purchases histories, and more.
Many of the traditional measures used to define the uniqueness of individuals, and the strategies used to preserve users' privacy, are no longer sufficient. Instead of uniqueness, which is usually applied to fields consisting of single values, unicity has been proposed (de Montjoye et al. 2015; de Montjoye et al. 2013). Unicity can be used to measure the ease of reidentification of individuals in sets of metadata (such as a user's location over a period of time). Instead of assuming that an attacker knows all of the quasi-identifiers and none of the data, unicity assumes that any datapoint can either be known to the attacker or useful for research, and focuses on quantifying the amount of information that would be needed to uniquely reidentify people. In many cases, data is poorly anonymized. You also need to consider the richness of big data sources when evaluating articles, such as Natasha Singer's article on identifying famous people.
2.1 Unicity of a data set at p datapoints (given one or more fields)
Given one or more fields, the unicity of a dataset at p datapoints refers to:
- The fraction of users who can be uniquely identified by p randomly-chosen points from that field; and
- The approximate number of datapoints needed to reconcile two data sets.
The concept of unicity was originally developed in cryptography, and is based on information theory. Specifically, the unicity distance is a measure of the secrecy of a cryptographic system, and determines how effective it is against third parties gaining access to protected information.
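In that cryptographic setting, the unicity distance is commonly expressed as U = H(K)/D, where H(K) is the entropy of the key and D is the redundancy of the plaintext per character.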
An algorithm for computing unicity is shown below.
Note:
Unicity is well-suited for big data and its metadata, meaning that it is applicable to features containing numerous values (such as a trace, for example, a history of GPS coordinates).
An implementation of the unicity assessment algorithm is given below.
Note:
You do not need to understand the code in the cells below. It is provided as sample implementation for advanced students.
End of explanation
# Load required libraries and methods.
import pandas as pd
import numpy as np
from scipy.stats import rv_discrete
from tqdm import tqdm
%pylab inline
# Load samples of the data set.
samples = pd.read_csv('privacy/mobility_sample_1k.csv', index_col='datetime')
samples.index = samples.index.astype('datetime64[ns]')
samples.head(3)
Explanation: 2.1.1 Example: Assessing the unicity of a data set
In this example, you will use a synthetic data set that simulates the mobility of 1,000 users. The data set contains mobile phone records based on hourly intervals.
Sampling
End of explanation
## Compute unicity.
iterate_unicity(samples, 1, 3)
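# Sketch: a single, non-averaged estimate can also be obtained directly from the
# compute_unicity function defined above.
compute_unicity(samples, 1)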
Explanation: Compute the unicity for a single data point (with 3 iterations)
In this example, you will use one datapoint and three iterations. The result will vary, based on the selected sample, but will indicate that about 35% of the individuals in the sample could potentially be identified using a single datapoint.
Note:
The iterate_unicity function averages several estimates for a more robust result; a single estimation of unicity is performed by the "compute_unicity" function.
End of explanation
# Your code here.
Explanation: 2.2 Unicity levels in big data data sets and their consequences
de Montjoye et al. (2015) have shown that, for 1.5 million people (over the course of a year), four visits to places (location and timestamps) are enough to uniquely identify 95% of the users. For another 1 million people (over 3 months), unicity reached 90% at four points (shop and day), or even 94% at only three points (shop, day, and approximate price). Such ease of identification means that effectively anonymizing the big data of individuals would strip it of its utility.
Note: The “Friends and Family” data set individually transformed the location data of each user, which preserved their privacy well, yet rendered the data unusable for the purposes of this notebook. The “StudentLife” data set, on the other hand, left the GPS records intact, which enabled you to use this as input for Module 3’s notebook exercises. This introduces the risk of attacks through reidentifying individuals by reconciling the GPS records with location services such as Foursquare, Twitter, and Facebook.
<br>
<div class="alert alert-info">
<b>Exercise 1 Start.</b>
</div>
Instructions
Calculate the unicity at four datapoints. Iterate five times for additional accuracy. You can find the syntax in the example calculation above, and change the parameters as required.
Question: Is it big or small? What does this mean for anonymity?
End of explanation
# Load antenna data.
antennas = pd.read_csv("privacy/belgium_antennas.csv")
antennas.set_index('ins', inplace=True)
cluster_10 = pd.read_csv('privacy/clusters_10.csv')
#cluster_10['ins'] = map(int, cluster_10['ins'])
cluster_10['ins'] = list(map(int, cluster_10['ins']))
mapping = dict(cluster_10[['ins', 'cluster']].values)
# Reduce the grain of the data set.
# Requires Numpy version 1.11.
samples_10 = samples.copy()
samples_10 = samples_10.applymap(lambda k: np.nan if np.isnan(k) else mapping[antennas.index[k]])
Explanation: <br>
<div class="alert alert-info">
<b>Exercise 1 End.</b>
</div>
Exercise complete:
This is a good time to "Save and Checkpoint".
2.3 Coarsening
Similarly, here you could try coarsening in order to anonymize the data. However, this approach has been shown to be insufficient in making a data set anonymous.
For more information on coarsening data, read about the implementation and interpretation of results of the "Unique in the Crowd: The privacy bounds of human mobility" study conducted by Yves-Alexandre de Montjoye et al. (2013).
Please review the paper and pay special attention to Figure 4, which demonstrates how the uniqueness of mobility traces (ε) depends on the spatial and temporal resolution of the data. The study found that traces are more unique when coarse on one dimension, and fine along another, than when they are medium-grained along both dimensions. (Unique implies being easier to attack, through the reidentification of individuals.)
The risk of reidentification decreases with the application of these basic techniques. However, this decrease is not fast enough. An alternate solution, for this specific use case, is to merge the antennas into (big) groups of 10, in an attempt to lower the unicity.
Note:
The two code cells below are used to prepare your data set, but do not produce any output. They will generate the input data set required for Exercise 2. The second code cell will also produce a warning, which you can safely ignore.
End of explanation
# Your code here.
Explanation: <br>
<div class="alert alert-info">
<b>Exercise 2 Start.</b>
</div>
Instructions
Calculate the unicity of the coarsened mobility data set (samples_10) with the same number of datapoints (four) and iterations (five) as in Exercise 1. You need to execute the same function, and replace the input data set, "samples", with the newly-created, "samples_10", data set.
1. What is the difference between your answer, and the answer provided in the previous exercise, if any?
2. How much does it improve anonymity (if at all)?
3. Is the loss of spatial resolution worth the computational load and effort?
End of explanation |
5,567 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<div style="font-size
Step1: <h3>Generalized Hooke's Law</h3>
1) ε<sub>x</sub> = $\frac{1}{E}[$σ<sub>x</sub> - ν(σ<sub>y</sub> + σ<sub>z</sub>)$]$ <br/>
2) ε<sub>y</sub> = $\frac{1}{E}[$σ<sub>y</sub> - ν(σ<sub>z</sub> + σ<sub>x</sub>)$]$ <br/>
3) ε<sub>z</sub> = $\frac{1}{E}[$σ<sub>z</sub> - ν(σ<sub>x</sub> + σ<sub>y</sub>)$]$ <br/><br/>
where,<br/>
E - Modulus of elasticity<br/>
ν - poisson's ratio<br/>
Applying the following conditions to the above equations | Python Code:
import sympy as sp
from sympy import init_printing
init_printing(use_unicode=True)
# Declare the symbols
EPx, EPy, EPz = sp.symbols("\u03B5x \u03B5y \u03B5z")
Qx, Qy, Qz = sp.symbols("\u03C3x \u03C3y \u03C3z")
E, v = sp.symbols("E \u03C5")
Explanation: <div style="font-size:24px;background-color:blue;color:#fff;padding:10px">ALEXIUS S. ACADEMIA</div>
<h2>Assignment 01 - Theory of Plates and Shells</h2>
<em>Derivation of stresses from Hooke's Law</em><br>
<br/>
End of explanation
M = sp.Matrix([[1, -v],
[-v, 1]]) # Create the M matrix
V = sp.Matrix([E*EPx, E*EPy]) # Create the V vector
# Get the inverse of M matrix
Minv = M**-1
# Multiply inverse of M by V vector to solve for vector X
X = Minv * V
sp.simplify(X)
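# Sketch: sanity-check the solution by substituting it back into M*X = V;
# the residual below should simplify to the zero vector.
sp.simplify(M * X - V)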
Explanation: <h3>Generalized Hooke's Law</h3>
1) ε<sub>x</sub> = $\frac{1}{E}[$σ<sub>x</sub> - ν(σ<sub>y</sub> + σ<sub>z</sub>)$]$ <br/>
2) ε<sub>y</sub> = $\frac{1}{E}[$σ<sub>y</sub> - ν(σ<sub>z</sub> + σ<sub>x</sub>)$]$ <br/>
3) ε<sub>z</sub> = $\frac{1}{E}[$σ<sub>z</sub> - ν(σ<sub>x</sub> + σ<sub>y</sub>)$]$ <br/><br/>
where,<br/>
E - Modulus of elasticity<br/>
ν - poisson's ratio<br/>
Applying the following conditions to the above equations:<br/>
ε<sub>z</sub> = 0 <br/>
σ<sub>z</sub> = 0 <br/><br/>
we get,<br/>
1) ε<sub>x</sub> = $\frac{1}{E}[$σ<sub>x</sub> - νσ<sub>y</sub>$]$ ==> σ<sub>x</sub> - νσ<sub>y</sub> = E ε<sub>x</sub> <br/>
2) ε<sub>y</sub> = $\frac{1}{E}[$σ<sub>y</sub> - νσ<sub>x</sub>$]$ ==> σ<sub>y</sub> - νσ<sub>x</sub> = E ε<sub>y</sub> <br/>
In matrix form we have:<br/>
$[ 1$ $-v ]$ $[$ σ<sub>x</sub> $]$ = $[$ E ε<sub>x</sub> $]$ <br/>
$[ -v$ $1 ]$ $[$ σ<sub>y</sub> $]$ = $[$ E ε<sub>y</sub> $]$ <br/><br/>
or M * X = V <br/><br/>
where: M - matrix, X - vector and V - vector <br/><br/>
Now solving for σ<sub>x</sub> and σ<sub>y</sub> by using <b>python</b> and <b>sympy</b>
End of explanation |
5,568 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Map functions
These functions are probably the most commonly used functions when dealing with an RDD object.
map()
mapValues()
flatMap()
flatMapValues()
map
The map() method applies a function to each elements of the RDD. Each element has to be a valid input to the function. The returned RDD has the function outputs as its new elements.
Elements in the RDD object map_exp_rdd below are rows of the mtcars in string format. We are going to apply the map() function multiple times to convert each string elements as a list elements. Each list element has two values
Step1: mapValues
The mapValues function requires that each element in the RDD has a key/value pair structure, for example, a tuple of 2 items, or a list of 2 items. The mapValues function applies a function to each of the element values. The element key will remain unchanged.
We can apply the mapValues function to the RDD object mapValues_exp_rdd below.
Step2: When using mapValues(), the x in the above lambda function refers to the element value, not including the element key.
flatMap
This function first applies a function to each elements of an RDD and then flatten the results. We can simply use this function to flatten elements of an RDD without extra operation on each elements.
Step3: flatMapValues
The flatMapValues function requires that each element in the RDD has a key/value pair structure. It applies a function to each element value of the RDD object and then flatten the results.
For example, my raw data looks like below. But I would like to transform the data so that it has three columns | Python Code:
# create an example RDD
map_exp_rdd = sc.textFile('../../data/mtcars.csv')
map_exp_rdd.take(4)
# split auto model from other feature values
map_exp_rdd_1 = map_exp_rdd.map(lambda x: x.split(',')).map(lambda x: (x[0], x[1:]))
map_exp_rdd_1.take(4)
# remove the header row
header = map_exp_rdd_1.first()
# the filter method apply a function to each elemnts. The function output is a boolean value (TRUE or FALSE)
# elements that have output TRUE will be kept.
map_exp_rdd_2 = map_exp_rdd_1.filter(lambda x: x != header)
map_exp_rdd_2.take(4)
# convert string values to numeric values
map_exp_rdd_3 = map_exp_rdd_2.map(lambda x: (x[0], list(map(float, x[1]))))
map_exp_rdd_3.take(4)
Explanation: Map functions
These functions are probably the most commonly used functions when dealing with an RDD object.
map()
mapValues()
flatMap()
flatMapValues()
map
The map() method applies a function to each elements of the RDD. Each element has to be a valid input to the function. The returned RDD has the function outputs as its new elements.
Elements in the RDD object map_exp_rdd below are rows of the mtcars in string format. We are going to apply the map() function multiple times to convert each string elements as a list elements. Each list element has two values: the first value will be the auto model in string format; the second value will be a list of numeric values.
End of explanation
mapValues_exp_rdd = map_exp_rdd_3
mapValues_exp_rdd.take(4)
import numpy as np
mapValues_exp_rdd_1 = mapValues_exp_rdd.mapValues(lambda x: np.mean(x))
mapValues_exp_rdd_1.take(4)
Explanation: mapValues
The mapValues function requires that each element in the RDD has a key/value pair structure, for example, a tuple of 2 items, or a list of 2 items. The mapValues function applies a function to each of the element values. The element key will remain unchanged.
We can apply the mapValues function to the RDD object mapValues_exp_rdd below.
End of explanation
x = [('a', 'b', 'c'), ('a', 'a'), ('c', 'c', 'c', 'd')]
flatMap_exp_rdd = sc.parallelize(x)
flatMap_exp_rdd.collect()
flatMap_exp_rdd_1 = flatMap_exp_rdd.flatMap(lambda x: x)
flatMap_exp_rdd_1.collect()
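# Sketch: flatMap can also transform while it flattens, e.g. splitting sentences into words
# (this assumes the same SparkContext sc used above).
sentences = sc.parallelize(['hello world', 'spark flatMap example'])
sentences.flatMap(lambda s: s.split(' ')).collect()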
Explanation: When using mapValues(), the x in the above lambda function refers to the element value, not including the element key.
flatMap
This function first applies a function to each element of an RDD and then flattens the results. We can simply use this function to flatten the elements of an RDD without performing any extra operation on each element.
End of explanation
# example data
my_data = [
[1, (23, 28, 32)],
[2, (18, 29, 31)],
[3, (34, 21, 18)]
]
flatMapValues_exp_rdd = sc.parallelize(my_data)
flatMapValues_exp_rdd.collect()
# merge A,B,and C columns into on column and add the type column
flatMapValues_exp_rdd_1 = flatMapValues_exp_rdd.flatMapValues(lambda x: list(zip(list('ABC'), x)))
flatMapValues_exp_rdd_1.collect()
# unpack the element values
flatMapValues_exp_rdd_2 = flatMapValues_exp_rdd_1.map(lambda x: [x[0]] + list(x[1]) )
flatMapValues_exp_rdd_2.collect()
Explanation: flatMapValues
The flatMapValues function requires that each element in the RDD has a key/value pair structure. It applies a function to each element value of the RDD object and then flatten the results.
For example, my raw data looks like below. But I would like to transform the data so that it has three columns: the first column is the sample id; the second the column is the three types (A,B or C); the third column is the values.
| sample id | A | B | C |
|:---------:|:--:|:--:|:--:|
| 1 | 23 | 18 | 32 |
| 2 | 18 | 29 | 31 |
| 3 | 34 | 21 | 18 |
End of explanation |
5,569 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
This notebook demonstrates basic usage of BioThings Explorer, an engine for autonomously querying a distributed knowledge graph. BioThings Explorer can answer two classes of queries -- "PREDICT" and "EXPLAIN". PREDICT queries are described in PREDICT_demo.ipynb. Here, we describe EXPLAIN queries and how to use BioThings Explorer to execute them. A more detailed overview of the BioThings Explorer systems is provided in these slides.
EXPLAIN queries are designed to identify plausible reasoning chains to explain the relationship between two entities. For example, in this notebook, we explore the question
Step1: Next, import the relevant modules
Step2: Step 1
Step3: Step 2
Step4: Here, we formulate a FindConnection query with "CML" as the input_ojb, "imatinib" as the output_obj. We further specify with the intermediate_nodes parameter that we are looking for paths joining chronic myelogenous leukemia and imatinib with one intermediate node that is a Gene. (The ability to search for longer reasoning paths that include additional intermediate nodes will be added shortly.)
Step5: We next execute the connect method, which performs the query path planning and query path execution process. In short, BioThings Explorer is deconstructing the query into individual API calls, executing those API calls, then assembling the results.
A verbose log of this process is displayed below
Step6: Step 3
Step7: While most results are based on edges from semmed, edges from DGIdb, biolink, disgenet, mydisease.info and drugcentral were also retrieved from their respective APIs.
Next, let's look to see which genes are mentioned the most.
Step8: Not surprisingly, the top two genes that BioThings Explorer found that join imatinib to CML are ABL1 and BCR, the two genes that are fused in the "Philadelphia chromosome", the genetic abnormality that underlies CML, and the validate target of imatinib.
Let's examine some of the PubMed articles linking CML to ABL1 and ABL1 to imatinib.
Step9: Comparing results between CML and GIST
Let's perform another BioThings Explorer query, this time looking to EXPLAIN the relationship between imatinib and gastrointestinal stromal tumors (GIST), another disease treated by imatinib. | Python Code:
%%capture
!pip install git+https://github.com/biothings/biothings_explorer#egg=biothings_explorer
Explanation: Introduction
This notebook demonstrates basic usage of BioThings Explorer, an engine for autonomously querying a distributed knowledge graph. BioThings Explorer can answer two classes of queries -- "PREDICT" and "EXPLAIN". PREDICT queries are described in PREDICT_demo.ipynb. Here, we describe EXPLAIN queries and how to use BioThings Explorer to execute them. A more detailed overview of the BioThings Explorer systems is provided in these slides.
EXPLAIN queries are designed to identify plausible reasoning chains to explain the relationship between two entities. For example, in this notebook, we explore the question:
"Why does imatinib have an effect on the treatment of chronic myelogenous leukemia (CML)?"
Later, we also compare those results to a similar query looking at imatinib's role in treating gastrointestinal stromal tumors (GIST).
To experiment with an executable version of this notebook, load it in Google Colaboratory.
Step 0: Load BioThings Explorer modules
First, install the biothings_explorer and biothings_schema packages, as described in this README. This only needs to be done once (but including it here for compability with colab).
End of explanation
# import modules from biothings_explorer
from biothings_explorer.hint import Hint
from biothings_explorer.user_query_dispatcher import FindConnection
import nest_asyncio
nest_asyncio.apply()
Explanation: Next, import the relevant modules:
Hint: Find corresponding bio-entity representation used in BioThings Explorer based on user input (could be any database IDs, symbols, names)
FindConnection: Find intermediate bio-entities which connects user specified input and output
End of explanation
ht = Hint()
# find all potential representations of CML
cml_hint = ht.query("chronic myelogenous leukemia")
# select the correct representation of CML
cml = cml_hint['Disease'][0]
cml
# find all potential representations of imatinib
imatinib_hint = ht.query("imatinib")
# select the correct representation of imatinib
imatinib = imatinib_hint['ChemicalSubstance'][0]
imatinib
Explanation: Step 1: Find representation of "chronic myelogenous leukemia" and "imatinib" in BTE
In this step, BioThings Explorer translates our query strings "chronic myelogenous leukemia" and "imatinib" into BioThings objects, which contain mappings to many common identifiers. Generally, the top result returned by the Hint module will be the correct item, but you should confirm that using the identifiers shown.
Search terms can correspond to any child of BiologicalEntity from the Biolink Model, including DiseaseOrPhenotypicFeature (e.g., "lupus"), ChemicalSubstance (e.g., "acetaminophen"), Gene (e.g., "CDK2"), BiologicalProcess (e.g., "T cell differentiation"), and Pathway (e.g., "Citric acid cycle").
End of explanation
help(FindConnection.__init__)
Explanation: Step 2: Find intermediate nodes connecting imatinib and chronic myelogenous leukemia
In this section, we find all paths in the knowledge graph that connect imatinib and chronic myelogenous leukemia. To do that, we will use FindConnection. This class is a convenient wrapper around two advanced functions for query path planning and query path execution. More advanced features for both query path planning and query path execution are in development and will be documented in the coming months.
The parameters for FindConnection are described below:
End of explanation
fc = FindConnection(input_obj=cml, output_obj=imatinib, intermediate_nodes='Gene')
Explanation: Here, we formulate a FindConnection query with "CML" as the input_obj, "imatinib" as the output_obj. We further specify with the intermediate_nodes parameter that we are looking for paths joining chronic myelogenous leukemia and imatinib with one intermediate node that is a Gene. (The ability to search for longer reasoning paths that include additional intermediate nodes will be added shortly.)
End of explanation
# set verbose=True will display all steps which BTE takes to find the connection
fc.connect(verbose=True)
Explanation: We next execute the connect method, which performs the query path planning and query path execution process. In short, BioThings Explorer is deconstructing the query into individual API calls, executing those API calls, then assembling the results.
A verbose log of this process is displayed below:
End of explanation
df = fc.display_table_view()
# because UMLS is not currently well-integrated in our ID-to-object translation system, removing UMLS-only entries here
patternDel = "^UMLS:C\d+"
filter = df.node1_id.str.contains(patternDel)
df = df[~filter]
print(df.shape)
df.sample(10)
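# Sketch: ordinary pandas filtering also works on this table, e.g. keeping only the paths
# that pass through one candidate gene (ABL1 is just an illustrative choice here).
print(df[df.node1_name == 'ABL1'].shape)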
Explanation: Step 3: Display and Filter results
This section demonstrates post-query filtering done in Python. Later, more advanced filtering functions will be added to the query path execution module for interleaved filtering, thereby enabling longer query paths. More details to come...
First, all matching paths can be exported to a data frame. Let's examine a sample of those results.
End of explanation
df.node1_name.value_counts().head(10)
Explanation: While most results are based on edges from semmed, edges from DGIdb, biolink, disgenet, mydisease.info and drugcentral were also retrieved from their respective APIs.
Next, let's look to see which genes are mentioned the most.
End of explanation
# fetch all articles connecting 'chronic myelogenous leukemia' and 'ABL1'
articles = []
for info in fc.display_edge_info('chronic myelogenous leukemia', 'ABL1').values():
if 'pubmed' in info['info']:
articles += info['info']['pubmed']
print("There are "+str(len(articles))+" articles supporting the edge between CML and ABL1. Sampling of 10 of those:")
x = [print("http://pubmed.gov/"+str(x)) for x in articles[0:10] ]
# fetch all articles connecting 'ABL1' and 'Imatinib
articles = []
for info in fc.display_edge_info('ABL1', 'imatinib').values():
if 'pubmed' in info['info']:
articles += info['info']['pubmed']
print("There are "+str(len(articles))+" articles supporting the edge between ABL1 and imatinib. Sampling of 10 of those:")
x = [print("http://pubmed.gov/"+str(x)) for x in articles[0:10] ]
Explanation: Not surprisingly, the top two genes that BioThings Explorer found that join imatinib to CML are ABL1 and BCR, the two genes that are fused in the "Philadelphia chromosome", the genetic abnormality that underlies CML, and the validate target of imatinib.
Let's examine some of the PubMed articles linking CML to ABL1 and ABL1 to imatinib.
End of explanation
ht = Hint()
# find all potential representations of CML
gist_hint = ht.query("gastrointestinal stromal tumor")
# select the correct representation of CML
gist = gist_hint['Disease'][0]
gist
fc = FindConnection(input_obj=gist, output_obj=imatinib, intermediate_nodes='Gene')
fc.connect(verbose=False) # skipping the verbose log here
df = fc.display_table_view()
# because UMLS is not currently well-integrated in our ID-to-object translation system, removing UMLS-only entries here
patternDel = "^UMLS:C\d+"
filter = df.node1_id.str.contains(patternDel)
df = df[~filter]
print(df.shape)
df.sample(10)
df.node1_name.value_counts().head(10)
Explanation: Comparing results between CML and GIST
Let's perform another BioThings Explorer query, this time looking to EXPLAIN the relationship between imatinib and gastrointestinal stromal tumors (GIST), another disease treated by imatinib.
End of explanation |
5,570 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Examinging NetCDF files using xray
The simplest way that I have found for opening and exploring NetCDF files in python, depends on the python package called xray. Here is a little graphical representation of the way to think about this data. For clarification on how multidimensional data are represented in xray, or to figure out how to download the package, visit
Step1: Loading a NetCDF file into a dataset
The first step when you receive a NetCDF file is to open it up and see what it contains.
Step2: Inspecting and selecting from dataset
To inspect the coordinates at a specific site (for example, 'Open_W') we just write
Step3: Now if we are only interested in soil moisture at the upper depth at a specific time (don't forget that the time is in UTC unless the timezone is explicit), we can pull out just that one data point
Step4: And if we are only interested in the actual value rather than all the attributes
Step5: Concatenate files
Step6: Convert to pandas dataframe
Some analyses are no doubt easier to carry out in pandas, but luckily xray makes it very easy to move back and forth between the two packages. To converat an xray Dataset object to a pandas MultiIndex DataFrame object, just run | Python Code:
from IPython.display import Image
Image(url='http://xray.readthedocs.org/en/latest/_images/dataset-diagram.png', embed=True, width=950, height=300)
Explanation: Examining NetCDF files using xray
The simplest way that I have found for opening and exploring NetCDF files in Python depends on the Python package called xray. Here is a little graphical representation of the way to think about this data. For clarification on how multidimensional data are represented in xray, or to figure out how to download the package, visit: http://xray.readthedocs.org/en/latest/
End of explanation
import os
import posixpath # similar to os, but less dependant on operating system
import numpy as np
import pandas as pd
import xray
NETCDF_DIR = os.getcwd().replace('\\','/')+'/raw_netCDF_output/'
datafile = 'soil'
nc_file = os.listdir(NETCDF_DIR+datafile)[-1]
nc_path = posixpath.join(NETCDF_DIR, datafile, nc_file)
ds = xray.open_dataset(nc_path)
ds
Explanation: Loading a NetCDF file into a dataset
The first step when you receive a NetCDF file is to open it up and see what it contains.
End of explanation
ds.sel(site='Open_W').coords
Explanation: Inspecting and selecting from dataset
To inspect the coordinates at a specific site (for example, 'Open_W') we just write:
End of explanation
print ds.VW_05cm_Avg.sel(site='Open_W', time='2015-06-02T06:10:00')
Explanation: Now if we are only interested in soil moisture at the upper depth at a specific time (don't forget that the time is in UTC unless the timezone is explicit), we can pull out just that one data point:
End of explanation
print ds.VW_05cm_Avg.sel(site='Open_W', time='2015-06-02T06:10:00').values
Explanation: And if we are only interested in the actual value rather than all the attributes:
End of explanation
ds_dict = {}
nc_files = os.listdir(NETCDF_DIR+datafile)
for nc_file in nc_files:
nc_path = posixpath.join(NETCDF_DIR, datafile, nc_file)
ds = xray.open_dataset(nc_path)
date = nc_file.split('Tower_')[1].split('.')[0]
ds_dict.update({date: ds})
ds_dict.keys()
xray.concat(ds_dict.values(), dim='time')
Explanation: Concatenate files
End of explanation
df = ds.to_dataframe()
for i in range(len(df.index.levels)):
print 'df.index.levels[{i}]\n{index}\n'.format(i=i, index=df.index.levels[i])
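# Sketch: once the data is in pandas, standard operations such as groupby apply; this assumes
# a 'site' level in the resulting MultiIndex (as suggested by the dataset's coordinates).
print(df.groupby(level='site').mean().head())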
Explanation: Convert to pandas dataframe
Some analyses are no doubt easier to carry out in pandas, but luckily xray makes it very easy to move back and forth between the two packages. To convert an xray Dataset object to a pandas MultiIndex DataFrame object, just run:
End of explanation |
5,571 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python datastructures notes
This contains operations on lists, dictionaries and tuples
Also covers
- Augumented assignment trick
- Map, filter and lambda
- Iterators
- Generators
List operations
List Copies are shallow
Step1: Copying List techniques
There are three ways to copy a list
Step2: Drawbacks of List copy methods
Step3: Deep copy!
Step4: List repetitions
Step5: List operations
Step6: Removing elements in List
Step7: List Insertions
Step8: Concatenate lists
Step9: Reversing and sorting list
Step10: Remember above methods 'sort' and 'reverse' methods work directly on the original list; So we have to use sorted() and reversed() methods to ensure original list remains unmodified, these methods give a iterator to iterate on the sorted/reversed list
Step11: Shuffle a List
Step12: Randomly pick some element from a List
Step13: Using List as a stack
Step14: Using Lists as Queue
Step15: Dictionaries
TBD
A small introduction to map()
Map is a builtin function where a list of arguments can be sent to a function and it returns a iterator object!
Step16: A small introduction to filter()
Step17: builtin iter() function
Step18: Get an iterator from an object. In the first form, the argument must
supply its own iterator, or be a sequence.
we can call iter on any iterable object. Iterators give a huge performance boost for large data sets when loaded into memory.
refer this link http
Step19: How to expand an iterator object ?
any iterator object will have __iter__ and __next__ dunder method.
- next() method should be called on iterator
- can be used in for loop
- can be used in while loop with an exception handled (StopIteration)
Step20: iter with a senital argument example
Step21: In my view iter(func, sentinal) => senital value should be used only when we are sure of what we are trying to achieve.
or better to try the function on REPL first and then put this in production code. if we see above example - the same can be achieved by writing if fp.readline() == "STOP"
Step22: += and a = a + 1 are not same; they are not syntax equivalent alternatives in python. they have their own behaviours with different datatypes; specifically with strings and lists as shown above
Lets look at byte code to confirm this. We can see BINARY_ADD and INPLACE_ADD for different operations
Step23: Lets watch the same at higher level - find out why it is different for string and list !
Step24: A similar behaviour with tuples but more interesting !
Step25: even though above code throws error - if we print m1, we can see 8 got appended! That is why we should not use augmented assignment should not be used as reference location gets changed.
Step26: Scope !
Step27: Generators
Cons of handling iterators
Step28: Basic Intro to List comprehension
Step29: Decorators | Python Code:
a = [1,2,3,4]
b = a
a[2] = 44 # b list also changes here
b
a is b # This shows a and b references are same
Explanation: Python datastructures notes
This contains operations on lists, dictionaries and tuples
Also covers
- Augmented assignment trick
- Map, filter and lambda
- Iterators
- Generators
List operations
List Copies are shallow
End of explanation
a = [1,2,3]
b = a[:] # list slicing technique
a is b
b = a.copy() # using list copy method
a is b
b = list(a) # using list constructor method
a is b
Explanation: Copying List techniques
There are three ways to copy a list
End of explanation
# List copy methods fail with nested lists
a = [[1,2],[3,4]]
# lets copy this list using any of the list copy methods
b = a.copy()
a is b
# But...
a[0] is b[0] # So the references inside nested list remains same
a[0].append(8) # this will change the values of b[0] as well!
print(a)
print(b)
Explanation: Drawbacks of List copy methods
End of explanation
a = [[1,2],[4,5]]
import copy
b = copy.deepcopy(a) # Deep copy happens
a[0] is b[0]
Explanation: Deep copy!
End of explanation
a = [0]*9
a
# Beware List Repetitions are shallow!
# Example
a = [[-1,+1]]*5
a
a[0].append(8)
a
Explanation: List repetitions
End of explanation
a = [1,2,3,4,'fox',3]
i = a.index('fox')
print('index is {}'.format(i))
print('3 was repeated {} times in list a'.format(a.count(3)))
# Membership of variable is checked using in and not in keywords
print(3 in a)
print(9 in a)
print(10 not in a)
Explanation: List operations
End of explanation
a = [1,2,3,4,5,5,6,7,8,8]
del a[2] # Removing with del keyword
a
a.remove(4)
a
a.remove(8)
a
Explanation: Removing elements in List
End of explanation
a = ['a','b','c','d']
a.insert(1,'f')
a
statement = "I really love to code in python".split()
statement
# Convert a list to string
' '.join(statement)
Explanation: List Insertions
End of explanation
m = [2,3,4]
n = [5,6,7]
m + n # add using +
m += [14,15,16]
m
m.extend(n)
m
Explanation: Concatenate lists
End of explanation
g = [4,6,2,7,8,21,9,1,10]
g.reverse()
g
d = [2,3,5,67,1,3,91] # Sort from lowest to highest
d.sort()
d
d.sort(reverse=True) # Sort from highest to lowest
d
Explanation: Reversing and sorting list
End of explanation
a = [1,2,3,4]
b = reversed(a)
print(list(b))
print(a)
a = [5,4,3,2,1]
list(sorted(a))
a
Explanation: Remember that the 'sort' and 'reverse' methods above work directly on the original list. To leave the original list unmodified, use the built-in sorted() and reversed() functions instead: sorted() returns a new sorted list, while reversed() returns an iterator over the elements in reverse order.
End of explanation
from random import shuffle
shuffle(a) # CAUTION: This will modify the original list
a
Explanation: Shuffle a List
End of explanation
from random import choice
choice(a) # This throws a random number from List
Explanation: Randomly pick some element from a List
End of explanation
stack = [1,2,3,4,5,6,7]
stack.append(8) # Push to a stack
stack
stack.pop() # Pops the last element
stack
Explanation: Using List as a stack
End of explanation
from collections import deque
queue = deque(["Eric", "John", "Michael"])
queue
queue.append('Max')
queue
queue.append("Albert")
queue
queue.reverse()
queue
queue.rotate(1)
queue
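# Sketch: a queue consumes items from the front; with deque that is popleft().
queue.popleft()
queue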
Explanation: Using Lists as Queue
End of explanation
def square(x):
return x*x
# SYNTAX: map(function, List of arguments)
list_squares = map(square, [1,2,3,4,5,6])
list(list_squares)
for number in list_squares:
print(number, end= ' ')
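# Note (sketch): the map object above is an iterator, so once list() has consumed it the
# for loop prints nothing; re-create the map to iterate again. A lambda keeps it compact:
list(map(lambda x: x * x, [1, 2, 3, 4, 5, 6]))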
Explanation: Dictionaries
TBD
A small introduction to map()
Map is a builtin function that applies a given function to each element of an iterable and returns an iterator object!
End of explanation
def generate_odd_numbers(x):
return x % 2 != 0
list(filter(generate_odd_numbers, range(10))) # Filter returns values which satisfy the condition,
# in simple terms - TRUE ONLY !
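# Sketch: the same odd-number filter written with a lambda (ties in the 'lambda' topic listed at the top).
list(filter(lambda x: x % 2 != 0, range(10)))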
Explanation: A small introduction to filter()
End of explanation
# Lets discuss about builtin function iter()
Explanation: builtin iter() function
End of explanation
# Lets call an iterator on List
numbers = [1,2,3,4,5]
num_iter = iter(numbers)
num_iter # returns a list iterator
country = "India"
str_iter = iter(country)
str_iter
tuple_numbers = (1,2,3,4,5)
t_iter = iter(tuple_numbers)
t_iter
sample_dict = {'a':1,'b':2,'c':3,'d':4}
d_iter = iter(sample_dict)
d_iter # remember iter on dictionary gives you all the keys when expanded
sample_set = {1,2,3,4}
s_iter = iter(sample_set)
s_iter
Explanation: Get an iterator from an object. In the first form, the argument must
supply its own iterator, or be a sequence.
we can call iter on any iterable object. Iterators give a huge performance boost for large data sets when loaded into memory.
refer this link http://markmail.org/message/t2a6tp33n5lddzvy for more understanding.
below examples prints various kind of iterators -> string, list, tuple, dictionary and set
End of explanation
next(num_iter)
next(num_iter)
next(num_iter)
next(num_iter)
next(num_iter)
next(num_iter)
# iterating over the tuple iterator with a for loop
for num in t_iter:
print(num, end = ' ')
# Iterating over a dictionary using while loop
while True:
try:
key = next(d_iter)
print(sample_dict[key])
except StopIteration:
print("Iterator ended!")
break
Explanation: How to expand an iterator object ?
any iterator object will have __iter__ and __next__ dunder method.
- next() method should be called on iterator
- can be used in for loop
- can be used in while loop with an exception handled (StopIteration)
End of explanation
# sentinel is an argument to iter(); we can use this instead of handling the StopIteration exception [ ** Better notes needed here]
# lets check this with a file I/O example
fp = open('sample.txt')
fp
fp_iter = iter(fp.readline, 'STOP\n') # the second argument is the sentinel value: iteration stops once fp.readline() returns 'STOP\n'
fp_iter
list(fp_iter) # only readlines till STOP word is encountered
Explanation: iter with a sentinel argument example
End of explanation
s1 = s2 = '123'
s1 is s2, s1, s2
s2 = s2 + '4'
s1 is s2, s1, s2
m1 = m2 = [1,2,3]
m1 is m2, m1, m2
m2 = m2 + [4]
m1 is m2, m1,m2
s1 = s2 = '123'
s1 is s2, s1, s2
s2 += '4'
s1 is s2, s1, s2
m1 = m2 = [1,2,3]
m1 is m2, m1, m2
m2 += [4]
m1 is m2, m1,m2
Explanation: In my view, the iter(func, sentinel) form should be used only when we are sure of what we are trying to achieve,
or better, try the function on the REPL first and then put it in production code. Looking at the example above, the same can be achieved by writing if fp.readline() == "STOP": break, but as iter gives a performance boost it can be used here - if readability is your first choice, then don't use the sentinel value.
AUGMENTED ASSIGNMENT TRICK !
End of explanation
import codeop, dis
dis.dis(codeop.compile_command("a = a+b"))
dis.dis(codeop.compile_command("a += b"))
Explanation: += and a = a + b are not the same; they are not syntactically equivalent alternatives in Python. They have their own behaviours with different datatypes, specifically with strings and lists as shown above
Lets look at byte code to confirm this. We can see BINARY_ADD and INPLACE_ADD for different operations
End of explanation
m2 = [1,2,3]
m2
m2.__iadd__([4])
s2 = "1234"
s2.__iadd__('5')
Explanation: Let's look at the same thing at a higher level - and find out why it behaves differently for strings and lists!
End of explanation
m1 = ([7],)
m1[0]
m1[0] += [8]
Explanation: A similar behaviour with tuples but more interesting !
End of explanation
m1 # ERROR !
Explanation: Even though the code above throws an error, if we print m1 we can see that 8 got appended! That is why augmented assignment should be used with care: the in-place modification of the mutable object happens even though the assignment into the tuple fails.
End of explanation
a = 10
def method():
# if we want to access 'a' declared outside the function, we have to use global
a = 20
print("Inside method 'a' is ", a)
method()
print(a)
a = 10
def method():
# if we want to access 'a' declared outside the function, we have to use global
global a
a = 20
method()
print(a)
x = 0
def outer():
x = 1
def inner():
x = 2
print("inner:", x)
inner()
print("outer:", x)
outer()
print("global:", x)
x = 0
def outer():
x = 1
def inner():
nonlocal x
x = 2
print("inner:", x)
inner()
print("outer:", x)
outer()
print("global:", x)
x = 0
def outer():
x = 1
def inner():
global x
x = 2
print("inner:", x)
inner()
print("outer:", x)
outer()
print("global:", x)
Explanation: Scope !
End of explanation
# A simple generator function
def my_gen():
n = 1
print('This is printed first')
# Generator function contains yield statements
yield n
n += 1
print('This is printed second')
yield n
n += 1
print('This is printed at last')
yield n
# Using next()
# Using for loop()
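# Sketch completing the two placeholders above:
gen = my_gen()
print(next(gen))   # resumes the generator up to the first yield
print(next(gen))   # ...second yield
print(next(gen))   # ...third yield
for value in my_gen():   # a for loop exhausts a fresh generator and handles StopIteration for us
    print(value)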
def rev_str(my_str):
length = len(my_str)
for i in range(length - 1,-1,-1):
yield my_str[i]
# Demo using For loop
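# Sketch: iterate over the rev_str generator defined above.
for char in rev_str("hello"):
    print(char, end='')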
Explanation: Generators
Cons of handling iterators:
iter() and next() method, keep track of internal states, raise StopIteration when there was no values to be returned etc.
This is both lengthy and counter intuitive. Generator comes into rescue in such situations.
Python generators are a simple way of creating iterators. All the overhead we mentioned above are automatically handled by generators in Python.
Simply speaking, a generator is a function that returns an object (iterator) which we can iterate over (one value at a time).
Differences between Generator function and a Normal function
Here is how a generator function differs from a normal function.
Generator function contains one or more yield statement.
When called, it returns an object (iterator) but does not start execution immediately.
Methods like __iter__() and __next__() are implemented automatically. So we can iterate through the items using next().
Once the function yields, the function is paused and the control is transferred to the caller.
Local variables and their states are remembered between successive calls.
Finally, when the function terminates, StopIteration is raised automatically on further calls.
End of explanation
S = [x**2 for x in range(10)]           # squares of 0..9
V = [2**i for i in range(13)]           # powers of two up to 2**12
M = [x for x in S if x % 2 == 0]        # even squares taken from S
noprimes = [j for i in range(2, 8) for j in range(i*2, 50, i)]   # composite numbers below 50
primes = [x for x in range(2, 50) if x not in noprimes]          # primes below 50
words = 'The quick brown fox jumps over the lazy dog'.split()
stuff = [[w.upper(), w.lower(), len(w)] for w in words]
stuff = map(lambda w: [w.upper(), w.lower(), len(w)], words)
stuff
my_list = [1, 3, 6, 10]
a = (x**2 for x in my_list)
# Output: 1
print(next(a))
# Output: 9
print(next(a))
# Output: 36
print(next(a))
# Output: 100
print(next(a))
# Output: StopIteration
next(a)
Explanation: Basic Intro to List comprehension
End of explanation
def first(msg):
print(msg)
first("Hello")
second = first
second("Hello")
def inc(x):
return x + 1
def dec(x):
return x - 1
def operate(func, x):
result = func(x)
return result
operate(inc, 1)
operate(dec, 3)
def is_called():
def is_returned():
print("Hello")
return is_returned
new = is_called()
new()
def make_pretty(func):
def inner():
print("I got decorated")
func()
return inner
def ordinary():
print("I am ordinary")
ordinary()
pretty = make_pretty(ordinary)
pretty()
# Syntax for decorators which does the same thing
@make_pretty
def ordinary():
print("I am ordinary")
ordinary()
# Decorating functions with parameters
def smart_divide(func):
def inner(a,b):
print("I am going to divide",a,"and",b)
if b == 0:
print("Whoops! cannot divide")
return
return func(a,b)
return inner
@smart_divide
def divide(a,b):
return a/b
divide(1,0)
divide(2,3)
# Universal decorator
def star(func):
def inner(*args, **kwargs):
print("*" * 30)
func(*args, **kwargs)
print("*" * 30)
return inner
def percent(func):
def inner(*args, **kwargs):
print("%" * 30)
func(*args, **kwargs)
print("%" * 30)
return inner
@star
@percent
def printer(msg):
print(msg)
printer("Hello")
Explanation: Decorators
End of explanation |
5,572 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homework 2 (solutions)
Problem 1
Construct a function, which will ask the user to input several numbers separated by commas
and will calculate their average. (e.g. if the user inputs 3,5,7 the function must result in 5).
Step1: Problem 2
Upgrade the function above to also ask the user the number of occurrences to calculate the average for. (e.g. if the user inputs 1,3,5,7,9 and as the second argument to function inputs 2, the function must result in (7+9)/2 = 8. If the user inputs the same numbers as a first argument, but inputs 3 as the 2nd, then the function must result in (5+7+9)/3=7)
Step2: Problem 3
Construct a function, which will generate a random number between 1 and 100 (both included). If this number is between 50 and 100 (not included) the function returns "Win", if it is between 1 and 50 (both included) the function returns "Loss" and if it is exactly 100 then the function returns "Draw". (Hint
Step3: Problem 4
Create a list of 3 stocks (choose whatever stocks you want, e.g. ["IBM","AAPL","MSFT"]). Create a for loop that will iterate over the list, get the data of relevant stock and print the first 7 rows of the data.
Step4: Problem 5
Upgrade the for loop above. Now, instead of printing the data, the for loop should iterate over the list, get the data and plot it. | Python Code:
inputted_numbers = input("Please, input some numbers separated by a comma: ")
def simple_mean(inputted_numbers):
return float( sum(inputted_numbers) ) / float( len(inputted_numbers) )
print simple_mean(inputted_numbers)
Explanation: Homework 2 (solutions)
Problem 1
Construct a function, which will ask the user to input several numbers separated by commas
and will calculate their average. (e.g. if the user inputs 3,5,7 the function must result in 5).
End of explanation
inputted_numbers = input("Please, input some numbers separated by a comma: ")
occurancy = input("Please, input the number of occurrences to calculate the mean for: ")
def rolling_mean(inputted_numbers,occurancy):
return float( sum(inputted_numbers[-occurancy:]) ) / float( len(inputted_numbers[-occurancy:]) )
print rolling_mean(inputted_numbers, occurancy)
Explanation: Problem 2
Upgrade the function above to also ask the user the number of occurrences to calculate the average for. (e.g. if the user inputs 1,3,5,7,9 and as the second argument to function inputs 2, the function must result in (7+9)/2 = 8. If the user inputs the same numbers as a first argument, but inputs 3 as the 2nd, then the function must result in (5+7+9)/3=7)
End of explanation
import random
number=random.randint(1,100)
def generator(number):
if 100 > number > 50:
return "Win"
elif 50 >= number >= 1:
return "Loss"
else:
return "Draw"
print(number)
print generator(number)
Explanation: Problem 3
Construct a function, which will generate a random number between 1 and 100 (both included). If this number is between 50 and 100 (not included) the function returns "Win", if it is between 1 and 50 (both included) the function returns "Loss" and if it is exactly 100 then the function returns "Draw". (Hint: to generate a random number, one should first import a package
called random (i.e. import random) and then use the function randint from there (e.g. random.randint(x,y), where x=1 and y=100 in our case)
End of explanation
import pandas_datareader.data as web
stock_list = ["IBM","AAPL","MSFT"]
for stock in stock_list:
data = web.DataReader(stock,"google")
print( data.head(7) )
Explanation: Problem 4
Create a list of 3 stocks (choose whatever stocks you want, e.g. ["IBM","AAPL","MSFT"]). Create a for loop that will iterate over the list, get the data of relevant stock and print the first 7 rows of the data.
End of explanation
import pandas_datareader.data as web
import matplotlib.pyplot as plt
stock_list = ["IBM","AAPL","MSFT"]
for stock in stock_list:
data = web.DataReader(stock,"google")
plt.plot(data["Open"])
plt.show()
Explanation: Problem 5
Upgrade the for loop above. Now, instead of printing the data, the for loop should iterate over the list, get the data and plot it.
End of explanation |
5,573 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
12-752
Step1: Short Introduction to Python and Jupyter
Jupyter notebooks consist of cells. This cell is a Markdown cell. Try double-clicking this cell. You can write pretty text and even Latex is supported!
A short Latex example
Step2: Task #3 [10%]
Step3: Plotting
Jupyer notebooks allow you to interactively explore data. Let's plot a sine-wave.
Step4: Task #4 [10%]
Step5: Data structures
The two most widely used data structures in Python are lists and dictionaries.
Lists
Here are some simple examples how to use lists.
If you want to learn more about Python lists, check out https
Step6: Accessing data in list
Step7: Dictionaries
Dictionaries are key-value pairs. We will give some short examples on how to used dictionaries. For a more thorough introduction, see https
Step8: Loops
A couple of example on how to use loops. For more info see https
Step9: Quick exercises
Step10: Loading data
Since you will be dealing with data, you need to know how to read and parse data. Numpy can automatically parse some csv files. Let's assume however, that we need to parse a file that numpy cannot parse out-of-the-box.
Step11: Numpy arrays
Numpy array is a data structure very suitable to store matrices. A list of list (like parsed_lines) can be converted to a numpy array by
Step12: Quick exercises
Step13: Broadcasting over axis
Step14: Quick exercises | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import sys
print('Python version:')
print(sys.version)
print('Numpy version:')
print(np.__version__)
import sklearn
print('Sklearn version:')
print(sklearn.__version__)
Explanation: 12-752: Data-Driven Building Energy Management
Fall 2016, Carnegie Mellon University
Assignment #1
Task 1 [0%]: Making sure everything is installed correctly
Please double click the code cell below and click 'Cell' -> 'Run Cells' in the drop-down bar above.
You should get an output similar to this (the version numbers should be the same):
Python version:
3.5.2 |Anaconda 4.2.0 (x86_64)| (default, Jul 2 2016, 17:52:12)
[GCC 4.2.1 Compatible Apple LLVM 4.2 (clang-425.0.28)]
Numpy version:
1.11.1
Sklearn version:
0.17.1
End of explanation
#This is a code cell
#Jupyter allows you to run code within the browser
#try running this cell
x = 5+15+1000
print(x)
Explanation: Short Introduction to Python and Jupyter
Jupyter notebooks consist of cells. This cell is a Markdown cell. Try double-clicking this cell. You can write pretty text and even Latex is supported!
A short Latex example: $\sum_i x_i = 42$
Task #2 [10%]: Quick exercise
Create a new cell under this cell. Change the cell type to 'Markdown'. Make the cell display Euler's formula (https://en.wikipedia.org/wiki/Euler%27s_formula) in Latex. Do not forget to run the cell you just created.
$e^{ix} = \cos x + i\sin x$
End of explanation
print(np.sum(np.arange(1,21)))
Explanation: Task #3 [10%]: Another quick exercise
Create a cell under this. Make sure the type of the cell is 'Code'.
Compute the sum of 1 to 20 and print it to the console. Hint: np.sum and np.arange will be your friends.
End of explanation
x = np.arange(0,2*np.pi,2*np.pi/80.0)
y = np.sin(x)
plt.plot(x,y)
Explanation: Plotting
Jupyer notebooks allow you to interactively explore data. Let's plot a sine-wave.
End of explanation
n = 2 # Number of periods
x = np.arange(0,n*2*np.pi,n*2*np.pi/(80.0*n))
y = np.cos(x)  # the task asks for a cosine wave
plt.plot(x,y)
Explanation: Task #4 [10%]: Plotting exercise
Write numpy code that plots two periods of a cosine-wave in the cell below.
End of explanation
l = [] #creating an empty list
print('Empty list:')
print(l)
l.append(5) #appending 5 to the end of the list
print('List containing 5:')
print(l)
l = [1,2,3,'hello','world'] #creating a list containing 5 items
print('List with items:')
print(l)
l.extend([4,5,6]) #appending elements from another list to l
print('List with more items:')
print(l)
Explanation: Data structures
The two most widely used data structures in Python are lists and dictionaries.
Lists
Here are some simple examples how to use lists.
If you want to learn more about Python lists, check out https://www.tutorialspoint.com/python/python_lists.htm
Adding data to lists:
End of explanation
print('Printing fourth element in list:')
print(l[3]) #counting starts at 0
print('Printing all elements up until third element in list:')
print(l[:3])
print('Print the last 3 elements in list:')
print(l[-3:])
Explanation: Accessing data in a list:
End of explanation
d = {} #creating empty dictionary
print('Empty dictionary:')
print(d)
d['author'] = 'Shakespeare' #adding an item to the dictionary
print('Dictionary with one element')
print(d)
#adding more items:
d['year'] = 1596
d['title'] = 'The merchant of Venice'
#Accessing items in dictionary:
print_string = d['title'] + ' was written by ' + d['author'] + ' in the year ' + str(d['year'])
print(print_string)
Explanation: Dictionaries
Dictionaries are key-value pairs. We will give a few short examples of how to use dictionaries. For a more thorough introduction, see https://www.tutorialspoint.com/python/python_dictionary.htm
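Two small patterns are worth knowing on top of the cell above (generic Python, shown purely as an illustration): the in operator tests whether a key exists, and .get() returns a default instead of raising an error; 'publisher' below is just a deliberately missing key.
print('author' in d) # True, the key exists
print(d.get('publisher', 'unknown')) # returns the default instead of raising a KeyError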
End of explanation
list_of_numbers = [1.,2.,3.,4.,5.,4.,3.,2.,1.]
incremented_list_of_numbers = []
for i in range(len(list_of_numbers)):
number = list_of_numbers[i]
incremented_list_of_numbers.append(number+1)
print('Incremented list:')
print(incremented_list_of_numbers)
#More elegantly
incremented_list_of_numbers2 = []
for number in list_of_numbers:
incremented_list_of_numbers2.append(number+1)
print('Second incremented list:')
print(incremented_list_of_numbers2)
#We can also express the for-loop above as a so-called in-line for-loop (a list comprehension):
#Most elegantly
incremented_list_of_numbers3 = [number + 1 for number in list_of_numbers]
print('Third incremented list:')
print(incremented_list_of_numbers3)
#looping over dictionaries
for key in d:
value = d[key]
print(key,value)
Explanation: Loops
A couple of examples of how to use loops. For more info see https://www.tutorialspoint.com/python/python_for_loop.htm or Google.
End of explanation
# Task #4:
mean_of_numbers = 0.
for i in range(len(list_of_numbers)):
number = list_of_numbers[i]
mean_of_numbers += number
mean_of_numbers = mean_of_numbers/len(list_of_numbers)
print("The mean with the first method is:",mean_of_numbers)
# Task #5:
print([n**2 for n in list_of_numbers])
# Task #6:
print([k for k in d])
# Task #7:
print([d[k] for k in d])
Explanation: Quick exercises:
In the cell below, complete the following tasks:
* Task #4 [5%]: Using a for-loop and len(), compute the mean of $list_of_numbers$
* Task #5 [10%]: Using an in-line for-loop, create a list that contains each number squared
* Task #6 [5%]: Using an in-line for-loop, create a list containing all keys of $d$
* Task #7 [10%]: Using an in-line for-loop, create a list containing all values of $d$
End of explanation
f = open('testdata.txt')
parsed_lines = []
for line in f:
l = line.split(',') #create a list by splitting the string line at every ','
l = [float(x) for x in l] #in-line for-loop that casts strings to floats
parsed_lines.append(l)
plt.plot(parsed_lines[0])
plt.imshow(np.array(parsed_lines).T)
Explanation: Loading data
Since you will be dealing with data, you need to know how to read and parse data. NumPy can automatically parse some CSV files. Let's assume, however, that we need to parse a file that NumPy cannot parse out of the box.
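For files NumPy can parse directly, its own reader is usually enough; here is a minimal sketch (the file name is a hypothetical placeholder, not part of the assignment data):
# np.genfromtxt parses delimited text straight into an array
auto_parsed = np.genfromtxt('other_data.csv', delimiter=',')  # 'other_data.csv' is a made-up example
print(auto_parsed.shape)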
End of explanation
data_matrix = np.array(parsed_lines)
print(data_matrix[:12,:10]) #print the 10 first columns of the 12 first rows
plt.plot(data_matrix[0,:] - data_matrix[-1,:]) #plots the difference between the first and last row
print(data_matrix.shape) #shows the dimensions of the data_matrix, 200 rows, 80 columns
Explanation: Numpy arrays
A NumPy array is a data structure well suited to storing matrices. A list of lists (like parsed_lines) can be converted to a NumPy array as follows:
End of explanation
# Task #8
plt.figure()
plt.plot(data_matrix[25,:])
# Task #9
plt.figure()
plt.plot(data_matrix[:,25])
Explanation: Quick exercises:
Task #8 [10%]: Plot the 25th row of data_matrix in the cell below. (The rows are very similar to each other.)
Task #9 [10%]: Plot the 25th column of data_matrix in the cell below. (Note: the columns don't have as much structure as the rows, so don't expect a pretty plot.)
End of explanation
plt.plot(np.mean(data_matrix, axis=0)) #mean row
plt.plot(np.mean(data_matrix, axis=1)) #mean column
Explanation: Broadcasting over axis
Reductions such as np.mean take an axis argument: axis=0 collapses the rows and returns one value per column, while axis=1 collapses the columns and returns one value per row.
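A quick shape check makes the difference concrete for the (200, 80) data_matrix loaded above:
print(data_matrix.shape)                   # (200, 80)
print(np.mean(data_matrix, axis=0).shape)  # (80,)  -> one mean per column
print(np.mean(data_matrix, axis=1).shape)  # (200,) -> one mean per row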
End of explanation
# Task #10
plt.figure()
plt.plot(np.max(data_matrix,axis=1))
# Task #11
plt.figure()
plt.plot(np.min(data_matrix,axis=0))
Explanation: Quick exercises:
Task #10 [10%]: Plot the maximum value (np.max) over columns
Task #11 [10%]: Plot the minimum value (np.min) over rows
End of explanation |
5,574 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Exercise Sheet 7 (Übungsblatt 7)
In-class exercises
Exercise 1 CFG
Step3: Exercise 2 CFG
Step5: Homework
Exercise 7 Plural for the subject
Extend the grammar built in the in-class exercises so that the subject can also appear in the plural.
To do so, you need to do the following
Step6: Exercise 8 Adverbs and verb-second word order
Now add the two adverbs heute and morgen to the grammar. Adverbs can in principle be placed very freely in a sentence. A peculiarity of German, however, is the so-called verb-second (V2) word order, which becomes apparent in sentences such as Heute schläft der Mann.
Try to implement all possibilities | Python Code:
grammar = """
S -> NP VP
NP -> DET[GEN=?x] NOM[GEN=?x]
NOM[GEN=?x] -> ADJ NOM[GEN=?x] | N[GEN=?x]
ADJ -> "schöne" | "kluge" | "dicke"
DET[GEN=mask,KAS=nom] -> "der"
DET[GEN=fem,KAS=dat] -> "der"
DET[GEN=fem,KAS=nom] -> "die"
DET[GEN=fem,KAS=akk] -> "die"
DET[GEN=neut,KAS=nom] -> "das"
DET[GEN=neut,KAS=akk] -> "das"
N[GEN=mask] -> "Mann"
N[GEN=fem] -> "Frau"
N[GEN=neut] -> "Buch"
VP -> V NP NP | V NP | V
V -> "gibt" | "schenkt" | "schläft" | "gefällt" | "kennt"
import nltk
from IPython.display import display
import sys
def test_grammar(grammar, sentences):
cfg = nltk.grammar.FeatureGrammar.fromstring(grammar)
parser = nltk.parse.FeatureEarleyChartParser(cfg)
for i, sent in enumerate(sentences, 1):
print("Satz {}: {}".format(i, sent))
sys.stdout.flush()
results = parser.parse(sent.split())
analyzed = False
for tree in results:
display(tree) # tree.draw() oder print(tree)
analyzed = True
if not analyzed:
print("Keine Analyse möglich", file=sys.stderr)
sys.stderr.flush()
pos_sentences = [
"der Mann schläft",
"der schöne Mann schläft",
"der Mann gibt der Frau das Buch"
]
neg_sentences = ["das Mann schläft", "das schöne Mann schläft"]
test_grammar(grammar, neg_sentences)
test_grammar(grammar, pos_sentences)
Explanation: Exercise Sheet 7 (Übungsblatt 7)
In-class exercises
Exercise 1 CFG: Agreement in noun phrases
The following grammar corresponds to the grammar from Exercise Sheet 4 at the end of its in-class exercises. (You may instead use the grammar you built yourself during that exercise as a starting point.)
Use the table below on the ambiguity of the forms of the German definite article as a guide, and adapt the grammar so that it only accepts grammatically correct noun phrases as parts of sentences. Concentrate on gender agreement between article and noun.
|Form|possible features|
|----|-----------------|
|der|[NUM=sg, GEN=mas, KAS=nom]|
||[NUM=sg, GEN=fem, KAS=dat]|
||[NUM=sg, GEN=fem, KAS=GEN]|
||[NUM=pl, KAS=GEN]|
|die|[NUM=sg, GEN=fem, KAS=nom]|
||[NUM=sg, GEN=fem, KAS=akk]|
||[NUM=pl, KAS=nom]|
||[NUM=pl, KAS=akk]|
|das|[NUM=sg, GEN=neu, KAS=nom]|
||[NUM=sg, GEN=neu, KAS=akk]|
End of explanation
grammar = """
S -> NP[KAS=nom] VP
NP[KAS=?y] -> DET[GEN=?x,KAS=?y] NOM[GEN=?x]
NOM[GEN=?x] -> ADJ NOM[GEN=?x] | N[GEN=?x]
ADJ -> "schöne" | "kluge" | "dicke"
DET[GEN=mask,KAS=nom] -> "der"
DET[GEN=fem,KAS=dat] -> "der"
DET[GEN=fem,KAS=nom] -> "die"
DET[GEN=fem,KAS=akk] -> "die"
DET[GEN=neut,KAS=nom] -> "das"
DET[GEN=neut,KAS=akk] -> "das"
N[GEN=mask] -> "Mann"
N[GEN=fem] -> "Frau"
N[GEN=neut] -> "Buch"
VP -> V[SUBCAT=ditr, VAL1=?x, VAL2=?y] NP[KAS=?x] NP[KAS=?y]
VP -> V[VAL=?x,SUBCAT=tr] NP[KAS=?x]
VP -> V[SUBCAT=intr]
V[SUBCAT=ditr, VAL1=dat, VAL2=akk] -> "gibt" | "schenkt"
V[SUBCAT=intr] -> "schläft"
V[SUBCAT=tr,VAL=dat] -> "gefällt"
V[SUBCAT=tr,VAL=akk] -> "kennt"
"""
pos_sentences.extend([
"das Buch gefällt der Frau",
"das Buch kennt die Frau"
])
neg_sentences.extend([
"der Mann schläft das Buch",
"die Frau gefällt das Buch",
"das Buch kennt",
"die Frau gibt das Buch",
"die Frau gibt die Frau das Buch"
])
test_grammar(grammar, pos_sentences)
test_grammar(grammar, neg_sentences)
Explanation: Exercise 2 CFG: Case
Next, case constraints are to be integrated into the grammar:
There is only one noun phrase in the nominative (the subject).
Depending on the verb's valency slots, only noun phrases in the correct case should be accepted.
Optional: Try to account for the relatively free word order of German.
End of explanation
grammar = """
PLEASE COPY THE GRAMMAR FROM (2) ABOVE HERE ONCE YOU HAVE COMPLETED IT
"""
pos_sentences.extend([
"die Männer geben der Frau das Buch",
"die Bücher gefallen der Frau",
"die Frauen schlafen"
])
neg_sentences.extend([
"der Mann geben der Frau das Buch",
"das Buch gefällt der Frauen",
"die Frauen schläft"
])
Explanation: Homework
Exercise 7 Plural for the subject
Extend the grammar built in the in-class exercises so that the subject can also appear in the plural.
To do so, you need to do the following:
1. Create lexical rules for the plural forms of the verbs, adjectives and nouns (the nominative is sufficient).
1. Complete the lexical rules for the article form die with the correct feature structure for the plural.
1. Formulate a number-agreement condition between verb and subject.
End of explanation
pos_sentences.extend([
"heute gibt der Mann der Frau das Buch",
"der Mann gibt heute der Frau das Buch",
"der Mann gibt der Frau heute das Buch",
"der Mann gibt der Frau das Buch heute"
])
neg_sentences.extend([
"heute der Mann gibt der Frau das Buch"
])
Explanation: Exercise 8 Adverbs and verb-second word order
Now add the two adverbs heute and morgen to the grammar. Adverbs can in principle be placed very freely in a sentence. A peculiarity of German, however, is the so-called verb-second (V2) word order, which becomes apparent in sentences such as Heute schläft der Mann.
Try to implement all possibilities:
End of explanation |
5,575 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Experiments
Step1: Let's create an experiment
Step2: Let's see the evolution of this vocabulary, after 20, 50 and 100 interactions.
Step3: We can graph measures on this population (more info on other possible measures | Python Code:
import naminggamesal.ngsimu as ngsimu
Explanation: Experiments
End of explanation
xp_cfg={
'pop_cfg':{
'voc_cfg':{
'voc_type':'matrix',
'M':5,
'W':10
},
'strat_cfg':{
'strat_type':'success_threshold',
'voc_update':'Minimal'
},
'interact_cfg':{
'interact_type':'speakerschoice'
},
'nbagent':10
},
'step':1
}
testexp=ngsimu.Experiment(**xp_cfg)
testexp
print(testexp)
testexp.continue_exp(1)
print(testexp)
testexp.visual()
Explanation: Let's create an experiment
End of explanation
Tvec=[20,50,100]
for i in range(100):
testexp.continue_exp()
#print str(testexp._poplist[-1])
for i in Tvec:
testexp.visual(tmax=i)
Explanation: Let's see the evolution of this vocabulary, after 20, 50 and 100 interactions.
End of explanation
#testexp.graph("srtheo").show()
test=testexp.graph("srtheo")
test.show()
testexp.graph("Nlinksurs").show()
testexp.graph("entropy").show()
testexp.graph("entropycouples").show()
Explanation: We can graph measures on this population (more info on other possible measures: Design_newMeasures.ipynb):
End of explanation |
5,576 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Wiki Networking - Extended Example
Introduction
Network graphs consisting of nodes and edges can be used to visualize the relationships between people. The results can inform a viewer of groups of relationships. This notebook has examples for all functions available and a variety of text mining examples. It demonstrates how to crawl Wiki articles and represent the links between these articles as a graph.
import statements
First, we will import a few necessary packages, including wikinetworking from this repository. It includes functions for crawling and text mining Wiki data as well as graphing the result of the crawl.
Step1: filter_links
This function accepts a PyQuery object and returns a list of Wikipedia article links from the article's main body text. It will not return links that are redirects or links that are anchors within other pages. Optionally, you can specify a DOM selector for the type of element containing links you wish to retrieve.
In this example, we create a PyQuery object for the Iron Man article on Wikipedia, retrieve its links with filter_links and output the number of links we retrieved.
Step2: In this example, we retrieve links from a list of Hip Hop musicians. Notice that each link is contained inside of a li list item HTML tag.
Step3: In this example, we do the same thing for Harry Potter Characters. Notice that many links are omitted - these links are redirects or links to sections within larger articles.
Step4: In this example, we are retrieving a list of Marvel Comics Characters beginning with the letter D. In this case, we specify the DOM selector .hatnote. This is because we only want links for Marvel characters that have a dedicated Main article. The Main article elements all have the .hatnote CSS class, so it is an easy way of selecting these articles. In this example, we simply print the contents of the list.
Step5: retrieve_multipage
This function retrieves data from a list of URLs. This is useful for when data on a Wikipedia category is broken up into multiple individual pages.
In this example, we are retrieving lists from all subpages from the List of Marvel Comics Characters. The URL pattern for these articles is https
Step6: write_list and read_list
These are convenience functions for writing and reading list data.
In this example, we write all of the Marvel character links we retrieved to a text file, and verify that we can read the data from the file.
Step7: intersection
Returns a list of elements that appear in both provided lists
The intersection function is useful for cross referencing links from one Wikipedia page with another. For example, the List of Hip Hop Musicians contains many musicians, but we may not have the time or resources to crawl every single artist. The BET Hip Hop Awards page has links to artists that have won an award and may be significant for our search, but it also contains links to songs and other media that may not be articles about individual Hip Hop artists. In addition, each Wiki page will have links to completely unrelated articles. By taking the intersection of both, we get a list containing links of only Hip Hop artists that have won a BET Hip Hop Award.
Step8: crawl
This crawls articles iteratively, with a two second delay between each page retrieval. It requires a starting URL fragment and an accept list of URLs the crawler should follow. It returns a dictionary structure with each URL fragment as a key and crawl data (links, title and depth) as a value.
In this example, we start a crawl at the article for Jadakiss. As the accept list of URLs that the crawler is allowed to follow, we will use the award_winner_links list from earlier. This isolates the crawl to only Artist winners of BET Hip Hop Awards. Therefore, our social network will be a network built of BET Hip Hop Award Winners and stored in the bet_winner_crawl dictionary.
Step9: directed_graph and undirected_graph
These functions flatten the raw crawl data into something a little more usable. Each returns a dictionary with only article title names as the key, and corresponding urls and a dictionary of edges and weights to each article. The directed_graph function allows each article to retain a separate edge representing back links, while the undirected_graph creates one edge for each pair of articles where the weight is the sum of the links between the two articles. For example, directed_graph might produce
Step10: save_dict and load_dict
These functions save and load Python dictionaries. This is convenient when you want to save crawl data.
Step11: create_graph
Now that we have a dictionary representing an undirected graph, we need to turn it into a networkx.graph object which can be drawn. Optionally, we can pre-generate a node layout and save it.
Step12: make_interactive_graph
The networkx.graph object can be rendered as an interactive, clickable graph.
Step13: You can save the resulting HTML of the interactive graph to a file, if you wish to load it outside of this notebook.
Step14: save_big_graph
If you would like to save a very high resolution version of this graph for use on a display system like Texas Advanced Computing Center's Stallion, you can use the save_big_graph function. The default output size at 600dpi will be 4800x3600 pixels. Warning | Python Code:
import wikinetworking as wn
import networkx as nx
from pyquery import PyQuery
%matplotlib inline
print "OK"
Explanation: Wiki Networking - Extended Example
Introduction
Network graphs consisting of nodes and edges can be used to visualize the relationships between people. The results can inform a viewer of groups of relationships. This notebook has examples for all functions available and a variety of text mining examples. It demonstrates how to crawl Wiki articles and represent the links between these articles as a graph.
import statements
First, we will import a few necessary packages, including wikinetworking from this repository. It includes functions for crawling and text mining Wiki data as well as graphing the result of the crawl.
End of explanation
iron_man_page = PyQuery(url="https://en.wikipedia.org/wiki/Iron_man")
iron_man_links = wn.filter_links(iron_man_page)
print len(iron_man_links), "links retrieved"
Explanation: filter_links
This function accepts a PyQuery object and returns a list of Wikipedia article links from the article's main body text. It will not return links that are redirects or links that are anchors within other pages. Optionally, you can specify a DOM selector for the type of element containing links you wish to retrieve.
In this example, we create a PyQuery object for the Iron Man article on Wikipedia, retrieve its links with filter_links and output the number of links we retrieved.
End of explanation
hip_hop_page = PyQuery(url="https://en.wikipedia.org/wiki/List_of_hip_hop_musicians")
hip_hop_links = wn.filter_links(hip_hop_page, "li")
print len(hip_hop_links), "links retrieved"
Explanation: In this example, we retrieve links from a list of Hip Hop musicians. Notice that each link is contained inside of a li list item HTML tag.
End of explanation
harry_potter_page = PyQuery(url="https://en.wikipedia.org/wiki/List_of_Harry_Potter_characters")
harry_potter_links = wn.filter_links(harry_potter_page, "li")
print len(harry_potter_links), "links retrieved"
Explanation: In this example, we do the same thing for Harry Potter Characters. Notice that many links are omitted - these links are redirects or links to sections within larger articles.
End of explanation
marvel_d_page = PyQuery(url="https://en.wikipedia.org/wiki/List_of_Marvel_Comics_characters:_D")
marvel_d_links = wn.filter_links(marvel_d_page, ".hatnote")
print marvel_d_links
Explanation: In this example, we are retrieving a list of Marvel Comics Characters beginning with the letter D. In this case, we specify the DOM selector .hatnote. This is because we only want links for Marvel characters that have a dedicated Main article. The Main article elements all have the .hatnote CSS class, so it is an easy way of selecting these articles. In this example, we simply print the contents of the list.
End of explanation
sections = [letter for letter in 'ABCDEFGHIJKLMNOPQRSTUVWXYZ']
sections.append('0-9')
urls = ["https://en.wikipedia.org/wiki/List_of_Marvel_Comics_characters:_" + section for section in sections]
print "URLs:", urls
character_links = wn.retrieve_multipage(urls, ".hatnote", True)
print len(character_links), "links retrieved"
Explanation: retrieve_multipage
This function retrieves data from a list of URLs. This is useful for when data on a Wikipedia category is broken up into multiple individual pages.
In this example, we are retrieving lists from all subpages from the List of Marvel Comics Characters. The URL pattern for these articles is https://en.wikipedia.org/wiki/List_of_Marvel_Comics_characters:_ followed by a section name.
End of explanation
wn.write_list(character_links, "all_marvel_chars.txt")
print len(wn.read_list("all_marvel_chars.txt")), "read"
Explanation: write_list and read_list
These are convenience functions for writing and reading list data.
In this example, we write all of the Marvel character links we retrieved to a text file, and verify that we can read the data from the file.
End of explanation
hip_hop_page = PyQuery(url="https://en.wikipedia.org/wiki/List_of_hip_hop_musicians")
hip_hop_links = wn.filter_links(hip_hop_page, "li")
print len(hip_hop_links), "links retrieved from List of Hip Hop Musicians"
bet_hip_hop_awards_page = PyQuery(url="https://en.wikipedia.org/wiki/BET_Hip_Hop_Awards")
bet_hip_hop_awards_links = wn.filter_links(bet_hip_hop_awards_page, "li")
print len(bet_hip_hop_awards_links), "links retrieved from BET Hip Hop Awards"
award_winner_links = wn.intersection(hip_hop_links, bet_hip_hop_awards_links)
print "BET Hip Hop Award winners:", award_winner_links
Explanation: intersection
Returns a list of elements that appear in both provided lists
The intersection function is useful for cross referencing links from one Wikipedia page with another. For example, the List of Hip Hop Musicians contains many musicians, but we may not have the time or resources to crawl every single artist. The BET Hip Hop Awards page has links to artists that have won an award and may be significant for our search, but it also contains links to songs and other media that may not be articles about individual Hip Hop artists. In addition, each Wiki page will have links to completely unrelated articles. By taking the intersection of both, we get a list containing links of only Hip Hop artists that have won a BET Hip Hop Award.
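To make the idea concrete, an order-preserving intersection helper could look roughly like the sketch below; this is only an illustration, not necessarily how wn.intersection is actually implemented.
def intersection_sketch(list_a, list_b):
    # keep the items of list_a that also occur in list_b, preserving order
    lookup = set(list_b)
    return [item for item in list_a if item in lookup]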
End of explanation
bet_winner_crawl = wn.crawl("/wiki/Jadakiss", accept=award_winner_links)
print bet_winner_crawl
Explanation: crawl
This crawls articles iteratively, with a two second delay between each page retrieval. It requires a starting URL fragment and an accept list of URLs the crawler should follow. It returns a dictionary structure with each URL fragment as a key and crawl data (links, title and depth) as a value.
In this example, we start a crawl at the article for Jadakiss. As the accept list of URLs that the crawler is allowed to follow, we will use the award_winner_links list from earlier. This isolates the crawl to only Artist winners of BET Hip Hop Awards. Therefore, our social network will be a network built of BET Hip Hop Award Winners and stored in the bet_winner_crawl dictionary.
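Conceptually the crawler behaves roughly like the breadth-first sketch below. This is illustrative only: the depth limit, the h1 title lookup and the exact result fields are assumptions, not wn.crawl's actual code.
import time
from collections import deque
def crawl_sketch(start, accept, max_depth=2):
    # breadth-first crawl over accepted article fragments, with a polite delay
    results = {}
    queue = deque([(start, 0)])
    while queue:
        fragment, depth = queue.popleft()
        if fragment in results or depth > max_depth:
            continue
        page = PyQuery(url="https://en.wikipedia.org" + fragment)
        links = [link for link in wn.filter_links(page) if link in accept]
        results[fragment] = {"title": page("h1").text(), "links": links, "depth": depth}
        for link in links:
            queue.append((link, depth + 1))
        time.sleep(2)  # two-second delay between page retrievals
    return results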
End of explanation
bet_directed_graph_data = wn.directed_graph(bet_winner_crawl)
print bet_directed_graph_data
bet_undirected_graph_data = wn.undirected_graph(bet_winner_crawl)
print bet_undirected_graph_data
Explanation: directed_graph and undirected_graph
These functions flatten the raw crawl data into something a little more usable. Each returns a dictionary with only article title names as the key, and corresponding urls and a dictionary of edges and weights to each article. The directed_graph function allows each article to retain a separate edge representing back links, while the undirected_graph creates one edge for each pair of articles where the weight is the sum of the links between the two articles. For example, directed_graph might produce:
{
'Iron Man' : {
'url' : '/wiki/Iron_Man',
'edges' : {
'Captain America' : 11
}
},
'Captain America' : {
'url' : '/wiki/Captain_America',
'edges' : {
'Iron Man' : 8
}
}
}
...while the same data passed to undirected_graph might produce:
{
'Iron Man' : {
'url' : '/wiki/Iron_Man',
'edges' : {
'Captain America' : 19
}
},
'Captain America' : {
'url' : '/wiki/Captain_America',
'edges' : {
}
}
}
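As a rough illustration of that collapsing step, the two directed weights for a pair could be folded together as below (a sketch only, not the library's actual implementation):
def to_undirected_sketch(directed):
    # fold the A->B and B->A weights into a single edge stored on one side of each pair
    undirected = {name: {'url': node['url'], 'edges': {}} for name, node in directed.items()}
    for name, node in directed.items():
        for neighbor, weight in node['edges'].items():
            already_stored = neighbor in undirected and name in undirected[neighbor]['edges']
            if not already_stored:
                back = directed.get(neighbor, {}).get('edges', {}).get(name, 0)
                undirected[name]['edges'][neighbor] = weight + back
    return undirected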
End of explanation
wn.save_dict(bet_undirected_graph_data, "bet_network.json")
print wn.load_dict("bet_network.json")
Explanation: save_dict and load_dict
These functions save and load Python dictionaries. This is convenient when you want to save crawl data.
End of explanation
# Just in case we don't want to re-run the crawl, we will load the data directly
import wikinetworking as wn
import networkx as nx
%matplotlib inline
bet_undirected_graph_data = wn.load_dict("bet_network.json")
# Now create the graph
graph = wn.create_graph(bet_undirected_graph_data)
layout = nx.spring_layout(graph)
Explanation: create_graph
Now that we have a dictionary representing an undirected graph, we need to turn it into a networkx.graph object which can be drawn. Optionally, we can pre-generate a node layout and save it.
End of explanation
graph_html = wn.make_interactive_graph(graph, pos=layout)
Explanation: make_interactive_graph
The networkx.graph object can be rendered as an interactive, clickable graph.
End of explanation
with open("bet_network.html", "w") as f:
f.write(graph_html)
f.close()
Explanation: You can save the resulting HTML of the interactive graph to a file, if you wish to load it outside of this notebook.
End of explanation
wn.save_big_graph(graph, pos=layout)
Explanation: save_big_graph
If you would like to save a very high resolution version of this graph for use on a display system like Texas Advanced Computing Center's Stallion, you can use the save_big_graph function. The default output size at 600dpi will be 4800x3600 pixels. Warning: This function can take a little while to run.
End of explanation |
5,577 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Variational Auto-Encoder notebook using tensorflow
(Edit) I fixed this notebook so that it can be run top to bottom to reproduce everything. Also to get the ipython notebook format, change "html" to "ipynb" at the end of the URL above to access the notebook file.
Introduction
Step1: Description of the MNIST dataset
Step3: Building the model
Let me briefly explain in words how a VAE works. First, like all generative models, VAEs are inherently unsupervised; here we aren't going to use the "labels" at all and we aren't going to use the validation or testing sets. The way a VAE works is that it takes an input, in this case an MNIST image, and maps it onto an internal encoding/embedding, which is then used to reconstruct the original input image.
In the specific case of a deep VAE, we have a neural network (the encoder/recognition network) that transforms the input, x, into a set of parameters that determine a probability distribution. This probability distribution is then sampled from to get a vector z. That vector is fed into a second network (the decoder/generator network) which attempts to output the original image, x, with maximum accuracy. The whole thing can be trained iteratively with simple gradient descent via backpropagation.
Note that one n_z-dimensional sample of the latent distribution will be drawn for each input image.
Step4: Loss and Optimizer
The loss function is in two parts. One that measures how well the reconstruction "fits" the original image, and one that measures the "complexity" of the latent distribution (acting as a regularizer).
Step5: Graphs, sessions and state-initialization
The above cells determine the tensorflow "graph" i.e. the graph consisting of nodes of ops connected by directed tensor-edges. But the graph above cannot be run yet. It hasn't been initialized into a "working graph" with statefull variables. The tensorflow tutorials refer to the graph as something like a "blueprint".
The session contains the state for a specific graph, including the current values of all of the tensors in the graph. Once the session is created, the variables are initialized. From now on, the variables have state, and can be read/viewed, or updated iteratively according to the data and the optimization op.
Step6: Interactive tensorflow
Because this is an ipython notebook and we are doing exploratory machine learning, we should have an ability to interact with the tensorflow model interactively. Note the line above sess = tf.InteractiveSession() which allows us to do things like some_tensor.eval(feed_dict={x
Step7: Interpretting the above interactive results
Note that the three printed reconstructions are not the same!! This isn't due to some bug. This is because the model is inherently stocastic. Given the same input, the reconstructions will be different. Especially here, with an untrained model.
Also note for the above tensors we printed, they are 100 by 784 dim tensors. The first dim, 100 is the batch size. The second dim, 784, is the flattened pixels in the reconstructed image.
This interactive testing can be usefull for tensorflow noobs, who want to make sure that their tensors and ops are compatible as they build thier evaluation graph one tensor/op at a time.
Remember, when using interactive tensorflow you have access to the current state of the model at any time. Just use eval. Even without interactive mode, it is still easy, you just have to make sure to keep the session which holds the tensorflow graph's state.
Adding a Saver to record checkpoints
The session contains state, namely the current values of all of the variables (parameters) in the model. We can save this state to a file periodically which can be used to restart training after an interuption, or to reload the session/model for any reason (including perhaps deploying a trained model to be evaluated on another machine)
Step8: Use scopes and names to organize your tensorflow graph
group tensors with with tf.name_scope('hidden') as scope
Step9: Run train with summaries
Now we finally run the training loop. We loop over all batches passing the feed dictionary in to the session run command with feed_dict. After display_step worth of epochs, w
we print the epoch number and the cost to the screen.
we save a checkpoint of the model.
we save summaries of the loss to a file.
Step10: Evaluate the training
One thing that we can do to evaluate training is print the reconstruction image with a new (untrained) session and compare that visually to the reconstruction that can be achieved with our trained model. See the plotting of those reconstructions bellow. The difference is immediately apparent! | Python Code:
import sys
import os
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
%matplotlib inline
np.random.seed(0)
tf.set_random_seed(0)
# get the script bellow from
# https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/mnist/input_data.py
import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
n_samples = mnist.train.num_examples
Explanation: Variational Auto-Encoder notebook using tensorflow
(Edit) I fixed this notebook so that it can be run top to bottom to reproduce everything. Also to get the ipython notebook format, change "html" to "ipynb" at the end of the URL above to access the notebook file.
Introduction:
In this blog series we are going to build towards a rather complicated deep learning model that can be applied to an interesting bioinformatics application. I don't want to explain the end goal right now, since I want to keep that confidential until it is published. However, the end goal is complicated enough, and I am unfamiliar enough with both the deep learning models and the tensorflow framework, that we aren't going to try to implement the end goal in one shot; rather, we are going to implement a series of smaller, simpler tensorflow models to build up some pieces that can be assembled later.
In this series we will investigate deep generative models such as Variational Auto-Encoders (VAE) and Generative Adversarial Networks (GAN). We will also develop recurrent neural networks, including LSTM and especially seq2seq models.
To start off, in this notebook, we look below at a VAE on the MNIST dataset, just to get familiar with VAEs, and generative models overall (which will be a piece that we need later), and to get familiar with tensorflow.
A lot of this notebook is taken and modified from https://jmetzen.github.io/2015-11-27/vae.html
End of explanation
# We have training data: images of 28x28=784 gray scale pixels (a list of 784-length numpy vectors):
print(mnist.train.images[0].shape) # 784
print(len(mnist.train.images)) # 55000
print(sys.getsizeof(mnist.train.images)) # 172 million bytes (fits in memory, yay!)
# the labels are stored as one-hot encoding (10 dimensional numpy vectors)
print(mnist.train.labels[0].shape) # 10
print(len(mnist.train.labels)) # 55000
# We also have 5000 validation image-label pairs
print(len(mnist.validation.labels)) # 5000
# and 10000 testing image-label pairs
print(len(mnist.test.labels)) # 10000
Explanation: Description of the MNIST dataset:
Let me describe the data which was downloaded/loaded in the cell above:
The data is the familiar MNIST dataset, a classic dataset for supervised machine learning, consisting of images of hand-drawn digits and their labels.
End of explanation
# hyper params:
n_hidden_recog_1=500 # 1st layer encoder neurons
n_hidden_recog_2=500 # 2nd layer encoder neurons
n_hidden_gener_1=500 # 1st layer decoder neurons
n_hidden_gener_2=500 # 2nd layer decoder neurons
n_input=784 # MNIST data input (img shape: 28*28)
n_z=20
transfer_fct=tf.nn.softplus
learning_rate=0.001
batch_size=100
# CREATE NETWORK
# 1) input placeholder
x = tf.placeholder(tf.float32, [None, n_input])
# 2) weights and biases variables
def xavier_init(fan_in, fan_out, constant=1):
    """Xavier initialization of network weights"""
# https://stackoverflow.com/questions/33640581/how-to-do-xavier-initialization-on-tensorflow
low = -constant*np.sqrt(6.0/(fan_in + fan_out))
high = constant*np.sqrt(6.0/(fan_in + fan_out))
return tf.random_uniform((fan_in, fan_out),
minval=low, maxval=high,
dtype=tf.float32)
wr_h1 = tf.Variable(xavier_init(n_input, n_hidden_recog_1))
wr_h2 = tf.Variable(xavier_init(n_hidden_recog_1, n_hidden_recog_2))
wr_out_mean = tf.Variable(xavier_init(n_hidden_recog_2, n_z))
wr_out_log_sigma = tf.Variable(xavier_init(n_hidden_recog_2, n_z))
br_b1 = tf.Variable(tf.zeros([n_hidden_recog_1], dtype=tf.float32))
br_b2 = tf.Variable(tf.zeros([n_hidden_recog_2], dtype=tf.float32))
br_out_mean = tf.Variable(tf.zeros([n_z], dtype=tf.float32))
br_out_log_sigma = tf.Variable(tf.zeros([n_z], dtype=tf.float32))
wg_h1 = tf.Variable(xavier_init(n_z, n_hidden_gener_1))
wg_h2 = tf.Variable(xavier_init(n_hidden_gener_1, n_hidden_gener_2))
wg_out_mean = tf.Variable(xavier_init(n_hidden_gener_2, n_input))
# wg_out_log_sigma = tf.Variable(xavier_init(n_hidden_gener_2, n_input))
bg_b1 = tf.Variable(tf.zeros([n_hidden_gener_1], dtype=tf.float32))
bg_b2 = tf.Variable(tf.zeros([n_hidden_gener_2], dtype=tf.float32))
bg_out_mean = tf.Variable(tf.zeros([n_input], dtype=tf.float32))
# 3) recognition network
# use recognition network to predict mean and (log) variance of (latent) Gaussian distribution z (n_z dimensional)
r_layer_1 = transfer_fct(tf.add(tf.matmul(x, wr_h1), br_b1))
r_layer_2 = transfer_fct(tf.add(tf.matmul(r_layer_1, wr_h2), br_b2))
z_mean = tf.add(tf.matmul(r_layer_2, wr_out_mean), br_out_mean)
z_sigma = tf.add(tf.matmul(r_layer_2, wr_out_log_sigma), br_out_log_sigma)
# 4) do sampling on recognition network to get latent variables
# draw one n_z dimensional sample (for each input in batch), from normal distribution
eps = tf.random_normal((batch_size, n_z), 0, 1, dtype=tf.float32)
# scale that set of samples by predicted mu and epsilon to get samples of z, the latent distribution
# z = mu + sigma*epsilon
z = tf.add(z_mean, tf.mul(tf.sqrt(tf.exp(z_sigma)), eps))
# 5) use generator network to predict mean of Bernoulli distribution of reconstructed input
g_layer_1 = transfer_fct(tf.add(tf.matmul(z, wg_h1), bg_b1))
g_layer_2 = transfer_fct(tf.add(tf.matmul(g_layer_1, wg_h2), bg_b2))
x_reconstr_mean = tf.nn.sigmoid(tf.add(tf.matmul(g_layer_2, wg_out_mean), bg_out_mean))
Explanation: Building the model
Let me briefly explain in words how a VAE works. First, like all generative models, VAEs are inherently unsupervised; here we aren't going to use the "labels" at all and we aren't going to use the validation or testing sets. The way a VAE works is that it takes an input, in this case an MNIST image, and maps it onto an internal encoding/embedding, which is then used to reconstruct the original input image.
In the specific case of a deep VAE, we have a neural network (the encoder/recognition network) that transforms the input, x, into a set of parameters that determine a probability distribution. This probability distribution is then sampled from to get a vector z. That vector is fed into a second network (the decoder/generator network) which attempts to output the original image, x, with maximum accuracy. The whole thing can be trained iteratively with simple gradient descent via backpropagation.
Note that one n_z-dimensional sample of the latent distribution will be drawn for each input image.
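In equation form, the sampling step above is the usual reparameterization trick; since z_sigma in the code holds $\log\sigma^2$, the standard deviation is recovered as $\sigma = \sqrt{\exp(\log\sigma^2)}$:
$$ z = \mu(x) + \sigma(x) \odot \epsilon, \qquad \epsilon \sim \mathcal{N}(0, I) $$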
End of explanation
# DEFINE LOSS AND OPTIMIZER
# The loss is composed of two terms:
# 1.) The reconstruction loss (the negative log probability
# of the input under the reconstructed Bernoulli distribution
# induced by the decoder in the data space).
# This can be interpreted as the number of "nats" required
# for reconstructing the input when the activation in latent
# is given.
# Adding 1e-10 to avoid evaluation of log(0.0)
reconstr_loss = -tf.reduce_sum(x * tf.log(1e-10 + x_reconstr_mean) + (1-x) * tf.log(1e-10 + 1 - x_reconstr_mean), 1)
# 2.) The latent loss, which is defined as the Kullback Leibler divergence
## between the distribution in latent space induced by the encoder on
# the data and some prior. This acts as a kind of regularizer.
# This can be interpreted as the number of "nats" required
#     for transmitting the latent space distribution given
# the prior.
latent_loss = -0.5 * tf.reduce_sum(1 + z_sigma - tf.square(z_mean) - tf.exp(z_sigma), 1)
# Since reconstr_loss and latent_loss are in terms of "nats" they
# should be on similar scales. So we can add them together.
cost = tf.reduce_mean(reconstr_loss + latent_loss) # average over batch
# 3) set up optimizer (use ADAM optimizer)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
Explanation: Loss and Optimizer
The loss function has two parts: one term measures how well the reconstruction "fits" the original image, and the other measures the "complexity" of the latent distribution (acting as a regularizer).
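Written out, the two terms computed in the cell above are, per input $x$ (with $\hat{x}$ the reconstructed Bernoulli mean and $\mu_k, \sigma_k^2$ the latent Gaussian parameters):
$$ \mathcal{L}(x) = -\sum_j \big[ x_j \log \hat{x}_j + (1 - x_j) \log (1 - \hat{x}_j) \big] \; - \; \frac{1}{2} \sum_k \big( 1 + \log \sigma_k^2 - \mu_k^2 - \sigma_k^2 \big) $$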
End of explanation
# INITALIZE VARIABLES AND TF SESSION
init = tf.initialize_all_variables()
sess = tf.InteractiveSession()
sess.run(init)
Explanation: Graphs, sessions and state-initialization
The above cells determine the tensorflow "graph", i.e. the graph consisting of nodes of ops connected by directed tensor-edges. But the graph above cannot be run yet. It hasn't been initialized into a "working graph" with stateful variables. The tensorflow tutorials refer to the graph as something like a "blueprint".
The session contains the state for a specific graph, including the current values of all of the tensors in the graph. Once the session is created, the variables are initialized. From now on, the variables have state, and can be read/viewed, or updated iteratively according to the data and the optimization op.
End of explanation
# INTERACTIVE TESTING
# get a batch of inputs
batch_xs, _ = mnist.train.next_batch(batch_size)
# get the reconstructed mean for those inputs
# (on a completely untrained model)
# below is the "interactive version" to "read" a tensor (given an input in the feed_dict)
print(x_reconstr_mean.eval(feed_dict={x: batch_xs}).shape)
print(x_reconstr_mean.eval(feed_dict={x: batch_xs}))
print("----------------------------------------------------")
# below is the same but using explicit reference to the session state
print(tf.get_default_session().run(x_reconstr_mean, feed_dict={x: batch_xs}).shape)
print(tf.get_default_session().run(x_reconstr_mean, feed_dict={x: batch_xs}))
# and this is also the same thing:
print(sess.run(x_reconstr_mean, feed_dict={x: batch_xs}).shape)
print(sess.run(x_reconstr_mean, feed_dict={x: batch_xs}))
Explanation: Interactive tensorflow
Because this is an ipython notebook and we are doing exploratory machine learning, we should have an ability to interact with the tensorflow model interactively. Note the line above sess = tf.InteractiveSession() which allows us to do things like some_tensor.eval(feed_dict={x: batch_xs}). Some tensors can simply be evaluated without passing in an input, but only if they are something that does not depend on an input; remember, tensorflow is a directed graph of op-nodes and tensor-edges that denote dependencies. Variables like weights and biases do not depend on any input, but something like the reconstructed input depends on the input.
End of explanation
# make a saver object
saver = tf.train.Saver()
# save the current state of the session to file "model.ckpt"
save_path = saver.save(sess, "model.ckpt")
# a binding to a session can be restored from a file with the following
restored_session = tf.Session()
saver.restore(restored_session, "model.ckpt")
# prove the two sessions are equal:
# here we evaluate a weight variable that does not depend on input hence no feed_dict is nessisary
print(sess.run(wr_h1))
print("-------------------------------------------------------------------")
print(restored_session.run(wr_h1))
Explanation: Interpreting the above interactive results
Note that the three printed reconstructions are not the same!! This isn't due to some bug. This is because the model is inherently stochastic. Given the same input, the reconstructions will be different. Especially here, with an untrained model.
Also note that the tensors we printed above are 100 by 784 dimensional. The first dim, 100, is the batch size. The second dim, 784, is the flattened pixels of the reconstructed image.
This interactive testing can be useful for tensorflow newcomers who want to make sure that their tensors and ops are compatible as they build their evaluation graph one tensor/op at a time.
Remember, when using interactive tensorflow you have access to the current state of the model at any time. Just use eval. Even without interactive mode, it is still easy; you just have to make sure to keep the session which holds the tensorflow graph's state.
Adding a Saver to record checkpoints
The session contains state, namely the current values of all of the variables (parameters) in the model. We can save this state to a file periodically, which can be used to restart training after an interruption, or to reload the session/model for any reason (including perhaps deploying a trained model to be evaluated on another machine).
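As a small sketch of how such a restart might look, assuming checkpoints were written with a tf.train.Saver into a directory (the training loop further down saves into "model/"):
# locate the newest checkpoint in the directory and restore it into a session
latest = tf.train.latest_checkpoint("model/")
if latest is not None:
    saver.restore(sess, latest)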
End of explanation
# remake the whole graph with scopes, names, summaries
wr_h1 = tf.Variable(xavier_init(n_input, n_hidden_recog_1))
wr_h2 = tf.Variable(xavier_init(n_hidden_recog_1, n_hidden_recog_2))
wr_out_mean = tf.Variable(xavier_init(n_hidden_recog_2, n_z))
wr_out_log_sigma = tf.Variable(xavier_init(n_hidden_recog_2, n_z))
br_b1 = tf.Variable(tf.zeros([n_hidden_recog_1], dtype=tf.float32))
br_b2 = tf.Variable(tf.zeros([n_hidden_recog_2], dtype=tf.float32))
br_out_mean = tf.Variable(tf.zeros([n_z], dtype=tf.float32))
br_out_log_sigma = tf.Variable(tf.zeros([n_z], dtype=tf.float32))
wg_h1 = tf.Variable(xavier_init(n_z, n_hidden_gener_1))
wg_h2 = tf.Variable(xavier_init(n_hidden_gener_1, n_hidden_gener_2))
wg_out_mean = tf.Variable(xavier_init(n_hidden_gener_2, n_input))
# wg_out_log_sigma = tf.Variable(xavier_init(n_hidden_gener_2, n_input))
bg_b1 = tf.Variable(tf.zeros([n_hidden_gener_1], dtype=tf.float32))
bg_b2 = tf.Variable(tf.zeros([n_hidden_gener_2], dtype=tf.float32))
bg_out_mean = tf.Variable(tf.zeros([n_input], dtype=tf.float32))
# 3) recognition network
# use recognition network to predict mean and (log) variance of (latent) Gaussian distribution z (n_z dimensional)
with tf.name_scope('recognition-encoding'):
r_layer_1 = transfer_fct(tf.add(tf.matmul(x, wr_h1), br_b1))
r_layer_2 = transfer_fct(tf.add(tf.matmul(r_layer_1, wr_h2), br_b2))
z_mean = tf.add(tf.matmul(r_layer_2, wr_out_mean), br_out_mean)
z_sigma = tf.add(tf.matmul(r_layer_2, wr_out_log_sigma), br_out_log_sigma)
# 4) do sampling on recognition network to get latent variables
# draw one n_z dimensional sample (for each input in batch), from normal distribution
eps = tf.random_normal((batch_size, n_z), 0, 1, dtype=tf.float32)
# scale that set of samples by predicted mu and epsilon to get samples of z, the latent distribution
# z = mu + sigma*epsilon
z = tf.add(z_mean, tf.mul(tf.sqrt(tf.exp(z_sigma)), eps))
# 5) use generator network to predict mean of Bernoulli distribution of reconstructed input
with tf.name_scope('generator-decoding'):
g_layer_1 = transfer_fct(tf.add(tf.matmul(z, wg_h1), bg_b1))
g_layer_2 = transfer_fct(tf.add(tf.matmul(g_layer_1, wg_h2), bg_b2))
x_reconstr_mean = tf.nn.sigmoid(tf.add(tf.matmul(g_layer_2, wg_out_mean), bg_out_mean))
reconstr_loss = -tf.reduce_sum(x * tf.log(1e-10 + x_reconstr_mean) + (1-x) * tf.log(1e-10 + 1 - x_reconstr_mean), 1)
latent_loss = -0.5 * tf.reduce_sum(1 + z_sigma - tf.square(z_mean) - tf.exp(z_sigma), 1)
batch_cost = reconstr_loss + latent_loss
cost = tf.reduce_mean(batch_cost) # average over batch
stdev_cost =tf.sqrt(tf.reduce_sum(tf.square(batch_cost - cost)))
tf.scalar_summary("mean_cost", cost)
tf.scalar_summary("sttdev_cost", stdev_cost)
tf.histogram_summary("histo_cost", batch_cost)
# 3) set up optimizer (use ADAM optimizer)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
# this is the "summary op"
merged = tf.merge_all_summaries()
# INITALIZE VARIABLES AND TF SESSION
sess = tf.Session()
sess.run(tf.initialize_all_variables())
# this is the summary writer
train_writer = tf.train.SummaryWriter("summaries/train", sess.graph)
Explanation: Use scopes and names to organize your tensorflow graph
group tensors with with tf.name_scope('hidden') as scope:
and name tensors with the name property like a = tf.constant(5, name='alpha') prior to graph visualization.
Use summaries to record how your model changes over training-time
Lets reorganize our computational graph with scopes and names, and add some summaries:
There are three summaries which we will add to the "cost" tensor.
We will record its mean, standard deviation and histogram over a batch.
Below is the whole graph from above again. Lots of copying and pasting.
End of explanation
# DO TRAINING
learning_rate=0.001
batch_size=100
training_epochs=10
display_step=1
save_ckpt_dir="model/"
os.mkdir(save_ckpt_dir)
saver = tf.train.Saver()
# Training cycle
for epoch in range(training_epochs):
avg_cost = 0
total_batch = int(n_samples / batch_size)
# Loop over all batches
for i in range(total_batch):
batch_xs, _ = mnist.train.next_batch(batch_size)
# Fit training using batch data ( don't want to eval summaries every time during training)
        _, loss = sess.run([optimizer, cost],
feed_dict={x: batch_xs})
avg_cost += loss / n_samples * batch_size
if epoch % display_step == 0:
# At the end of every epoch print loss, save model, and save summaries
run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()
# run op to get summaries (don't want to be running this everytime during training)
summary = sess.run(merged,
feed_dict={x: batch_xs},
options=run_options,
run_metadata=run_metadata)
# write summaries
train_writer.add_run_metadata(run_metadata, 'epoch%03d' % epoch)
train_writer.add_summary(summary, epoch)
# write ckpt
save_path = saver.save(sess, save_ckpt_dir + "model_" + str(epoch) + ".ckpt")
        # print average cost for this epoch
print("Epoch:", '%04d' % (epoch+1), "average loss:", "{:.9f}".format(avg_cost))
Explanation: Run train with summaries
Now we finally run the training loop. We loop over all batches, passing the feed dictionary into the session run command with feed_dict. After every display_step worth of epochs, we:
we print the epoch number and the cost to the screen.
we save a checkpoint of the model.
we save summaries of the loss to a file.
End of explanation
# DO RECONSTRUCTION / PLOTTING
def do_reconstruction(sess):
x_sample = mnist.test.next_batch(100)[0]
x_reconstruct = sess.run(x_reconstr_mean, feed_dict={x: x_sample})
plt.figure(figsize=(8, 12))
examples_to_plot = 3
for i in range(examples_to_plot):
plt.subplot(examples_to_plot, 2, 2*i + 1)
plt.imshow(x_sample[i].reshape(28, 28), vmin=0, vmax=1)
plt.title("Test input")
plt.colorbar()
plt.subplot(examples_to_plot, 2, 2*i + 2)
plt.imshow(x_reconstruct[i].reshape(28, 28), vmin=0, vmax=1)
plt.title("Reconstruction")
plt.colorbar()
plt.tight_layout()
new_sess = tf.Session()
new_sess.run(tf.initialize_all_variables())
# do reconstruction before training
do_reconstruction(new_sess)
# do reconstruction after training
do_reconstruction(sess)
Explanation: Evaluate the training
One thing that we can do to evaluate training is print the reconstruction image with a new (untrained) session and compare that visually to the reconstruction that can be achieved with our trained model. See the plotting of those reconstructions below. The difference is immediately apparent!
End of explanation |
5,578 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Creating MNE-Python data structures from scratch
This tutorial shows how to create MNE-Python's core data structures using an
existing
Step1: Creating ~mne.Info objects
.. sidebar
Step2: You can see in the output above that, by default, the channels are assigned
as type "misc" (where it says chs
Step3: If the channel names follow one of the standard montage naming schemes, their
spatial locations can be automatically added using the
~mne.Info.set_montage method
Step4: .. sidebar
Step5: Creating ~mne.io.Raw objects
.. sidebar
Step6: Creating ~mne.Epochs objects
To create an ~mne.Epochs object from scratch, you can use the
mne.EpochsArray class constructor, which takes an ~mne.Info object and a
Step7: Since we did not supply an events array, the ~mne.EpochsArray constructor
automatically created one for us, with all epochs having the same event
number
Step8: If we want to simulate having different experimental conditions, we can pass
an event array (and an event ID dictionary) to the constructor. Since our
epochs are 1 second long and have 200 samples/second, we'll put our events
spaced 200 samples apart, and pass tmin=-0.5, so that the events
land in the middle of each epoch (the events are always placed at time=0 in
each epoch).
Step9: You could also create simulated epochs by using the normal ~mne.Epochs
(not ~mne.EpochsArray) constructor on the simulated ~mne.io.RawArray
object, by creating an events array (e.g., using
mne.make_fixed_length_events) and extracting epochs around those events.
Creating ~mne.Evoked Objects
If you already have data that was averaged across trials, you can use it to
create an ~mne.Evoked object using the ~mne.EvokedArray class
constructor. It requires an ~mne.Info object and a data array of shape
(n_channels, n_times), and has an optional tmin parameter like
~mne.EpochsArray does. It also has a parameter nave indicating how many
trials were averaged together, and a comment parameter useful for keeping
track of experimental conditions, etc. Here we'll do the averaging on our
NumPy array and use the resulting averaged data to make our ~mne.Evoked. | Python Code:
import mne
import numpy as np
Explanation: Creating MNE-Python data structures from scratch
This tutorial shows how to create MNE-Python's core data structures using an
existing :class:NumPy array <numpy.ndarray> of (real or synthetic) data.
We begin by importing the necessary Python modules:
End of explanation
# Create some dummy metadata
n_channels = 32
sampling_freq = 200 # in Hertz
info = mne.create_info(n_channels, sfreq=sampling_freq)
print(info)
Explanation: Creating ~mne.Info objects
.. sidebar:: Info objects
For full documentation on the `~mne.Info` object, see
`tut-info-class`.
The core data structures for continuous (~mne.io.Raw), discontinuous
(~mne.Epochs), and averaged (~mne.Evoked) data all have an info
attribute comprising an mne.Info object. When reading recorded data using
one of the functions in the mne.io submodule, ~mne.Info objects are
created and populated automatically. But if we want to create a
~mne.io.Raw, ~mne.Epochs, or ~mne.Evoked object from scratch, we need
to create an appropriate ~mne.Info object as well. The easiest way to do
this is with the mne.create_info function to initialize the required info
fields. Additional fields can be assigned later as one would with a regular
:class:dictionary <dict>.
To initialize a minimal ~mne.Info object requires a list of channel names,
and the sampling frequency. As a convenience for simulated data, channel
names can be provided as a single integer, and the names will be
automatically created as sequential integers (starting with 0):
End of explanation
ch_names = [f'MEG{n:03}' for n in range(1, 10)] + ['EOG001']
ch_types = ['mag', 'grad', 'grad'] * 3 + ['eog']
info = mne.create_info(ch_names, ch_types=ch_types, sfreq=sampling_freq)
print(info)
Explanation: You can see in the output above that, by default, the channels are assigned
as type "misc" (where it says chs: 32 MISC). You can assign the channel
type when initializing the ~mne.Info object if you want:
End of explanation
ch_names = ['Fp1', 'Fp2', 'Fz', 'Cz', 'Pz', 'O1', 'O2']
ch_types = ['eeg'] * 7
info = mne.create_info(ch_names, ch_types=ch_types, sfreq=sampling_freq)
info.set_montage('standard_1020')
Explanation: If the channel names follow one of the standard montage naming schemes, their
spatial locations can be automatically added using the
~mne.Info.set_montage method:
End of explanation
info['description'] = 'My custom dataset'
info['bads'] = ['O1'] # Names of bad channels
print(info)
Explanation: .. sidebar:: Info consistency
When assigning new values to the fields of an `~mne.Info` object, it is
important that the fields stay consistent. if there are ``N`` channels:
- The length of the channel information field ``chs`` must be ``N``.
- The length of the ``ch_names`` field must be ``N``.
- The ``ch_names`` field should be consistent with the ``name``
field of the channel information contained in ``chs``.
Note the new field dig that includes our seven channel locations as well
as theoretical values for the three
:term:cardinal scalp landmarks <fiducial point>.
Additional fields can be added in the same way that Python dictionaries are
modified, using square-bracket key assignment:
End of explanation
times = np.linspace(0, 1, sampling_freq, endpoint=False)
sine = np.sin(20 * np.pi * times)
cosine = np.cos(10 * np.pi * times)
data = np.array([sine, cosine])
info = mne.create_info(ch_names=['10 Hz sine', '5 Hz cosine'],
ch_types=['misc'] * 2,
sfreq=sampling_freq)
simulated_raw = mne.io.RawArray(data, info)
simulated_raw.plot(show_scrollbars=False, show_scalebars=False)
Explanation: Creating ~mne.io.Raw objects
.. sidebar:: Units
The expected units for the different channel types are:
- Volts: eeg, eog, seeg, dbs, emg, ecg, bio, ecog
- Teslas: mag
- Teslas/meter: grad
- Molar: hbo, hbr
- Amperes: dipole
- Arbitrary units: misc
To create a ~mne.io.Raw object from scratch, you can use the
mne.io.RawArray class constructor, which takes an ~mne.Info object and a
:class:NumPy array <numpy.ndarray> of shape (n_channels, n_samples).
Here, we'll create some sinusoidal data and plot it:
End of explanation
data = np.array([[0.2 * sine, 1.0 * cosine],
[0.4 * sine, 0.8 * cosine],
[0.6 * sine, 0.6 * cosine],
[0.8 * sine, 0.4 * cosine],
[1.0 * sine, 0.2 * cosine]])
simulated_epochs = mne.EpochsArray(data, info)
simulated_epochs.plot(picks='misc', show_scrollbars=False)
Explanation: Creating ~mne.Epochs objects
To create an ~mne.Epochs object from scratch, you can use the
mne.EpochsArray class constructor, which takes an ~mne.Info object and a
:class:NumPy array <numpy.ndarray> of shape (n_epochs, n_channels,
n_samples). Here we'll create 5 epochs of our 2-channel data, and plot it.
Notice that we have to pass picks='misc' to the ~mne.Epochs.plot
method, because by default it only plots :term:data channels.
End of explanation
print(simulated_epochs.events[:, -1])
Explanation: Since we did not supply an events array, the ~mne.EpochsArray constructor
automatically created one for us, with all epochs having the same event
number:
End of explanation
events = np.column_stack((np.arange(0, 1000, sampling_freq),
np.zeros(5, dtype=int),
np.array([1, 2, 1, 2, 1])))
event_dict = dict(condition_A=1, condition_B=2)
simulated_epochs = mne.EpochsArray(data, info, tmin=-0.5, events=events,
event_id=event_dict)
simulated_epochs.plot(picks='misc', show_scrollbars=False, events=events,
event_id=event_dict)
Explanation: If we want to simulate having different experimental conditions, we can pass
an event array (and an event ID dictionary) to the constructor. Since our
epochs are 1 second long and have 200 samples/second, we'll put our events
spaced 200 samples apart, and pass tmin=-0.5, so that the events
land in the middle of each epoch (the events are always placed at time=0 in
each epoch).
End of explanation
# Create the Evoked object
evoked_array = mne.EvokedArray(data.mean(axis=0), info, tmin=-0.5,
nave=data.shape[0], comment='simulated')
print(evoked_array)
evoked_array.plot()
Explanation: You could also create simulated epochs by using the normal ~mne.Epochs
(not ~mne.EpochsArray) constructor on the simulated ~mne.io.RawArray
object, by creating an events array (e.g., using
mne.make_fixed_length_events) and extracting epochs around those events.
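For completeness, here is a minimal sketch of that alternative route. The 0.5 s event spacing, the tmin/tmax values and the names events_fixed / epochs_from_raw are illustrative choices for this short simulated recording, not part of the original example:
events_fixed = mne.make_fixed_length_events(simulated_raw, duration=0.5)
epochs_from_raw = mne.Epochs(simulated_raw, events_fixed, tmin=0., tmax=0.45,
                             picks='misc', baseline=None, preload=True)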
Creating ~mne.Evoked Objects
If you already have data that was averaged across trials, you can use it to
create an ~mne.Evoked object using the ~mne.EvokedArray class
constructor. It requires an ~mne.Info object and a data array of shape
(n_channels, n_times), and has an optional tmin parameter like
~mne.EpochsArray does. It also has a parameter nave indicating how many
trials were averaged together, and a comment parameter useful for keeping
track of experimental conditions, etc. Here we'll do the averaging on our
NumPy array and use the resulting averaged data to make our ~mne.Evoked.
End of explanation |
5,579 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Loading Data
Statistics for my data
Step1: Loading Data using Pandas
Step2: Hypothesis and Questions
..........
Things I need to do
Step5: Interpretation of Histograms
Based on the histograms of the respective strains, it is clear that the data does not follow a normal distribution.
Therefore t-tests and linear regression cannot be applied to this data set as planned. | Python Code:
# Identify version of software used
pd.__version__
#Identify version of software used
np.__version__
# import libraries
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
#stats library
import statsmodels.api as sm
import scipy
#T-test is imported to complete the statistical analysis
from scipy.stats import ttest_ind
from scipy import stats
#The function below is used to show the plots within the notebook
%matplotlib inline
Explanation: Loading Data
Statistics for my data
End of explanation
data=pd.read_csv('../data/Testdata.dat', delimiter=' ')
# Print the first 8 rows of the dataset
data.head(8)
#Print the last 8 rows of the dataset
data.tail(8)
# Commands used to check the title names in each column as some of the titles were omitted
data.dtypes.head()
Explanation: Loading Data using Pandas
End of explanation
# Here we extract only two columns from the data as these are the main variables for the statistical analysis
strain_df=data[['strain','speed']]
strain_df.head()
# Eliminate NaN from the dataset
strain_df=strain_df.dropna()
strain_df.head()
#Resample the data to group by strain
strain_resampled=strain_df.groupby('strain')
strain_resampled.head()
#Created a histogram to check the normal distribution the data
strain_resampled.hist(column='speed', bins=50)
# I need help adding titles to these histograms
Explanation: Hypothesis and Questions
..........
Things I need to do:
Format the date properly. I need help using a regular expression to fix the format of my date (a sketch is given in the cell below)
End of explanation
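# A sketch of the date clean-up mentioned above. The column name 'date' and the
# '%d-%m-%Y' format string are hypothetical -- replace them with whatever the raw
# file actually uses. pandas can normalise the column without a hand-written regex.
if 'date' in data.columns:
    data['date'] = pd.to_datetime(data['date'], format='%d-%m-%Y', errors='coerce')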
# Apply the Kruskal-Wallis test (non-parametric) to compare the speed distributions of the strains
groups = [grp['speed'].values for name, grp in strain_df.groupby('strain')]
scipy.stats.mstats.kruskalwallis(*groups)
def test_mean1():
    '''This function was created to give the mean values for the different strains in the dataset.
    The input of the function is the raw speed of the strains
    while the output is the mean of the strains tested'''
    mean = strain_resampled.mean()
    assert (mean['speed'] > 0).all(), 'The mean speed of every strain should be greater than 0.00'
    return mean
#assert mean == speed > 0.00, 'The mean should be greater than 0'
#assert mean == speed < 0.00, 'The mean should not be less than 0'
def test_mean1():
    '''This function was created to give the mean speed for the MX1027 strain.
    The input of the function is the raw speed of the strain
    while the output is the mean of the strain tested'''
    mean = MX1027['speed'].mean()
    assert mean > 0, 'The mean should be greater than 0.00'
    return mean
#assert mean == speed > 0.00, 'The mean should be greater than 0'
#assert mean == speed < 0.00, 'The mean should not be less than 0'
MX1027=strain_df.groupby('strain').get_group('MX1027')
MX1027.head()
MX1027.mean()
N2=strain_df.groupby('strain').get_group('N2')
N2.head()
N2.mean()
def test_mean():
    '''Check that every N2 speed measurement is non-negative, then return the mean speed.'''
    for n in N2['speed']:
        assert n >= 0, 'Every speed value should be at least 0'
    mean = N2['speed'].mean()
    return mean
def test_mean2():
    '''Check that every MX1027 speed measurement is non-negative, then return the mean speed.'''
    for n in MX1027['speed']:
        assert n >= 0, "Every speed value should be at least 0"
    mean_2 = MX1027['speed'].mean()
    print('mean is:', mean_2)
    return mean_2
test_mean()
test_mean2()
N2.mean()
MX1027_mean = 0.127953
N2_mean = 0.084662
print(MX1027_mean)
print(N2_mean)
new_data= strain_df.iloc[3,1]
new_data1= strain_df.iloc[4,1]
new_data.mean()
new_data
new_data.mean()
#def mean():
#mean=strain_resampled.mean()
#return(mean)
#mean()
# A more generic function to find the mean of the speed column
def hope_mean(strain_df, speed):
    n = len(strain_df)
    if n == 0:
        return 0.0
    hope_mean = (sum(strain_df.speed)) / n
    print(hope_mean)
    return hope_mean
#Create test functions of the mean of N2
def test_mean1():
    '''The input of the function is the mean of N2
    where the output is the expected mean'''
    obs = N2['speed'].mean()
    exp = 0.084662
    assert np.isclose(obs, exp, atol=1e-4), 'The mean of N2 should be about 0.084662'
#Create test function for the mean of MX1027
#def test_mean2():
#The input of the function is the mean of MX1027
#where the output is the expected mean
#obs = MX1027.mean()
#exp= 0.127953
#assert obs == exp, ' The mean of MX1027 should be 0.127953'
Explanation: Interpretation of Histograms
Based on the histograms of the respective strains, it is clear that the data does not follow a normal distribution.
Therefore t-tests and linear regression cannot be applied to this data set as planned.
End of explanation |
5,580 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ApJdataFrames Rayner et al. 2009
Title
Step1: Table 7 - Strong metal lines in the Arcturus spectrum
Step2: This is a verbose way to do this, but whatever, it works
Step3: Finally
Step4: The lines drop off towards $K-$band.
Save the file | Python Code:
import warnings
warnings.filterwarnings("ignore")
from astropy.io import ascii
import pandas as pd
import seaborn as sns  # needed for the sns.distplot call further down
Explanation: ApJdataFrames Rayner et al. 2009
Title: THE INFRARED TELESCOPE FACILITY (IRTF) SPECTRAL LIBRARY: COOL STARS
Authors: John T. Rayner, Michael C. Cushing, and William D. Vacca
Data is from this paper:
http://iopscience.iop.org/article/10.1088/0067-0049/185/2/289/meta
End of explanation
#! curl http://iopscience.iop.org/0067-0049/185/2/289/suppdata/apjs311476t7_ascii.txt > ../data/Rayner2009/apjs311476t7_ascii.txt
#! head ../data/Rayner2009/apjs311476t7_ascii.txt
nn = ['wl1', 'id1', 'wl2', 'id2', 'wl3', 'id3', 'wl4', 'id4']
tbl7 = pd.read_csv("../data/Rayner2009/apjs311476t7_ascii.txt", index_col=False,
sep="\t", skiprows=[0,1,2,3], names= nn)
Explanation: Table 7 - Strong metal lines in the Arcturus spectrum
End of explanation
line_list_unsorted = pd.concat([tbl7[[nn[0], nn[1]]].rename(columns={"wl1":"wl", "id1":"id"}),
tbl7[[nn[2], nn[3]]].rename(columns={"wl2":"wl", "id2":"id"}),
tbl7[[nn[4], nn[5]]].rename(columns={"wl3":"wl", "id3":"id"}),
tbl7[[nn[6], nn[7]]].rename(columns={"wl4":"wl", "id4":"id"})], ignore_index=True, axis=0)
line_list = line_list_unsorted.sort_values('wl').dropna().reset_index(drop=True)
Explanation: This is a verbose way to do this, but whatever, it works:
End of explanation
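# A slightly terser equivalent of the concat above, kept only as an illustration;
# the name line_list_alt is arbitrary so it does not clash with anything else.
pairs = [tbl7[[nn[2 * i], nn[2 * i + 1]]].rename(columns={nn[2 * i]: 'wl', nn[2 * i + 1]: 'id'})
         for i in range(4)]
line_list_alt = pd.concat(pairs, ignore_index=True).sort_values('wl').dropna().reset_index(drop=True)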
#line_list.tail()
sns.distplot(line_list.wl)
Explanation: Finally:
End of explanation
line_list.to_csv('../data/Rayner2009/tbl7_clean.csv', index=False)
Explanation: The lines drop off towards $K-$band.
Save the file:
End of explanation |
5,581 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sebastian Raschka, 2015
Python Machine Learning Essentials
Chapter 10 - Predicting Continuous Target Variables with Regression Analysis
Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
Step1: <br>
<br>
Sections
Exploring the Housing dataset
Implementing a simple regression model - Ordinary least squares
Solving regression parameters with gradient descent
Estimating the coefficient of a regression model via scikit-learn
Fitting a robust regression model using RANSAC
Evaluating the performance of linear regression models
Using regularized methods for regression
Turning a linear regression model into a curve - Polynomial regression
Modeling nonlinear relationships in the Housing dataset
Dealing with nonlinear relationships using random forests
Decision tree regression
Random forest regression
<br>
<br>
Exploring the Housing dataset
[back to top]
Source
Step2: <br>
<br>
Implementing a simple regression model - Ordinary least squares
[back to top]
Solving regression parameters with gradient descent
[back to top]
Step3: <br>
<br>
Estimating the coefficient of a regression model via scikit-learn
[back to top]
Step4: <br>
<br>
Fitting a robust regression model using RANSAC
[back to top]
Step5: <br>
<br>
Evaluating the performance of linear regression models
[back to top]
Step6: <br>
<br>
Using regularized methods for regression
[back to top]
Step7: <br>
<br>
Turning a linear regression model into a curve - Polynomial regression
[back to top]
Step8: <br>
<br>
Modeling nonlinear relationships in the Housing dataset
[back to top]
Step9: Transforming the dataset
Step10: <br>
<br>
Dealing with nonlinear relationships using random forests
[back to top]
<br>
<br>
Decision tree regression
[back to top]
Step11: <br>
<br>
Random forest regression
[back to top] | Python Code:
%load_ext watermark
%watermark -a 'Sebastian Raschka' -u -d -v -p numpy,pandas,matplotlib,scikit-learn,seaborn
# to install watermark just uncomment the following line:
#%install_ext https://raw.githubusercontent.com/rasbt/watermark/master/watermark.py
Explanation: Sebastian Raschka, 2015
Python Machine Learning Essentials
Chapter 10 - Predicting Continuous Target Variables with Regression Analysis
Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
End of explanation
import pandas as pd
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/housing/housing.data',
header=None, sep='\s+')
df.columns = ['CRIM', 'ZN', 'INDUS', 'CHAS',
'NOX', 'RM', 'AGE', 'DIS', 'RAD',
'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV']
df.head()
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style='whitegrid', context='notebook')
cols = ['LSTAT', 'INDUS', 'NOX', 'RM', 'MEDV']
sns.pairplot(df[cols], size=2.5);
plt.tight_layout()
# plt.savefig('./figures/scatter.png', dpi=300)
plt.show()
import numpy as np
cm = np.corrcoef(df[cols].values.T)
sns.set(font_scale=1.5)
hm = sns.heatmap(cm,
cbar=True,
annot=True,
square=True,
fmt='.2f',
annot_kws={'size': 15},
yticklabels=cols,
xticklabels=cols)
plt.tight_layout()
# plt.savefig('./figures/corr_mat.png', dpi=300)
plt.show()
sns.reset_orig()
%matplotlib inline
Explanation: <br>
<br>
Sections
Exploring the Housing dataset
Implementing a simple regression model - Ordinary least squares
Solving regression parameters with gradient descent
Estimating the coefficient of a regression model via scikit-learn
Fitting a robust regression model using RANSAC
Evaluating the performance of linear regression models
Using regularized methods for regression
Turning a linear regression model into a curve - Polynomial regression
Modeling nonlinear relationships in the Housing dataset
Dealing with nonlinear relationships using random forests
Decision tree regression
Random forest regression
<br>
<br>
Exploring the Housing dataset
[back to top]
Source: https://archive.ics.uci.edu/ml/datasets/Housing
Attributes:
<pre>
1. CRIM per capita crime rate by town
2. ZN proportion of residential land zoned for lots over
25,000 sq.ft.
3. INDUS proportion of non-retail business acres per town
4. CHAS Charles River dummy variable (= 1 if tract bounds
river; 0 otherwise)
5. NOX nitric oxides concentration (parts per 10 million)
6. RM average number of rooms per dwelling
7. AGE proportion of owner-occupied units built prior to 1940
8. DIS weighted distances to five Boston employment centres
9. RAD index of accessibility to radial highways
10. TAX full-value property-tax rate per $10,000
11. PTRATIO pupil-teacher ratio by town
12. B 1000(Bk - 0.63)^2 where Bk is the proportion of blacks
by town
13. LSTAT % lower status of the population
14. MEDV Median value of owner-occupied homes in $1000's
</pre>
End of explanation
class LinearRegressionGD(object):
def __init__(self, eta=0.001, n_iter=20):
self.eta = eta
self.n_iter = n_iter
def fit(self, X, y):
self.w_ = np.zeros(1 + X.shape[1])
self.cost_ = []
for i in range(self.n_iter):
output = self.net_input(X)
errors = (y - output)
self.w_[1:] += self.eta * X.T.dot(errors)
self.w_[0] += self.eta * errors.sum()
cost = (errors**2).sum() / 2.0
self.cost_.append(cost)
return self
def net_input(self, X):
return np.dot(X, self.w_[1:]) + self.w_[0]
def predict(self, X):
return self.net_input(X)
X = df[['RM']].values
y = df['MEDV'].values
from sklearn.preprocessing import StandardScaler
sc_x = StandardScaler()
sc_y = StandardScaler()
X_std = sc_x.fit_transform(X)
y_std = sc_y.fit_transform(y)
lr = LinearRegressionGD()
lr.fit(X_std, y_std)
plt.plot(range(1, lr.n_iter+1), lr.cost_)
plt.ylabel('SSE')
plt.xlabel('Epoch')
plt.tight_layout()
plt.savefig('./figures/cost.png', dpi=300)
plt.show()
def lin_regplot(X, y, model):
plt.scatter(X, y, c='lightblue')
plt.plot(X, model.predict(X), color='red', linewidth=2)
return
lin_regplot(X_std, y_std, lr)
plt.xlabel('Average number of rooms [RM] (standardized)')
plt.ylabel('Price in $1000\'s [MEDV] (standardized)')
plt.tight_layout()
# plt.savefig('./figures/gradient_fit.png', dpi=300)
plt.show()
print('Slope: %.3f' % lr.w_[1])
print('Intercept: %.3f' % lr.w_[0])
num_rooms_std = sc_x.transform([5.0])
price_std = lr.predict(num_rooms_std)
print("Price in $1000's: %.3f" % sc_y.inverse_transform(price_std))
Explanation: <br>
<br>
Implementing a simple regression model - Ordinary least squares
[back to top]
Solving regression parameters with gradient descent
[back to top]
End of explanation
from sklearn.linear_model import LinearRegression
slr = LinearRegression()
slr.fit(X, y)
y_pred = slr.predict(X)
print('Slope: %.3f' % slr.coef_[0])
print('Intercept: %.3f' % slr.intercept_)
lin_regplot(X, y, slr)
plt.xlabel('Average number of rooms [RM]')
plt.ylabel('Price in $1000\'s [MEDV]')
plt.tight_layout()
# plt.savefig('./figures/scikit_lr_fit.png', dpi=300)
plt.show()
Explanation: <br>
<br>
Estimating the coefficient of a regression model via scikit-learn
[back to top]
End of explanation
from sklearn.linear_model import RANSACRegressor
ransac = RANSACRegressor(LinearRegression(),
max_trials=100,
min_samples=50,
residual_metric=lambda x: np.sum(np.abs(x), axis=1),
residual_threshold=5.0,
random_state=0)
ransac.fit(X, y)
inlier_mask = ransac.inlier_mask_
outlier_mask = np.logical_not(inlier_mask)
line_X = np.arange(3, 10, 1)
line_y_ransac = ransac.predict(line_X[:, np.newaxis])
plt.scatter(X[inlier_mask], y[inlier_mask], c='blue', marker='o', label='Inliers')
plt.scatter(X[outlier_mask], y[outlier_mask], c='lightgreen', marker='s', label='Outliers')
plt.plot(line_X, line_y_ransac, color='red')
plt.xlabel('Average number of rooms [RM]')
plt.ylabel('Price in $1000\'s [MEDV]')
plt.legend(loc='upper left')
plt.tight_layout()
# plt.savefig('./figures/ransac_fit.png', dpi=300)
plt.show()
print('Slope: %.3f' % ransac.estimator_.coef_[0])
print('Intercept: %.3f' % ransac.estimator_.intercept_)
Explanation: <br>
<br>
Fitting a robust regression model using RANSAC
[back to top]
End of explanation
from sklearn.cross_validation import train_test_split
X = df.iloc[:, :-1].values
y = df['MEDV'].values
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=0)
slr = LinearRegression()
slr.fit(X_train, y_train)
y_train_pred = slr.predict(X_train)
y_test_pred = slr.predict(X_test)
plt.scatter(y_train_pred, y_train_pred - y_train, c='blue', marker='o', label='Training data')
plt.scatter(y_test_pred, y_test_pred - y_test, c='lightgreen', marker='s', label='Test data')
plt.xlabel('Predicted values')
plt.ylabel('Residuals')
plt.legend(loc='upper left')
plt.hlines(y=0, xmin=-10, xmax=50, lw=2, color='red')
plt.xlim([-10, 50])
plt.tight_layout()
# plt.savefig('./figures/slr_residuals.png', dpi=300)
plt.show()
from sklearn.metrics import r2_score
from sklearn.metrics import mean_squared_error
print('MSE train: %.3f, test: %.3f' % (
mean_squared_error(y_train, y_train_pred),
mean_squared_error(y_test, y_test_pred)))
print('R^2 train: %.3f, test: %.3f' % (
r2_score(y_train, y_train_pred),
r2_score(y_test, y_test_pred)))
Explanation: <br>
<br>
Evaluating the performance of linear regression models
[back to top]
End of explanation
from sklearn.linear_model import Lasso
lasso = Lasso(alpha=0.1)
lasso.fit(X_train, y_train)
y_train_pred = lasso.predict(X_train)
y_test_pred = lasso.predict(X_test)
print(lasso.coef_)
print('MSE train: %.3f, test: %.3f' % (
mean_squared_error(y_train, y_train_pred),
mean_squared_error(y_test, y_test_pred)))
print('R^2 train: %.3f, test: %.3f' % (
r2_score(y_train, y_train_pred),
r2_score(y_test, y_test_pred)))
Explanation: <br>
<br>
Using regularized methods for regression
[back to top]
End of explanation
X = np.array([258.0, 270.0, 294.0,
320.0, 342.0, 368.0,
396.0, 446.0, 480.0, 586.0])[:, np.newaxis]
y = np.array([236.4, 234.4, 252.8,
298.6, 314.2, 342.2,
360.8, 368.0, 391.2,
390.8])
from sklearn.preprocessing import PolynomialFeatures
lr = LinearRegression()
pr = LinearRegression()
quadratic = PolynomialFeatures(degree=2)
X_quad = quadratic.fit_transform(X)
# fit linear features
lr.fit(X, y)
X_fit = np.arange(250,600,10)[:, np.newaxis]
y_lin_fit = lr.predict(X_fit)
# fit quadratic features
pr.fit(X_quad, y)
y_quad_fit = pr.predict(quadratic.fit_transform(X_fit))
# plot results
plt.scatter(X, y, label='training points')
plt.plot(X_fit, y_lin_fit, label='linear fit', linestyle='--')
plt.plot(X_fit, y_quad_fit, label='quadratic fit')
plt.legend(loc='upper left')
plt.tight_layout()
# plt.savefig('./figures/poly_example.png', dpi=300)
plt.show()
y_lin_pred = lr.predict(X)
y_quad_pred = pr.predict(X_quad)
print('Training MSE linear: %.3f, quadratic: %.3f' % (
mean_squared_error(y, y_lin_pred),
mean_squared_error(y, y_quad_pred)))
print('Training R^2 linear: %.3f, quadratic: %.3f' % (
r2_score(y, y_lin_pred),
r2_score(y, y_quad_pred)))
Explanation: <br>
<br>
Turning a linear regression model into a curve - Polynomial regression
[back to top]
End of explanation
X = df[['LSTAT']].values
y = df['MEDV'].values
regr = LinearRegression()
# create quadratic features
quadratic = PolynomialFeatures(degree=2)
cubic = PolynomialFeatures(degree=3)
X_quad = quadratic.fit_transform(X)
X_cubic = cubic.fit_transform(X)
# fit features
X_fit = np.arange(X.min(), X.max(), 1)[:, np.newaxis]
regr = regr.fit(X, y)
y_lin_fit = regr.predict(X_fit)
linear_r2 = r2_score(y, regr.predict(X))
regr = regr.fit(X_quad, y)
y_quad_fit = regr.predict(quadratic.fit_transform(X_fit))
quadratic_r2 = r2_score(y, regr.predict(X_quad))
regr = regr.fit(X_cubic, y)
y_cubic_fit = regr.predict(cubic.fit_transform(X_fit))
cubic_r2 = r2_score(y, regr.predict(X_cubic))
# plot results
plt.scatter(X, y, label='training points', color='lightgray')
plt.plot(X_fit, y_lin_fit,
label='linear (d=1), $R^2=%.2f$' % linear_r2,
color='blue',
lw=2,
linestyle=':')
plt.plot(X_fit, y_quad_fit,
label='quadratic (d=2), $R^2=%.2f$' % quadratic_r2,
color='red',
lw=2,
linestyle='-')
plt.plot(X_fit, y_cubic_fit,
label='cubic (d=3), $R^2=%.2f$' % cubic_r2,
color='green',
lw=2,
linestyle='--')
plt.xlabel('% lower status of the population [LSTAT]')
plt.ylabel('Price in $1000\'s [MEDV]')
plt.legend(loc='upper right')
plt.tight_layout()
# plt.savefig('./figures/polyhouse_example.png', dpi=300)
plt.show()
Explanation: <br>
<br>
Modeling nonlinear relationships in the Housing dataset
[back to top]
End of explanation
X = df[['LSTAT']].values
y = df['MEDV'].values
# transform features
X_log = np.log(X)
y_sqrt = np.sqrt(y)
# fit features
X_fit = np.arange(X_log.min()-1, X_log.max()+1, 1)[:, np.newaxis]
regr = regr.fit(X_log, y_sqrt)
y_lin_fit = regr.predict(X_fit)
linear_r2 = r2_score(y_sqrt, regr.predict(X_log))
# plot results
plt.scatter(X_log, y_sqrt, label='training points', color='lightgray')
plt.plot(X_fit, y_lin_fit,
label='linear (d=1), $R^2=%.2f$' % linear_r2,
color='blue',
lw=2)
plt.xlabel('log(% lower status of the population [LSTAT])')
plt.ylabel('$\sqrt{Price \; in \; \$1000\'s [MEDV]}$')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/transform_example.png', dpi=300)
plt.show()
Explanation: Transforming the dataset:
End of explanation
from sklearn.tree import DecisionTreeRegressor
X = df[['LSTAT']].values
y = df['MEDV'].values
tree = DecisionTreeRegressor(max_depth=3)
tree.fit(X, y)
sort_idx = X.flatten().argsort()
lin_regplot(X[sort_idx], y[sort_idx], tree)
plt.xlabel('% lower status of the population [LSTAT]')
plt.ylabel('Price in $1000\'s [MEDV]')
# plt.savefig('./figures/tree_regression.png', dpi=300)
plt.show()
Explanation: <br>
<br>
Dealing with nonlinear relationships using random forests
[back to top]
<br>
<br>
Decision tree regression
[back to top]
End of explanation
X = df.iloc[:, :-1].values
y = df['MEDV'].values
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.4, random_state=1)
from sklearn.ensemble import RandomForestRegressor
forest = RandomForestRegressor(n_estimators=1000,
criterion='mse',
random_state=1,
n_jobs=-1)
forest.fit(X_train, y_train)
y_train_pred = forest.predict(X_train)
y_test_pred = forest.predict(X_test)
print('MSE train: %.3f, test: %.3f' % (
mean_squared_error(y_train, y_train_pred),
mean_squared_error(y_test, y_test_pred)))
print('R^2 train: %.3f, test: %.3f' % (
r2_score(y_train, y_train_pred),
r2_score(y_test, y_test_pred)))
plt.scatter(y_train_pred,
y_train_pred - y_train,
c='black',
marker='o',
s=35,
alpha=0.5,
label='Training data')
plt.scatter(y_test_pred,
y_test_pred - y_test,
c='lightgreen',
marker='s',
s=35,
alpha=0.7,
label='Test data')
plt.xlabel('Predicted values')
plt.ylabel('Residuals')
plt.legend(loc='upper left')
plt.hlines(y=0, xmin=-10, xmax=50, lw=2, color='red')
plt.xlim([-10, 50])
plt.tight_layout()
# plt.savefig('./figures/slr_residuals.png', dpi=300)
plt.show()
Explanation: <br>
<br>
Random forest regression
[back to top]
End of explanation |
5,582 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
11. Machine Translation — Lab exercises
Preparations
Introduction
In this lab, we will be using Python Natural Language Toolkit (nltk) again to get to know the IBM models better. There are proper, open-source MT systems out there (such as Apertium and MOSES); however, getting to know them would require more than 90 minutes.
Infrastructure
For today's exercises, you will need the docker image again. Provided you have already downloaded it last time, you can start it by
Step1: Exercises
1. Corpus acquisition
We download and preprocess a subset of the Hunglish corpus. It consists of English-Hungarian translation pairs extracted from open-source software documentation. The sentences are already aligned, but it lacks word alignment.
1.1 Download
Download the corpus. The url is ftp
Step2: 1.3 Tokenization
The text is not tokenized. Use nltk's word_tokenize() function to tokenize the snippets. Also, lowercase them. You can do this in read_files() above if you wish, or in the code you write for 1.4 below.
Note
Step3: 1.4 Create the training corpus
The models we are going to try expect a list of nltk.translate.api.AlignedSent objects. Create a bitext variable that is a list of AlignedSent objects created from the preprocessed, tokenized corpus.
Note that AlignedSent also allows you to specify an alignment between the words in the two texts. Unfortunately (but not unexpectedly), the corpus doesn't have this information.
Step4: 2. IBM Models
NLTK implements IBM models 1-5. Unfortunately, the implementations don't provide the end-to-end machine translation systems, only their alignment models.
2.1 IBM Model 1
Train an IBM Model 1 alignment. We do it in a separate code block, so that we don't rerun it by accident – training even a simple model takes some time.
Step6: 2.2 Alignment conversion
While the model doesn't have a translate() function, it does provide a way to compute the translation probability $P(F|E)$ with some additional codework. That additional work is what you have to put in.
Remember that the formula for the translation probability is $P(F|E) = \sum_AP(F,A|E)$. Computing $P(F,A|E)$ is a bit hairy; luckily IBMModel1 has a method to calculate at least part of it
Step7: 2.3. Compute $P(F,A|E)$
Your task is to write a function that, given a source and a target sentence and an alignment, creates an AlignmentInfo an object and calls prob_t_a_given_s() of the model with it. The code here (test_prob_t_a_given_s()) might give you some clue as to how to construct the object.
Since prob_t_a_given_s() only computes $P(F|A,E)$, you have to add the $P(A|E)$ component. See page 38, slide 95 and page 39, side 100 in the lecture. What is $J$ and $K$ in the inverse setup?
Important
Step8: 2.4. Compute $P(F, A_{best}|E)$
Write a function that, given an AlignedSent object, computes $P(F,A|E)$. Since IBMModel1 aligns the sentences of the training set with the most probable alignment, this function will effectively compute $P(F,A_{best}|E)$.
Don't forget to convert the alignment with the function you wrote in Exercise 2.1. before passing it to prob_f_a_e().
Step9: 2.5. Compute $P(F|E)$
Write a function that, given an AlignedSent object, computes $P(F|E)$. It should enumerate all possible alignments (in the tuple format) and call the function you wrote in Exercise 2.2 with them.
Note
Step10: Test cases for Exercises 2.3 – 2.5.
Step11: 3. Phrase-based translation
NLTK also has some functions related to phrase-based translation, but these are all but finished. The components are scattered into two packages | Python Code:
import os
import shutil
import urllib
import nltk
def download_file(url, directory=''):
real_dir = os.path.realpath(directory)
if not os.path.isdir(real_dir):
os.makedirs(real_dir)
file_name = url.rsplit('/', 1)[-1]
real_file = os.path.join(real_dir, file_name)
if not os.path.isfile(real_file):
with urllib.request.urlopen(url) as inf:
with open(real_file, 'wb') as outf:
shutil.copyfileobj(inf, outf)
Explanation: 11. Machine Translation — Lab exercises
Preparations
Introduction
In this lab, we will be using Python Natural Language Toolkit (nltk) again to get to know the IBM models better. There are proper, open-source MT systems out there (such as Apertium and MOSES); however, getting to know them would require more than 90 minutes.
Infrastructure
For today's exercises, you will need the docker image again. Provided you have already downloaded it last time, you can start it by:
docker ps -a: lists all the containers you have created. Pick the one you used last time (with any luck, there is only one)
docker start <container id>
docker exec -it <container id> bash
When that's done, update your git repository:
bash
cd /nlp/python_nlp_2017_fall/
git pull
If git pull returns with errors, it is most likely because some of your files have changes in it (most likely the morphology or syntax notebooks, which you worked on the previous labs). You can check this with git status. If the culprit is the file A.ipynb, you can resolve this problem like so:
cp A.ipynb A_mine.ipynb
git checkout A.ipynb
After that, git pull should work.
And start the notebook:
jupyter notebook --port=8888 --ip=0.0.0.0 --no-browser --allow-root
If you started the notebook, but cannot access it in your browser, make sure jupyter is not running on the host system as well. If so, stop it.
Boilerplate
The following code imports the packages and defines the functions we are going to use.
End of explanation
def read_files(directory=''):
pass
Explanation: Exercises
1. Corpus acquisition
We download and preprocess a subset of the Hunglish corpus. It consists of English-Hungarian translation pairs extracted from open-source software documentation. The sentences are already aligned, but it lacks word alignment.
1.1 Download
Download the corpus. The url is ftp://ftp.mokk.bme.hu/Hunglish2/softwaredocs/bi/opensource_X.bi, where X is a number that ranges from 1 to 9. Use the download_file function defined above.
1.2 Conversion
Read the whole corpus (all files). Try not to read it all into memory. Write a function that
reads all files you have just downloaded
is a generator that yields tuples (Hungarian snippet, English snippet)
Note:
- the files are encoded with the iso-8859-2 (a.k.a. Latin-2) encoding
- the Hungarian and English snippets are separated by a tab
- don't forget to strip whitespace from the returned snippets
- throw away pairs with empty snippets
End of explanation
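# One possible sketch of read_files -- an illustration only, not the official solution.
# It assumes the Hungarian snippet comes first on each tab-separated line; swap the
# two fields if the downloaded files turn out to be ordered the other way around.
def read_files_sketch(directory=''):
    real_dir = os.path.realpath(directory)
    for file_name in sorted(os.listdir(real_dir)):
        if not file_name.startswith('opensource_'):
            continue
        with open(os.path.join(real_dir, file_name), encoding='iso-8859-2') as inf:
            for line in inf:
                fields = line.rstrip('\n').split('\t')
                if len(fields) != 2:
                    continue
                hu, en = fields[0].strip(), fields[1].strip()
                if hu and en:
                    yield hu, en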
from nltk.tokenize import sent_tokenize, word_tokenize
Explanation: 1.3 Tokenization
The text is not tokenized. Use nltk's word_tokenize() function to tokenize the snippets. Also, lowercase them. You can do this in read_files() above if you wish, or in the code you write for 1.4 below.
Note:
- The model for the sentence tokenizer (punkt) is not installed by default. You have to download() it.
- NLTK doesn't have Hungarian tokenizer models, so there might be errors in the Hungarian result
- instead of just lowercasing everything, we might have chosen a more sophisticated solution, e.g. by first calling sent_tokenize() and then just lowercase the word at the beginning of the sentence, or even better, tag the snippets for NER. However, we have neither the time nor the resources (models) to do that now.
End of explanation
from nltk.translate.api import AlignedSent
bitext = [] # Your code here
assert len(bitext) == 135439
Explanation: 1.4 Create the training corpus
The models we are going to try expect a list of nltk.translate.api.AlignedSent objects. Create a bitext variable that is a list of AlignedSent objects created from the preprocessed, tokenized corpus.
Note that AlignedSent also allows you to specify an alignment between the words in the two texts. Unfortunately (but not unexpectedly), the corpus doesn't have this information.
End of explanation
from nltk.translate import IBMModel1
ibm1 = IBMModel1(bitext, 5)
Explanation: 2. IBM Models
NLTK implements IBM models 1-5. Unfortunately, the implementations don't provide the end-to-end machine translation systems, only their alignment models.
2.1 IBM Model 1
Train an IBM Model 1 alignment. We do it in a separate code block, so that we don't rerun it by accident – training even a simple model takes some time.
End of explanation
from nltk.translate.ibm_model import AlignmentInfo
def alignment_to_info(alignment):
    """Converts from an Alignment object to the alignment format required by AlignmentInfo."""
pass
assert alignment_to_info([(0, 2), (1, 1), (2, 3)]) == (0, 3, 2, 4)
Explanation: 2.2 Alignment conversion
While the model doesn't have a translate() function, it does provide a way to compute the translation probability $P(F|E)$ with some additional codework. That additional work is what you have to put in.
Remember that the formula for the translation probability is $P(F|E) = \sum_A P(F,A|E)$. Computing $P(F,A|E)$ is a bit hairy; luckily IBMModel1 has a method to calculate at least part of it: prob_t_a_given_s(). This function accepts an AlignmentInfo object that contains the source and target sentences as well as the alignment between them.
Unfortunately, AlignmentInfo's representation of an alignment is completely different from the Alignment object's. Your first task is to do the conversion from the latter to the former. Given the example pair John loves Mary / De szereti János Marcsit,
- Alignment is basically a list of source-target, 0-based index pairs, [(0, 2), (1, 1), (2, 3)]
- The alignment in the AlignmentInfo objects is a tuple (!), where the ith position is the index of the target word that is aligned to the ith source word, or 0, if the ith source word is unaligned. Indices are 1-based, because the 0th word is NULL on both sides (see lecture page 35, slide 82). The tuple you return must also contain the alignment for this NULL word, which is not aligned with the NULL on the other side - in other words, the returned tuple starts with a 0. Example: (0, 3, 2, 4). If multiple target words are aligned with the same source word, you are free to use the index of any of them.
End of explanation
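# One way the conversion can look -- purely an illustration, under the hypothetical
# name alignment_to_info_sketch so it does not overwrite the exercise stub above.
# It assumes every source word occurs in the alignment; otherwise the source length
# would have to be passed in separately.
def alignment_to_info_sketch(alignment):
    n_src = max(src for src, _ in alignment) + 1
    result = [0] * (n_src + 1)       # position 0 is the NULL word and stays 0
    for src, tgt in alignment:
        result[src + 1] = tgt + 1    # shift both indices to the 1-based scheme
    return tuple(result)

assert alignment_to_info_sketch([(0, 2), (1, 1), (2, 3)]) == (0, 3, 2, 4)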
def prob_f_a_e(model, src_sentence, tgt_sentence, alig_in_tuple_format):
pass
Explanation: 2.3. Compute $P(F,A|E)$
Your task is to write a function that, given a source and a target sentence and an alignment, creates an AlignmentInfo object and calls prob_t_a_given_s() of the model with it. The code here (test_prob_t_a_given_s()) might give you some clue as to how to construct the object.
Since prob_t_a_given_s() only computes $P(F|A,E)$, you have to add the $P(A|E)$ component. See page 38, slide 95 and page 39, slide 100 in the lecture. What are $J$ and $K$ in the inverse setup?
Important: "interestingly", prob_t_a_given_s() translates from target to source. However, you still want to translate from source to target, so take care when filling the fields of the AlignmentInfo object.
Also note:
1. the alignment you pass to the function should already be in the right (AlignmentInfo) format. Don't bother converting it for now!
1. Test cases for Exercises 2.3 – 2.5 are available below Exercise 2.5.
End of explanation
def prob_best_a(model, aligned_sent):
pass
Explanation: 2.4. Compute $P(F, A_{best}|E)$
Write a function that, given an AlignedSent object, computes $P(F,A|E)$. Since IBMModel1 aligns the sentences of the training set with the most probable alignment, this function will effectively compute $P(F,A_{best}|E)$.
Don't forget to convert the alignment with the function you wrote in Exercise 2.2 before passing it to prob_f_a_e().
End of explanation
def prob_f_e(model, aligned_sent):
pass
Explanation: 2.5. Compute $P(F|E)$
Write a function that, given an AlignedSent object, computes $P(F|E)$. It should enumerate all possible alignments (in the tuple format) and call the function you wrote in Exercise 2.3 with them.
Note: the itertools.product function can be very useful in enumerating the alignments.
End of explanation
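# A small illustration of the itertools.product hint: it enumerates every candidate
# alignment tuple of the required length, with a leading 0 for the NULL word.
# The lengths 2 and 3 below are arbitrary toy values.
from itertools import product

src_len, tgt_len = 2, 3
all_alignments = [(0,) + tail for tail in product(range(tgt_len + 1), repeat=src_len)]
print(len(all_alignments), all_alignments[:4])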
import numpy
testext = [
AlignedSent(['klein', 'ist', 'das', 'haus'], ['the', 'house', 'is', 'small']),
AlignedSent(['das', 'haus', 'ist', 'ja', 'groß'], ['the', 'house', 'is', 'big']),
AlignedSent(['das', 'buch', 'ist', 'ja', 'klein'], ['the', 'book', 'is', 'small']),
AlignedSent(['das', 'haus'], ['the', 'house']),
AlignedSent(['das', 'buch'], ['the', 'book']),
AlignedSent(['ein', 'buch'], ['a', 'book'])
]
ibm2 = IBMModel1(testext, 5)
# Tests for Exercise 2.3
assert numpy.allclose(prob_f_a_e(ibm2, ['ein', 'buch'], ['a', 'book'], (0, 1, 2)), 0.08283000979778607)
assert numpy.allclose(prob_f_a_e(ibm2, ['ein', 'buch'], ['a', 'book'], (0, 2, 1)), 0.0002015158225914316)
# Tests for Exercise 2.4
assert numpy.allclose(prob_best_a(ibm2, testext[4]), 0.059443309368677)
assert numpy.allclose(prob_best_a(ibm2, testext[2]), 1.3593610057711997e-05)
# Tests for Exercise 2.5
assert numpy.allclose(prob_f_e(ibm2, testext[4]), 0.13718805082588842)
assert numpy.allclose(prob_f_e(ibm2, testext[2]), 0.0001809283308942621)
Explanation: Test cases for Exercises 2.3 – 2.5.
End of explanation
from collections import defaultdict
from math import log
from nltk.translate import PhraseTable
from nltk.translate.stack_decoder import StackDecoder
# The (probabilistic) phrase table
phrase_table = PhraseTable()
phrase_table.add(('niemand',), ('nobody',), log(0.8))
phrase_table.add(('niemand',), ('no', 'one'), log(0.2))
phrase_table.add(('erwartet',), ('expects',), log(0.8))
phrase_table.add(('erwartet',), ('expecting',), log(0.2))
phrase_table.add(('niemand', 'erwartet'), ('one', 'does', 'not', 'expect'), log(0.1))
phrase_table.add(('die', 'spanische', 'inquisition'), ('the', 'spanish', 'inquisition'), log(0.8))
phrase_table.add(('!',), ('!',), log(0.8))
# The "language model"
language_prob = defaultdict(lambda: -999.0)
language_prob[('nobody',)] = log(0.5)
language_prob[('expects',)] = log(0.4)
language_prob[('the', 'spanish', 'inquisition')] = log(0.2)
language_prob[('!',)] = log(0.1)
# Note: type() with three parameters creates a new type object
language_model = type('',(object,), {'probability_change': lambda self, context, phrase: language_prob[phrase],
'probability': lambda self, phrase: language_prob[phrase]})()
stack_decoder = StackDecoder(phrase_table, language_model)
stack_decoder.translate(['niemand', 'erwartet', 'die', 'spanische', 'inquisition', '!'])
Explanation: 3. Phrase-based translation
NLTK also has some functions related to phrase-based translation, but these are anything but finished. The components are scattered into two packages:
- phrase_based defines the function phrase_extraction() that can extract phrases from parallel text, based on an alignment
- stack_decoder defines the StackDecoder object, which can be used to translate sentences based on a phrase table and a language model
3.1. Decoding example
If you are wondering where the rest of the training functionality is, you spotted the problem: unfortunately, the part that assembles the phrase table based on the extracted phrases is missing. Also missing are the classes that represent and compute a language model. So in the code block below, we only run the decoder on an example sentence with a "hand-crafted" model.
Note: This is the same code as in the documentation of the decoder (above).
End of explanation |
5,583 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Load-signals" data-toc-modified-id="Load-signals-1"><span class="toc-item-num">1 </span>Load signals</a></span></li><li><span><a href="#Compute-the-TNR-and-PR-of-stationary-signal" data-toc-modified-id="Compute-the-TNR-and-PR-of-stationary-signal-2"><span class="toc-item-num">2 </span>Compute the TNR and PR of stationary signal</a></span></li><li><span><a href="#Compute-TNR-and-PR-of-time-varying-signal" data-toc-modified-id="Compute-TNR-and-PR-of-time-varying-signal-3"><span class="toc-item-num">3 </span>Compute TNR and PR of time varying signal</a></span></li><li><span><a href="#Compute-TNR-and-PR-from-spectrum" data-toc-modified-id="Compute-TNR-and-PR-from-spectrum-4"><span class="toc-item-num">4 </span>Compute TNR and PR from spectrum</a></span></li></ul></div>
How to compute tone-to-noise and prominence ratios according to ECMA method
This tutorial explains how to use MOSQITO to compute the TNR and PR, indicating the prominence of tonal components in a signal. For more information on the implementation of the metric, you can refer to the documentation
The following commands are used to import the necessary functions.
Step1: Load signals
In this tutorial, the signals are imported from .wav files. The tutorial Audio signal basic operations gives more information about the syntax of the import and the other supported file types. You can use any .wav file to perform the tutorial or you can download the time varying signal and the stationary signal from MOSQITO that are used in the following.
Step2: Compute the TNR and PR of stationary signal
The test signal consists of 2 tones at 442 Hz and 1768 Hz added to white noise at 80 dB. The commands below are used to plot its 24th octave band spectrum.
Step3: The spectrum obviously shows a high-level tonal component.
To study this tonal component, the functions "tnr_ecma_st" and "pr_ecma_st" are then used with 3 parameters
Step4: The results can also be plotted, with the prominence limit to indicate whether the tones are prominent or not.
Step5: Compute TNR and PR of time varying signal
The commands below show how to compute the TNR and PR of a time varying signal using the functions from MOSQITO. Only the TNR results are plotted, but the PR plots can be obtained in exactly the same way, just by replacing "tnr" with "pr".
Step6: Compute TNR and PR from spectrum
The commands below show how to compute the TNR and PR from a frequency spectrum, either in complex or amplitude values, using the functions from MOSQITO.
The input spectrum can be either 1D with size (Nfrequency) or 2D with size (Nfrequency x Ntime). The corresponding frequency axis can be either the same for all the spectra, with size (Nfrequency), or different for each spectrum, with size (Nfrequency x Ntime).
Step7: | Python Code:
# Add MOSQITO to the Python path
import sys
sys.path.append('..')
# Import numpy
import numpy as np
# Import plot function
import matplotlib.pyplot as plt
# Import multiple spectrum computation tool
from scipy.signal import stft
# Import mosqito functions
from mosqito.utils import load
from mosqito.sound_level_meter import noct_spectrum
from mosqito.sq_metrics import pr_ecma_st, pr_ecma_tv, pr_ecma_freq
from mosqito.sq_metrics import tnr_ecma_st, tnr_ecma_tv, tnr_ecma_freq
# Import MOSQITO color sheme [Optional]
from mosqito import COLORS
# To get inline plots (specific to Jupyter notebook)
%matplotlib notebook
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Load-signals" data-toc-modified-id="Load-signals-1"><span class="toc-item-num">1 </span>Load signals</a></span></li><li><span><a href="#Compute-the-TNR-and-PR-of-stationary-signal" data-toc-modified-id="Compute-the-TNR-and-PR-of-stationary-signal-2"><span class="toc-item-num">2 </span>Compute the TNR and PR of stationary signal</a></span></li><li><span><a href="#Compute-TNR-and-PR-of-time-varying-signal" data-toc-modified-id="Compute-TNR-and-PR-of-time-varying-signal-3"><span class="toc-item-num">3 </span>Compute TNR and PR of time varying signal</a></span></li><li><span><a href="#Compute-TNR-and-PR-from-spectrum" data-toc-modified-id="Compute-TNR-and-PR-from-spectrum-4"><span class="toc-item-num">4 </span>Compute TNR and PR from spectrum</a></span></li></ul></div>
How to compute tone-to-noise and prominence ratios according to ECMA method
This tutorial explains how to use MOSQITO to compute the TNR and PR, indicating the prominence of tonal components in a signal. For more information on the implementation of the metric, you can refer to the documentation
The following commands are used to import the necessary functions.
End of explanation
# Define path to the .wav file
# To be replaced by your own path
path_st = "../tests/input/white_noise_442_1768_Hz_stationary.wav"
# load signal
sig_st, fs_st = load(path_st, wav_calib=0.01)
# plot signal
t_st = np.linspace(0, (len(sig_st) - 1) / fs_st, len(sig_st))
plt.figure(1)
plt.plot(t_st, sig_st, color=COLORS[0])
plt.xlabel('Time [s]')
plt.ylabel('Acoustic pressure [Pa]')
plt.title('Stationary signal')
# Define path to the .wav file
# To be replaced by your own path
path_tv = "../tests/input/white_noise_442_1768_Hz_varying.wav"
# load signal
sig_tv, fs_tv = load(path_tv, wav_calib=0.01)
# plot signal
t_tv = np.linspace(0, (len(sig_tv) - 1) / fs_tv, len(sig_tv))
plt.figure(2)
plt.plot(t_tv, sig_tv, color=COLORS[1])
plt.xlabel('Time [s]')
plt.ylabel('Acoustic pressure [Pa]')
plt.title('Time varying signal')
Explanation: Load signals
In this tutorial, the signals are imported from .wav files. The tutorial Audio signal basic operations gives more information about the syntax of the import and the other supported file types. You can use any .wav file to perform the tutorial or you can download the time varying signal and the stationary signal from MOSQITO that are used in the following.
End of explanation
# Compute spectrum
plt.figure(3)
spectrum, freqs = noct_spectrum(sig_st, fs_st, fmin=24, fmax=12600, n=24)
plt.semilogx(freqs, 20*np.log10(spectrum/2e-5), color=COLORS[0])
plt.xlabel("Frequency [Hz]")
plt.ylabel("Amplitude [dB re. 2e-5Pa]")
Explanation: Compute the TNR and PR of stationary signal
The test signal consists of 2 tones at 442 Hz and 1768 Hz added to white noise at 80 dB. The commands below are used to plot its 24th octave band spectrum.
End of explanation
# Tone-to-noise Ratio calculation
t_tnr, tnr, prom, tones_freqs = tnr_ecma_st(sig_st, fs_st, prominence=True)
# Prominence Ratio calculation
t_pr, pr, prom, tones_freq = pr_ecma_st(sig_st, fs_st, prominence=True)
# print the results
print("Tone-to-noise ratio: " , tnr , " dB at ", tones_freqs, " Hz")
print("T-TNR: ", t_tnr," dB")
print("Prominence ratio: " , pr , " dB at ", tones_freq, " Hz")
print("T-PR:", t_pr,"dB")
Explanation: The spectrum obviously shows a high-level tonal component.
To study this tonal component, the functions "tnr_ecma_st" and "pr_ecma_st" are then used with 3 parameters:
The signal values,
The sampling frequency,
The prominence criteria (if True the algorithm only returns the prominent values according to ECMA 74)
The commands below show how to compute the TNR and PR of a steady signal using the functions from MOSQITO. The script computes the ratios in dB and returns both the total and individual values (with the corresponding tonal frequencies). There is no need to enter the frequencies of the potential tonal components; the algorithm detects them automatically according to Sottek's method.
End of explanation
# Prominence criteria
freqs = np.arange(90,11200,100)
limit = np.zeros((len(freqs)))
for i in range(len(freqs)):
if freqs[i] >= 89.1 and freqs[i] < 1000:
limit[i] = 8 + 8.33 * np.log10(1000/freqs[i])
if freqs[i] >= 1000 and freqs[i] < 11200:
limit[i] = 8
# Plot
plt.figure(figsize=(10,6))
plt.plot(freqs, limit, color='#e69f00', linewidth=2,dashes=[6,2],label='Prominence criteria')
plt.bar(tones_freqs, tnr,width=10, color='#69c3c5')
plt.legend(fontsize=16)
plt.grid(axis='y')
plt.ylabel("TNR [dB]")
plt.title("Discrete tones TNR values \n (Total TNR = "+str(np.around(t_tnr, decimals=1))+" dB)", fontsize=16)
plt.legend()
# Frequency axis
plt.xlabel("Frequency [Hz]")
plt.xscale('log')
xticks_pos = [100,1000,10000] + list(tones_freqs)
xticks_pos = np.sort(xticks_pos)
xticks_label = [str(elem) for elem in xticks_pos]
plt.xticks(xticks_pos, labels=xticks_label, rotation = 30)
Explanation: The results can also be plotted, with the prominence limit to indicate whether the tones are prominent or not.
End of explanation
# TNR calculation
t_tnr, tnr, promi, freqs, time = tnr_ecma_tv(sig_tv, fs_tv, prominence=False, overlap=0)
# PR calculation
t_pr, pr, promi, freqs, time = pr_ecma_tv(sig_tv, fs_tv, prominence=False, overlap=0)
# Plot
plt.figure(figsize=(10,8))
plt.pcolormesh(np.nan_to_num(tnr), vmin=0)
plt.colorbar(label = "TNR value in dB")
plt.title("Tone-to-noise ratio (all discrete tones)", fontsize=14)
plt.xlabel("Time [s]")
plt.ylabel("Frequency [Hz]")
freq_axis = np.logspace(np.log10(90),np.log10(11200),num=1000)
freq_labels = [90,200,500,1000,2000,5000,10000]
freq_ticks = []
for i in range(len(freq_labels)):
freq_ticks.append(np.argmin(np.abs(freq_axis - freq_labels[i])))
plt.yticks(freq_ticks, labels=[str(elem) for elem in freq_labels])
nb_frame = np.floor(sig_tv.size / (0.5 * fs_tv))  # number of 0.5 s frames
plt.xticks([0,1,2,3,4,5,6],labels=["0","0.5","1","1.5","2","2.5","3"])
Explanation: Compute TNR and PR of time varying signal
The commands below show how to compute the TNR and PR of a time varying signal using the functions from MOSQITO. Only the TNR results are plotted, but the PR plots can be obtained in exactly the same way, just by replacing "tnr" with "pr".
End of explanation
# Compute multiple spectra along time
f, t, spectrum = stft(sig_tv, fs=fs_tv)
# TNR
t_tnr, tnr, promi_tnr, freqs_tnr = tnr_ecma_freq(spectrum, f, prominence=False)
# PR
t_pr, pr, promi_pr, freqs_pr = pr_ecma_freq(spectrum, f, prominence=False)
Explanation: Compute TNR and PR from spectrum
The commands below show how to compute the TNR and PR from a frequency spectrum, either in complex or amplitude values, using the functions from MOSQITO.
The input spectrum can be either 1D with size (Nfrequency) or 2D with size (Nfrequency x Ntime). The corresponding frequency axis can be either the same for all the spectra, with size (Nfrequency), or different for each spectrum, with size (Nfrequency x Ntime).
End of explanation
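# For the 1-D case mentioned above, a single spectrum plus its frequency axis works
# the same way. Here one time frame of the STFT computed earlier is reused; the *_1d
# names are only for this illustration.
spec_1d = spectrum[:, 0]
t_tnr_1d, tnr_1d, promi_1d, freqs_1d = tnr_ecma_freq(spec_1d, f, prominence=False)
t_pr_1d, pr_1d, promi_pr_1d, freqs_pr_1d = pr_ecma_freq(spec_1d, f, prominence=False)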
from datetime import date
print("Tutorial generation date:", date.today().strftime("%B %d, %Y"))
Explanation:
End of explanation |
5,584 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 1
An introduction to the Jupyter Notebook and some practice with probability ideas from Chapter 1.
1.1 Probability
1.1.1 Moments of Measured Data
The Jupyter Notebook has two primary types of cells "Markdown" cells for text (like this one) and "Code" cells for running python code. The cell below this one is a code cell that loads the plotting functions into the plt namespace and loads several functions from the numpy library. The last line requests that all plots show up inline in the notebook (instead of in other windows or as files on your computer).
Step1: Arrays of data, and their average and sum in Python. We use some definitions from numpy. Notice the way these operators can be applied to arrays (with the "." operator).
Step2: The formal definition (and to make sure we match with the book) is to take the sum and divide by the number of items in the sample
Step3: Higher order moments
Operate on the sample, python does this element-by-element, then do the same thing as above. You may be surprised that to raise a power is "**" instead of "^". This is a difference between python and other languages, just something to keep in mind!
Step4: It works for functions too
sin(x) will find the sin of each element in the array x
Step5: Variance
Step6: We can test if two quantities are equal with the == operator. Not the same as = since the = is an assignment operator. This will trip you up if you are new to programming, but you'll get over it.
Step7: Example 1.1
Step8: Close enough!
1.1.2 Probability
Example 1.2
This is an illustration of how to implement the histogram from Example 1.2 in the text. Note the use of setting the number of bins. The hist command will pick for you, and you should try other values to see the impact. There is no one correct value, but too many bins don't illustrate clusters of data, and too few bins tend to oversimplify the data.
Step9: The hist function has several possible arguments, we use bins=7 to match the example.
Step10: Both of these results are close to the previous value, but not exact. Remember, the histogram is a representation of the data and the agreement will improve for larger data sets.
1.2 Linear Algebra
1.2.1 Vectors and Basis sets
We'll use the qutip library later, even though this is all just linear algebra. For now, try using standard-python for vector math
Step11: The dot function properly computes the dot product that we know and love from workshop physics | Python Code:
import matplotlib.pyplot as plt
from numpy import array, sin, sqrt, dot, outer
%matplotlib inline
Explanation: Chapter 1
An introduction to the Jupyter Notebook and some practice with probability ideas from Chapter 1.
1.1 Probability
1.1.1 Moments of Measured Data
The Jupyter Notebook has two primary types of cells "Markdown" cells for text (like this one) and "Code" cells for running python code. The cell below this one is a code cell that loads the plotting functions into the plt namespace and loads several functions from the numpy library. The last line requests that all plots show up inline in the notebook (instead of in other windows or as files on your computer).
End of explanation
x = array([1,2,3])
x.sum()
x.mean()
Explanation: Arrays of data, and their average and sum in Python. We use some definitions from numpy. Notice the way these operators can be applied to arrays (with the "." operator).
End of explanation
x.sum()/len(x)
Explanation: The formal definition (and to make sure we match with the book) is to take the sum and divide by the number of items in the sample:
End of explanation
x**2
(x**2).sum() # Note the parenthesis!
(x**2).sum()/len(x)
(x**2).mean()
Explanation: Higher order moments
Operate on the sample, python does this element-by-element, then do the same thing as above. You may be surprised that to raise a power is "**" instead of "^". This is a difference between python and other languages, just something to keep in mind!
End of explanation
sin(x)
sin(x).sum()/len(x)
sin(x).mean()
Explanation: It works for functions too
sin(x) will find the sin of each element in the array x:
End of explanation
x.var() # Variance
x.std() # Standard Deviation
Explanation: Variance
End of explanation
x.std()**2 == x.var() # Related by a square root
Explanation: We can test if two quantities are equal with the == operator. Not the same as = since the = is an assignment operator. This will trip you up if you are new to programming, but you'll get over it.
End of explanation
x_m = array([9,5,25,23,10,22,8,8,21,20])
x_m.mean()
(x_m**2).sum()/len(x_m)
sqrt(281.3 - (15.1)**2)
x_m.std()
sqrt(281.3 - (15.1)**2) == x_m.std()
Explanation: Example 1.1
End of explanation
n, bins, patches = plt.hist(x_m,bins=7)
Explanation: Close enough!
1.1.2 Probability
Example 1.2
This is an illustration of how to implement the histogram from Example 1.2 in the text. Note the use of setting the number of bins. The hist command will pick for you, and you should try other values to see the impact. There is no one correct value, but too many bins don't illustrate clusters of data, and too few bins tend to oversimplify the data.
End of explanation
# an array of the counts in each bin:
n
n/10.0*array([6,9,12,15,18,21,24]) # counts times each bin-center value
# sum of the last cell should be the mean:
sum(_)
n/10.0*array([6,9,12,15,18,21,24])**2 # counts times each bin-center value
# sum of the last cell should be the second moment:
sum(_)
Explanation: The hist function has several possible arguments, we use bins=7 to match the example.
End of explanation
rvec = array([1,2]) # A row vector
rvec
cvec = array([[1],[2]]) # A column vector
cvec
cvec*rvec # Actually the outer product:
rvec*cvec # still the outer product... so this simple `*` doesn't respect the rules of linear algebra!
Explanation: Both of these results are close to the previous value, but not exact. Remember, the histogram is a representation of the data and the agreement will improve for larger data sets.
1.2 Linear Algebra
1.2.1 Vectors and Basis sets
We'll use the qutip library later, even though this is all just linear algebra. For now, try using standard-python for vector math:
End of explanation
dot(rvec,cvec)
outer(cvec,rvec)
dot(cvec,rvec) # This doesn't work, because `dot` knows what shape the vectors should be
Explanation: The dot function properly computes the dot product that we know and love from workshop physics:
End of explanation |
5,585 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Real World Considerations for the Lomb-Scargle Periodogram
Version 0.2
By AA Miller (Northwestern/CIERA)
23 Sep 2021
In Lecture III we built the software necessary to estimate the power spectrum via the Lomb-Scargle periodogram.
We also discovered that LS is somewhat slow. We will now leverage the faster implementation in astropy, while exploring some specific challenges related to real astrophysical light curves.
The helper functions from Lecture III are recreated at the end of this notebook - execute those cells and then the cell below to recreate the simulated data from Lecture III.
Step1: Problem 1) Other Considerations and Faster Implementations
While our "home-grown" ls_periodogram works, it would take a loooooong time to evaluate $\sim4\times 10^5$ frequencies for $\sim2\times 10^7$ variable LSST sources. (as is often the case...) astropy to the rescue!
Problem 1a
LombScargle in astropy.timeseries is fast. Run it below to compare to ls_periodogram.
Step2: Unlike ls_periodogram, LombScargle effectively takes no time to run on the simulated data.
Problem 1b
Plot the periodogram for the simulated data.
Step3: There are many choices regarding the calculation of the periodogram, so read the docs.
Floating Mean Periodogram
A basic assumption that we previously made is that the data are "centered" - in other words, our model explicitly assumes that the signal oscillates about a mean of 0.
For astronomical applications, this assumption can be harmful. Instead, it is useful to fit for the mean of the signal in addition to the periodic component (as is the default in LombScargle)
Step4: We can see that the best fit model doesn't match the signal in the case where we do not allow a floating mean.
Step5: Window Functions
Recall that the convolution theorem tells us that
Step6: Problem 1e
Calculate and plot the periodogram for the window function (i.e., set y = 1 in LombScargle) of the observations. Do you notice any significant power?
Hint - you may need to zoom in on the plot to see all the relevant features.
Step7: Interestingly, there are very strong peaks in the data at $P \approx 3\,\mathrm{d} \;\&\; 365\,\mathrm{d}$.
What is this telling us? Essentially that observations are likely to be repeated at intervals of 3 or 365 days (shorter period spikes are aliases of the 3 d peak).
This is important to understand, however, because this same power will be present in the periodogram where we search for the periodic signal.
Problem 1f
Calculate the periodogram for the data and compare it to the periodogram for the window function.
Step8: Uncertainty on the best-fit period
How do we report uncertainties on the best-fit period from LS? For example, for the previously simulated LSST light curve we would want to report something like $P = 102 \pm 4\,\mathrm{d}$. However, the uncertainty from LS periodograms cannot be determined in this way.
Naively, one could report the width of the peak in the periodogram as the uncertainty in the fit. However, we previously saw that the peak width $\propto 1/T$ (the peak width does not decrease as the number of observations or their S/N increases; see Vander Plas 2017). Reporting such an uncertainty is particularly ridiculous for long duration surveys, whereby the peaks become very very narrow.
An alternative approach is to report the False Alarm Probability (FAP), which estimates the probability that a dataset with no periodic signal could produce a peak of similar magnitude, due to random gaussian fluctuations, as the data.
There are a few different methods to calculate the FAP. Perhaps the most useful, however, is the bootstrap method. To obtain a bootstrap estimate of the LS FAP one leaves the observation times fixed, and then draws new observation values with replacement from the actual set of observations. This procedure is then repeated many times to determine the FAP.
One nice advantage of this procedure is that any effects due to the window function will be imprinted in each iteration of the bootstrap resampling.
The major disadvantage is that many, many periodograms must be calculated. The rule of thumb is that to achieve a FAP $= p_\mathrm{false}$, one must run $n_\mathrm{boot} \approx 10/p_\mathrm{false}$ bootstrap periodogram calculations. Thus, an FAP $\approx 0.1\%$ requires an increase of 1000 in computational time.
LombScargle provides the false_alarm_probability method, including a bootstrap option. We skip that for now in the interest of time.
As a final note of caution - be wary of over-interpreting the FAP. The specific question answered by the FAP is: what is the probability that Gaussian fluctuations could produce a signal of equivalent magnitude? Whereas, the question we generally want to answer is
Step9: Problem 2b
Use LombScargle to measure the periodogram. Then plot the periodogram and the phase folded light curve at the best-fit period.
Hint - search periods longer than 2 hr.
Step10: Problem 2c
Now plot the light curve folded at twice the best LS period.
Which of these 2 is better?
Step11: Herein lies a fundamental issue regarding the LS periodogram
Step12: One way to combat this very specific issue is to include more Fourier terms at the harmonic of the best fit period. This is easy to implement in LombScargle with the nterms keyword. [Though always be weary of adding degrees of freedom to a model, especially at the large pipeline level of analysis.]
Problem 2e
Calculate the LS periodogram for the eclipsing binary, with nterms = 1, 2, 3, 4, 5. Report the best-fit period for each of these models.
Hint - we have good reason to believe that the best fit frequency is < 3 in this case, so set maximum_frequency = 3.
Step13: Interestingly, for the $n=2, 3, 4$ harmonics, it appears as though we get the period that we have visually confirmed. However, by $n=5$ harmonics we no longer get a reasonable answer. Again - be very careful about adding harmonics, especially in large analysis pipelines.
Problem 2f
Plot the the $n = 4$ model on top of the light curve folded at the correct period.
Step14: This example also shows why it is somewhat strange to provide an uncertainty with a LS best-fit period. Errors tend to be catastrophic, and not some small fractional percentage, with the LS periodogram.
In the case of the above EB, the "best-fit" period was off by a factor 2. This is not isolated to EBs, however, LS periodograms frequently identify a correct harmonic of the true period, but not the actual period of variability.
Problem 3) The "Real" World
Problem 3a
Re-write gen_periodic_data to create periodic signals using the first 4 harmonics in a Fourier series. The $n > 1$ harmonics should have random phase offsets, and the amplitude of the $n > 1$ harmonics should be drawn randomly from a uniform distribution between 0 and amplitude, the amplitude of the first harmonic.
Step15: Problem 3b
Confirm the updated version of gen_periodic_data works by creating a phase plot for a simulated signal with amplitude = 4, period = 1.234, and noise=0.81, and 100 observations obtained on a regular grid from 0 to 50.
Step16: Problem 3c
Simulate 1000 "realistic" light curves using the astronomical cadence from 1d for a full survey duration of 2 years.
For each light curve draw the period randomly from [0.2, 10], and the amplitude randomly from [1, 5], and the noise randomly from [1,2].
Record the period in an array true_p and estimate the period for the simulated data via LS and store the result in an array ls_p.
Step17: Problem 3d
Plot the LS recovered period vs. the true period for the simulated sources.
Do you notice anything interesting? Do you manage to recover the correct period most of the time?
Step18: Problem 3e
For how many of the simulated sources do you recover the correct period? Consider a period estimate "correct" if the LS estimate is within 10% of the true period.
Step19: The results are a bit disappointing. However, it is also clear that there is strong structure in the plot off the 1
Step20: What in the...
We see that these lines account for a lot of the off-diagonal structure in this plot.
In this case, what is happening is that the true frequency of the signal $f_\mathrm{true}$ is being aliased by the window function and its harmonics. In other words LS is pulling out $f_\mathrm{true} + n\delta{f}$, where $n$ is an integer and $\delta{f}$ is the observational cadence $3\,\mathrm{d}$. Many of the false positives can be explained via the window function.
Similarly, LS might be recovering higher-order harmonics of the true period since we aren't trying to recover pure sinusoidal signals in this simulation. These harmonics would also be aliased by the window function, so LS will pull out $f_\mathrm{true}/m + n\delta{f}$, where $m$ is a positive integer.
Problem 3g
Recreate the plot in 3d and overplot lines for the $m = 2$ harmonic aliased with $n = -1, 1, 2$.
Hint - only plot the values where $P_\mathrm{LS} > 0$ since, by definition, we do not search for negative periods.
Step21: The last bit of structure can be understood via the symmetry in the LS periodogram about 0. In particular, if there is an aliased frequency that is less than 0, which will occur for $n < 0$ in the equations above, then there will also be power at the positive value of that frequency. In other words, LS will pull out $|f_\mathrm{true}/m + n\delta{f}|$.
Problem 3h
Recreate the plot in 3d and overplot lines for the "reflected" $m = 1$ harmonic aliased with $n = -3, -2, -1$.
Hint - only plot the values where $P_\mathrm{LS} < 0$ since we are looking for "reflected" peaks in the periodogram in this case.
Step22: Now we have seen that nearly all the structure in the LS period vs. true period plot can be explained via aliasing with the window function! This is good (we understand why the results do not conform to what we expect), but also bad, (we were not able to recover the correct period for the majority of our sources). Ultimately, this means - be careful when inspecting the results of the LS periodogram as the peaks aren't driven solely by the signal from the source in question!
(If only there were some way to get rid of the sun, then we'd never have these problems...)
Conclusions
The Lomb-Scargle periodogram is a useful tool to search for sinusoidal signals in noisy, irregular data.
However, as highlighted throughout, there are many ways in which the methodology can run awry.
In closing, I will summarize some practical considerations from VanderPlas (2017)
Step23: Helper 2
Create a function, phase_plot, that takes x, y, and $P$ as inputs to create a phase-folded light curve (i.e., plot the data at their respective phase values given the period $P$).
Include an optional argument, y_unc, to include uncertainties on the y values, when available. | Python Code:
np.random.seed(185)
# calculate the periodogram
x = 10*np.random.rand(100)
y = gen_periodic_data(x, period=5.25, amplitude=7.4, noise=0.8)
y_unc = np.ones_like(x)*np.sqrt(0.8)
Explanation: Real World Considerations for the Lomb-Scargle Periodogram
Version 0.2
By AA Miller (Northwestern/CIERA)
23 Sep 2021
In Lecture III we built the software necessary to estimate the power spectrum via the Lomb-Scargle periodogram.
We also discovered that LS is somewhat slow. We will now leverage the faster implementation in astropy, while exploring some specific challenges related to real astrophysical light curves.
The helper functions from Lecture III are recreated at the end of this notebook - execute those cells and then the cell below to recreate the simulated data from Lecture III.
End of explanation
from astropy.timeseries import LombScargle
frequency, power = LombScargle(x, y, y_unc).autopower()
Explanation: Problem 1) Other Considerations and Faster Implementations
While our "home-grown" ls_periodogram works, it would take a loooooong time to evaluate $\sim4\times 10^5$ frequencies for $\sim2\times 10^7$ variable LSST sources. (as is often the case...) astropy to the rescue!
Problem 1a
LombScargle in astropy.timeseries is fast. Run it below to compare to ls_periodogram.
End of explanation
fig, ax = plt.subplots()
ax.plot( # complete
# complete
# complete
# complete
fig.tight_layout()
Explanation: Unlike ls_periodogram, LombScargle effectively takes no time to run on the simulated data.
Problem 1b
Plot the periodogram for the simulated data.
End of explanation
# complete
freq_no_mean, power_no_mean = LombScargle( # complete
freq_fit_mean, power_fit_mean = LombScargle( # complete
fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.plot( # complete
ax2.plot( # complete
ax1.set_xlim(0,15)
fig.tight_layout()
Explanation: There are many choices regarding the calculation of the periodogram, so read the docs.
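For instance, the frequency grid and the algorithm can be set explicitly (a minimal sketch -- the specific values below are illustrative assumptions, and x, y, y_unc are the simulated arrays from above):
freq, power = LombScargle(x, y, y_unc).autopower(minimum_frequency=1/10.,   # longest period searched: 10
                                                 maximum_frequency=10.,     # shortest period searched: 0.1
                                                 samples_per_peak=10,       # oversampling of the frequency grid
                                                 method='fast')             # fast O(N log N) implementation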
Floating Mean Periodogram
A basic assumption that we previously made is that the data are "centered" - in other words, our model explicitly assumes that the signal oscillates about a mean of 0.
For astronomical applications, this assumption can be harmful. Instead, it is useful to fit for the mean of the signal in addition to the periodic component (as is the default in LombScargle):
$$y(t;f) = y_0(f) + A_f \sin(2\pi f(t - \phi_f)).$$
To illustrate why this is important for astronomy, assume that any signal fainter than $-2$ in our simulated data cannot be detected.
Problem 1c
Remove the observations from x and y where $y \le -2$ and calculate the periodogram both with and without fitting the mean (fit_mean = False in the call to LombScargle). Plot the periodograms. Do both methods recover the correct period?
End of explanation
fit_mean_model = LombScargle(x[bright], y[bright], y_unc[bright],
fit_mean=True).model(np.linspace(0,10,1000),
freq_fit_mean[np.argmax(power_fit_mean)])
no_mean_model = LombScargle(x[bright], y[bright], y_unc[bright],
fit_mean=False).model(np.linspace(0,10,1000),
freq_no_mean[np.argmax(power_no_mean)])
fig, ax = plt.subplots()
ax.errorbar(x[bright], y[bright], y_unc[bright], fmt='o', label='data')
ax.plot(np.linspace(0,10,1000), fit_mean_model, label='fit mean')
ax.plot(np.linspace(0,10,1000), no_mean_model, label='no mean')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.legend()
fig.tight_layout()
Explanation: We can see that the best fit model doesn't match the signal in the case where we do not allow a floating mean.
End of explanation
# set up simulated observations
t_obs = np.arange(0, 10*365, 3) # 3d cadence
# complete
# complete
# complete
y = gen_periodic_data( # complete
# complete
fig, ax = plt.subplots()
ax.errorbar( # complete
ax.set_xlabel("Time (d)")
ax.set_ylabel("Flux (arbitrary units)")
Explanation: Window Functions
Recall that the convolution theorem tells us that:
$$\mathcal{F}[f\cdot g] = \mathcal{F}(f) \ast \mathcal{F}(g)$$
Telescope observations are effectively the product of a continuous signal with several delta functions (corresponding to the times of observations). As a result, the convolution that produces the periodogram will retain signal from both the source and the observational cadence.
To illustrate this effect, let us simulate "realistic" observations for a 10 year telescope survey. We do this by assuming that a source is observed every 3 nights (the LSST cadence) within $\pm 4\,\mathrm{hr}$ of the same time, and that $\sim 30\%$ of the observations did not occur due to bad weather. We further assume that the source cannot be observed for 40% of the year because it is behind the sun.
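One possible way to build such a cadence (a hedged sketch under the stated assumptions; the variable names and random seed are mine, not part of the exercise solution):
rng = np.random.default_rng(42)
t_sim = np.arange(0, 10*365, 3, dtype=float)            # one visit every 3 d for 10 yr
t_sim += rng.uniform(-4/24, 4/24, size=len(t_sim))      # +/- 4 hr jitter around the nominal time
t_sim = t_sim[rng.uniform(size=len(t_sim)) > 0.3]       # ~30% of visits lost to bad weather
t_sim = t_sim[(t_sim % 365) < 0.6*365]                  # source behind the sun for ~40% of each year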
Simulate a periodic signal with this cadence, a period = 220 days (typical for Miras), amplitude = 12.4, and noise = 1. Plot the simulated light curve.
Problem 1d
Simulate a periodic signal with 3 day cadence (and the observing conditions described above), a period = 220 days (typical for Miras), amplitude = 12.4, and variance of the noise = 1. Plot the simulated light curve.
End of explanation
ls = LombScargle( # complete
freq_window, power_window = # complete
fig, ax = plt.subplots()
ax.plot( # complete
ax.set_ylabel("Power")
ax.set_xlabel("Period (d)")
ax.set_xlim(0,500)
axins = plt.axes([.2, .65, .5, .2])
axins.plot( # complete
axins.set_xlim(0,5)
Explanation: Problem 1e
Calculate and plot the periodogram for the window function (i.e., set y = 1 in LombScargle) of the observations. Do you notice any significant power?
Hint - you may need to zoom in on the plot to see all the relevant features.
End of explanation
ls = LombScargle( # complete
frequency, power = # complete
fig, (ax,ax2) = plt.subplots(2,1, sharex=True)
ax.plot( # complete
ax.set_ylabel("Power")
ax.set_ylim(0,1)
ax2.plot( # complete
ax2.set_ylabel("Power")
ax2.set_xlabel("Period (d)")
ax2.set_xlim(0,10)
fig.tight_layout()
Explanation: Interestingly, there are very strong peaks in the data at $P \approx 3\,\mathrm{d} \;\&\; 365\,\mathrm{d}$.
What is this telling us? Essentially that observations are likely to be repeated at intervals of 3 or 365 days (shorter period spikes are aliases of the 3 d peak).
This is important to understand, however, because this same power will be present in the periodogram where we search for the periodic signal.
Problem 1f
Calculate the periodogram for the data and compare it to the periodogram for the window function.
End of explanation
data = # complete
fig, ax = plt.subplots()
ax.errorbar( # complete
ax.set_xlabel('HJD (d)')
ax.set_ylabel('V (mag)')
ax.set_ylim(ax.get_ylim()[::-1])
fig.tight_layout()
Explanation: Uncertainty on the best-fit period
How do we report uncertainties on the best-fit period from LS? For example, for the previously simulated LSST light curve we would want to report something like $P = 102 \pm 4\,\mathrm{d}$. However, the uncertainty from LS periodograms cannot be determined in this way.
Naively, one could report the width of the peak in the periodogram as the uncertainty in the fit. However, we previously saw that the peak width $\propto 1/T$ (the peak width does not decrease as the number of observations or their S/N increases; see VanderPlas 2017). Reporting such an uncertainty is particularly ridiculous for long-duration surveys, whereby the peaks become very, very narrow.
An alternative approach is to report the False Alarm Probability (FAP), which estimates the probability that a dataset with no periodic signal could produce a peak of similar magnitude, due to random gaussian fluctuations, as the data.
There are a few different methods to calculate the FAP. Perhaps the most useful, however, is the bootstrap method. To obtain a bootstrap estimate of the LS FAP one leaves the observation times fixed, and then draws new observation values with replacement from the actual set of observations. This procedure is then repeated many times to determine the FAP.
One nice advantage of this procedure is that any effects due to the window function will be imprinted in each iteration of the bootstrap resampling.
The major disadvantage is that many, many periodograms must be calculated. The rule of thumb is that to achieve a FAP $= p_\mathrm{false}$, one must run $n_\mathrm{boot} \approx 10/p_\mathrm{false}$ bootstrap periodogram calculations. Thus, an FAP $\approx 0.1\%$ requires an increase of 1000 in computational time.
LombScargle provides the false_alarm_probability method, including a bootstrap option. We skip that for now in the interest of time.
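For reference, a minimal sketch of what that call might look like (assuming ls and power are the LombScargle object and periodogram computed above; the bootstrap option is slow because it recomputes many periodograms):
fap = ls.false_alarm_probability(power.max(), method='bootstrap')
# the number of resamplings can be tuned via the method_kwds argument -- see the astropy docs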
As a final note of caution - be wary of over-interpreting the FAP. The specific question answered by the FAP is: what is the probability that Gaussian fluctuations could produce a signal of equivalent magnitude? Whereas, the question we generally want to answer is: did a periodic signal produce these data?
These questions are very different, and thus, the FAP cannot be used to prove that a source is periodic.
Problem 2) Real-world considerations
We have covered many, though not all, considerations that are necessary when employing a Lomb Scargle periodogram. We have not yet, however, encountered real world data. Here we highlight some of the issues associated with astronomical light curves.
We will now use LS to analyze actual data from the All Sky Automated Survey (ASAS). Download the example light curve.
Problem 2a
Read in the light curve from example_asas_lc.dat. Plot the light curve.
Hint - I recommend using astropy Tables or pandas dataframe.
End of explanation
frequency, power = # complete
# complete
fig,ax = plt.subplots()
ax.plot(# complete
ax.set_ylabel("Power")
ax.set_xlabel("Period (d)")
ax.set_xlim(0, 800)
axins = plt.axes([.25, .55, .6, .3])
axins.plot( # complete
axins.set_xlim(0,5)
fig.tight_layout()
# plot the phase folded light curve
phase_plot( # complete
Explanation: Problem 2b
Use LombScargle to measure the periodogram. Then plot the periodogram and the phase folded light curve at the best-fit period.
Hint - search periods longer than 2 hr.
End of explanation
phase_plot( # complete
Explanation: Problem 2c
Now plot the light curve folded at twice the best LS period.
Which of these 2 is better?
End of explanation
phase_plot( # complete
phase_grid = np.linspace(0,1,1000)
plt.plot( # complete
plt.tight_layout()
Explanation: Herein lies a fundamental issue regarding the LS periodogram: the model does not search for "periodicity." The LS model asks if the data support a sinusoidal signal. As astronomers we typically assume this question is good enough, but as we can see in the example of this eclipsing binary that is not the case [and this is not limited to eclipsing binaries].
We can see why LS is not sufficient for an EB by comparing the model to the phase-folded light curve:
Problem 2d
Overplot the model on top of the phase folded light curve.
Hint – you can access the best LS fit via the .model() method on LombScargle objects in astropy.
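A hedged sketch of the pattern (t, mag, mag_unc stand in for the light-curve arrays read in above, and frequency, power for the periodogram; the names are illustrative):
best_freq = frequency[np.argmax(power)]
phase_grid = np.linspace(0, 1, 1000)
mag_fit = LombScargle(t, mag, mag_unc).model(phase_grid / best_freq, best_freq)   # best-fit sinusoid vs. phase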
End of explanation
for i in np.arange(1,6):
frequency, power = # complete
# complete
print('For {:d} harmonics, P_LS = {:.8f}'.format( # complete
Explanation: One way to combat this very specific issue is to include more Fourier terms at the harmonic of the best fit period. This is easy to implement in LombScargle with the nterms keyword. [Though always be wary of adding degrees of freedom to a model, especially at the large pipeline level of analysis.]
Problem 2e
Calculate the LS periodogram for the eclipsing binary, with nterms = 1, 2, 3, 4, 5. Report the best-fit period for each of these models.
Hint - we have good reason to believe that the best fit frequency is < 3 in this case, so set maximum_frequency = 3.
End of explanation
best_period = # complete
phase_plot( # complete
phase_grid = np.linspace(0,1,1000)
plt.plot( # complete
plt.tight_layout()
Explanation: Interestingly, for the $n=2, 3, 4$ harmonics, it appears as though we get the period that we have visually confirmed. However, by $n=5$ harmonics we no longer get a reasonable answer. Again - be very careful about adding harmonics, especially in large analysis pipelines.
Problem 2f
Plot the the $n = 4$ model on top of the light curve folded at the correct period.
End of explanation
def gen_periodic_data(x, period=1, amplitude=1, phase=0, noise=0):
'''Generate periodic data
Parameters
----------
x : array-like
input values to evaluate the array
period : float (default=1)
period of the periodic signal
amplitude : float (default=1)
amplitude of the periodic signal
phase : float (default=0)
phase offset of the periodic signal
noise : float (default=0)
variance of the noise term added to the periodic signal
Returns
-------
y : array-like
Periodic signal evaluated at all points x
'''
y1 = # complete
amp2 = # complete
phase2 = # complete
y2 = # complete
amp3 = # complete
phase3 = # complete
y3 = # complete
amp4 = # complete
phase4 = # complete
y4 = # complete
dy = # complete
return y1 + y2 + y3 + y4 + dy
Explanation: This example also shows why it is somewhat strange to provide an uncertainty with a LS best-fit period. Errors tend to be catastrophic, and not some small fractional percentage, with the LS periodogram.
In the case of the above EB, the "best-fit" period was off by a factor 2. This is not isolated to EBs, however, LS periodograms frequently identify a correct harmonic of the true period, but not the actual period of variability.
Problem 3) The "Real" World
Problem 3a
Re-write gen_periodic_data to create periodic signals using the first 4 harmonics in a Fourier series. The $n > 1$ harmonics should have random phase offsets, and the amplitude of the $n > 1$ harmonics should be drawn randomly from a uniform distribution between 0 and amplitude, the amplitude of the first harmonic.
End of explanation
np.random.seed(185)
x_grid = np.linspace( # complete
y = gen_periodic_data( # complete
phase_plot( # complete
Explanation: Problem 3b
Confirm the updated version of gen_periodic_data works by creating a phase plot for a simulated signal with amplitude = 4, period = 1.234, and noise=0.81, and 100 observations obtained on a regular grid from 0 to 50.
End of explanation
n_lc = # complete
true_p = np.zeros(n_lc)
ls_p = np.zeros_like(true_p)
for lc in range(n_lc):
# set up simulated observations
t_obs = np.arange(0, 2*365, 3) # 3d cadence
# complete
# complete
# complete
period = # complete
true_p[lc] = # complete
amp = # complete
noise = # complete
y = gen_periodic_data( # complete
freq, power = LombScargle( # complete
ls_p[lc] = 1/freq[np.argmax(power)]
Explanation: Problem 3c
Simulate 1000 "realistic" light curves using the astronomical cadence from 1d for a full survey duration of 2 years.
For each light curve draw the period randomly from [0.2, 10], and the amplitude randomly from [1, 5], and the noise randomly from [1,2].
Record the period in an array true_p and estimate the period for the simulated data via LS and store the result in an array ls_p.
End of explanation
fig, ax = plt.subplots()
ax.plot( # complete
ax.set_ylim(0, 20)
ax.set_xlabel('True period (d)')
ax.set_ylabel('LS peak (d)')
fig.tight_layout()
Explanation: Problem 3d
Plot the LS recovered period vs. the true period for the simulated sources.
Do you notice anything interesting? Do you manage to recover the correct period most of the time?
End of explanation
# complete
Explanation: Problem 3e
For how many of the simulated sources do you recover the correct period? Consider a period estimate "correct" if the LS estimate is within 10% of the true period.
End of explanation
p_grid = np.linspace(1e-1,10,100)
fig, ax = plt.subplots()
ax.plot(# complete
# complete
# complete
# complete
ax.set_ylim(0, 9)
ax.set_xlabel('True period (d)')
ax.set_ylabel('LS peak (d)')
fig.tight_layout()
Explanation: The results are a bit disappointing. However, it is also clear that there is strong structure in the plot off the 1:1 line. That structure can be understood in terms of the window function that was discussed in 1d, 1e, and 1f.
Problem 3f
Recreate the plot in 3d and overplot the line
$$P_\mathrm{LS} = \left(\frac{1}{P_\mathrm{true}} + \frac{n}{3}\right)^{-1}$$
for $n = -2, -1, 1, 2$.
Hint - only plot the values where $P_\mathrm{LS} > 0$ since, by definition, we do not search for negative periods.
End of explanation
p_grid = np.linspace(1e-1,10,100)
fig, ax = plt.subplots()
ax.plot(# complete
# complete
# complete
# complete
ax.set_ylim(0, 2)
ax.set_xlabel('True period (d)')
ax.set_ylabel('LS peak (d)')
fig.tight_layout()
Explanation: What in the...
We see that these lines account for a lot of the off-diagonal structure in this plot.
In this case, what is happening is that the true frequency of the signal $f_\mathrm{true}$ is being aliased by the window function and its harmonics. In other words LS is pulling out $f_\mathrm{true} + n\delta{f}$, where $n$ is an integer and $\delta{f}$ is the observational cadence $3\,\mathrm{d}$. Many of the false positives can be explained via the window function.
Similarly, LS might be recovering higher-order harmonics of the true period since we aren't trying to recover pure sinusoidal signals in this simulation. These harmonics would also be aliased by the window function, so LS will pull out $f_\mathrm{true}/m + n\delta{f}$, where $m$ is a positive integer.
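As a small numerical illustration (the numbers are arbitrary, chosen only for demonstration): for a true period of 5 d observed with a 3 d cadence, the n = -1 alias of the m = 1 harmonic lands at
P_true, cadence, n = 5., 3., -1
f_alias = 1/P_true + n/cadence      # aliased frequency
P_alias = abs(1/f_alias)            # = 7.5 d, a peak LS may report instead of 5 d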
Problem 3g
Recreate the plot in 3d and overplot lines for the $m = 2$ harmonic aliased with $n = -1, 1, 2$.
Hint - only plot the values where $P_\mathrm{LS} > 0$ since, by definition, we do not search for negative periods.
End of explanation
p_grid = np.linspace(1e-1,10,1000)
fig, ax = plt.subplots()
ax.plot(# complete
# complete
# complete
# complete
ax.set_ylim(0, 15)
ax.set_xlabel('True period (d)')
ax.set_ylabel('LS peak (d)')
fig.tight_layout()
Explanation: The last bit of structure can be understood via the symmetry in the LS periodogram about 0. In particular, if there is an aliased frequency that is less than 0, which will occur for $n < 0$ in the equations above, then there will also be power at the positive value of that frequency. In other words, LS will pull out $|f_\mathrm{true}/m + n\delta{f}|$.
Problem 3h
Recreate the plot in 3d and overplot lines for the "reflected" $m = 1$ harmonic aliased with $n = -3, -2, -1$.
Hint - only plot the values where $P_\mathrm{LS} < 0$ since we are looking for "reflected" peaks in the periodogram in this case.
End of explanation
def gen_periodic_data(x, period=1, amplitude=1, phase=0, noise=0):
'''Generate periodic data given the function inputs
y = A*cos(x/p - phase) + noise
Parameters
----------
x : array-like
input values to evaluate the array
period : float (default=1)
period of the periodic signal
amplitude : float (default=1)
amplitude of the periodic signal
phase : float (default=0)
phase offset of the periodic signal
noise : float (default=0)
variance of the noise term added to the periodic signal
Returns
-------
y : array-like
Periodic signal evaluated at all points x
'''
y = amplitude*np.sin(2*np.pi*x/(period) - phase) + np.random.normal(0, np.sqrt(noise), size=len(x))
return y
Explanation: Now we have seen that nearly all the structure in the LS period vs. true period plot can be explained via aliasing with the window function! This is good (we understand why the results do not conform to what we expect), but also bad (we were not able to recover the correct period for the majority of our sources). Ultimately, this means - be careful when inspecting the results of the LS periodogram, as the peaks aren't driven solely by the signal from the source in question!
(If only there were some way to get rid of the sun, then we'd never have these problems...)
Conclusions
The Lomb-Scargle periodogram is a useful tool to search for sinusoidal signals in noisy, irregular data.
However, as highlighted throughout, there are many ways in which the methodology can run awry.
In closing, I will summarize some practical considerations from VanderPlas (2017):
Choose an appropriate frequency grid (defaults in LombScargle are not sufficient)
Calculate the LS periodogram for the observation times to search for dominant signals (e.g., 1 day in astro)
Compute LS periodogram for data (avoid multi-Fourier models if signal unknown)
Plot periodogram and various FAP levels (do not over-interpret FAP)
If window function shows strong aliasing, plot phased light curve at each peak (now add more Fourier terms if necessary)
If looking for a particular signal (e.g., detached EBs), consider different methods that better match the expected signal
Inject fake signals into data to understand systematics if using LS in a survey pipeline
Finally, Finally
As a very last note: know that there are many different ways to search for periodicity in astronomical data. Depending on your application (and computational resources), LS may be a poor choice (even though this is often the default choice for astronomers!). Graham et al. (2013) provides a summary of several methods using actual astronomical data. The results of that study show that no single method is best. However, they also show that no single method performs particularly well: the detection efficiencies in Graham et al. (2013) are disappointing given the importance of periodicity in astronomical signals.
Period detection is a fundamental problem for astronomical time-series, but it is especially difficult in "production" mode. Be careful when setting up pipelines to analyze large datasets.
Challenge Problem
Re-create problem 4, but include additional terms in the fit with the LS periodogram. What differences do you notice when comparing the true period to the best-fit LS periods?
Helper Functions
We developed useful helper functions as part of Lecture III from this session.
These functions generate periodic data, and phase fold light curves on a specified period. These functions will once again prove useful, so we include them here in order to simulate data above.
Helper 1
Create a function, gen_periodic_data, that creates simulated data (including noise) over a grid of user supplied positions:
$$ y = A\,cos\left(\frac{x}{P} - \phi\right) + \sigma_y$$
where $A, P, \phi$ are inputs to the function. gen_periodic_data should include Gaussian noise, $\sigma_y$, for each output $y_i$.
End of explanation
def phase_plot(x, y, period, y_unc = 0.0, mag_plot=False):
'''Create phase-folded plot of input data x, y
Parameters
----------
x : array-like
data values along abscissa
y : array-like
data values along ordinate
period : float
period to fold the data
y_unc : array-like
uncertainty of the
'''
phases = (x/period) % 1
if isinstance(y_unc, (np.floating, float)):
y_unc = np.ones_like(x)*y_unc
plot_order = np.argsort(phases)
fig, ax = plt.subplots()
ax.errorbar(phases[plot_order], y[plot_order], y_unc[plot_order],
fmt='o', mec="0.2", mew=0.1)
ax.set_xlabel("phase")
ax.set_ylabel("signal")
if mag_plot:
ax.set_ylim(ax.get_ylim()[::-1])
fig.tight_layout()
Explanation: Helper 2
Create a function, phase_plot, that takes x, y, and $P$ as inputs to create a phase-folded light curve (i.e., plot the data at their respective phase values given the period $P$).
Include an optional argument, y_unc, to include uncertainties on the y values, when available.
End of explanation |
5,586 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Day 4
Step1: Day 4
Step2: Day 4
Step3: Normal Distribution with mean=0 and std. dev=1
x ~ N(0, 1)
Normal Distribution with mean=205 and std. dev=15
x ~ N(205, 15)
Normal Distribution with mean=205 and std. dev=15, summed over n independent samples
nx ~ N(205n, 15*sqrt(n))
Step4: Day 4 | Python Code:
# #Distribution
scipy.stats.norm(30, 4)
scipy.stats.norm(30, 4).cdf(40)
1 - scipy.stats.norm(30, 4).cdf(21)
scipy.stats.norm(30, 4).cdf(35) - scipy.stats.norm(30, 4).cdf(30)
Explanation: Day 4: Normal Distribution #1
Objective
In this challenge, we practice solving problems with normally distributed variables.
Task
X is a normally distributed variable with a mean of μ=30 and a standard deviation of σ=4. Find:
P(x<40)
P(x>21)
P(30<x<35)
End of explanation
scipy.stats.norm(20, 2).cdf(19.5)
scipy.stats.norm(20, 2).cdf(22) - scipy.stats.norm(20, 2).cdf(20)
Explanation: Day 4: Normal Distribution #2
Objective
In this challenge, we practice solving problems with normally distributed variables.
Task
In a certain plant, the time taken to assemble a car is a random variable having a normal distribution with a mean of 20 hours and a standard deviation of 2 hours. What is the probability that a car can be assembled at this plant in:
Less than 19.5 hours?
Between 20 and 22 hours?
End of explanation
# #Distribution
scipy.stats.norm(205, 15)
Explanation: Day 4: The Central Limit Theorem #1
Objective
In this challenge, we practice solving problems based on the Central Limit Theorem.
Task
A large elevator can transport a maximum of 9800 pounds. Suppose a load of cargo containing 49 boxes must be transported via the elevator. The box weight of this type of cargo follows a distribution with a mean of µ=205 pounds and a standard deviation of σ=15 pounds. Based on this information, what is the probability that all 49 boxes can be safely loaded onto the freight elevator and transported?
End of explanation
scipy.stats.norm(205*49, 15*7).cdf(9800)
Explanation: Normal Distribution with mean=0 and std. dev=1
x ~ N(0, 1)
Normal Distribution with mean=205 and std. dev=15
x ~ N(205, 15)
Normal Distribution with mean=205 and std. dev=15, summed over n independent samples
nx ~ N(205n, 15*sqrt(n))
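A quick Monte-Carlo sanity check of that scaling (a hedged sketch; the simulation size is arbitrary and scipy.stats is assumed imported as in the cells above):
totals = scipy.stats.norm(205, 15).rvs(size=(100000, 49)).sum(axis=1)   # 100,000 simulated elevator loads
(totals <= 9800).mean()   # empirical probability; should be close to the analytic value (~0.0098)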
End of explanation
scipy.stats.norm(2.4*100, 2*10).cdf(250)
Explanation: Day 4: The Central Limit Theorem #2
Objective
In this challenge, we practice solving problems based on the Central Limit Theorem.
Task
The number of tickets purchased by each student for the University X vs. University Y football game follows a distribution that has a mean of µ=2.4 and a standard deviation of σ=2.0.
Suppose that a few hours before the game starts, there are 100 eager students standing in line to purchase tickets. If there are only 250 tickets left, what is the probability that all 100 students will be able to purchase the tickets they want?
End of explanation |
5,587 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Preparing German Wikipedia to train a fast.ai (ULMFiT) model for German
(should work with most other languages, too)
Thomas Viehmann tv@lernapparat.de
The core idea of Howard and Ruder's ULMFiT paper, see also https
Step1: Standardize format
You can skip this entire section if you like the results. In this case continue at Tokenize.
Step2: Determine article lengths and only keep at most the largest million and only those with at least 2000 characters
Step3: Splitting 10% for validation.
Step4: I'm always trying to produce notebooks that you can run through in one go, so here is my attempt at getting rid of old stuff.
Step5: Tokenize
Note
Step6: Numericalize
Get the Counter object from all the splitted files.
Step7: Numericalize each partial file. | Python Code:
%matplotlib inline
%reload_ext autoreload
%autoreload 2
from fastai.text import *
import html
from matplotlib import pyplot
import numpy
import time
BOS = 'xbos' # beginning-of-sentence tag
FLD = 'xfld' # data field tag
LANG='de'
datasetpath = Path('/home/datasets/nlp/wiki/')
# I ran this: wikiextractor/WikiExtractor.py -s --json -o de_wikipedia_extracted dewiki-latest-pages-articles.xml.bz2
work_path = Path('~/data/nlp/german_lm/data/de_wiki/tmp/').expanduser()
work_path.mkdir(exist_ok=True)
Explanation: Preparing German Wikipedia to train a fast.ai (ULMFiT) model for German
(should work with most other languages, too)
Thomas Viehmann tv@lernapparat.de
The core idea of Howard and Ruder's ULMFiT paper, see also https://nlp.fast.ai/, is to pretrain a language model on some corpus.
Naturally, we also want such a thing for German. And happily I just launched MathInf, a great mathematical modelling, machine learning and actuarial consulting company, that allows me to do this type of research and make it public.
I have very raw info (and hope to add more description soon) on my blog. I'm making this available early at public request and hope it is useful to you to build great things, it is not as clean or well-commented I would love it to be, yet.
I would love to hear from you if you make good use of it!
So we take a Wikipedia dump (dewiki-latest-pages-articles.xml.bz2, downloaded from dumps.wikipedia.org and preprocessed with wikiextractor/WikiExtractor.py -s --json -o de_wikipedia_extracted dewiki-latest-pages-articles.xml.bz2) and make token files out of it.
Note that the German Wikipedia contains more tokens (i.e. words) than the recommended 100M for training the language model.
I don't cut off much here, but only do this later when loading the tokens to start the training. That is a bit wasteful and follows a "keep as much data as long as you can" approach.
Credit for all the good things in the notebook likely belongs to Sylvain Gugger (see his notebook) and Jeremy Howard (see the original imdb notebook from his great course), whose work I built on; all errors are my own.
Enough talk, here is the data preparation.
End of explanation
LANG_FILENAMES = [str(f) for f in datasetpath.rglob("de_wikipedia_extracted/*/*")]
len(LANG_FILENAMES), LANG_FILENAMES[:5]
LANG_TEXT = []
for fn in tqdm(LANG_FILENAMES):
for line in open(fn, encoding='utf8'):
LANG_TEXT.append(json.loads(line))
LANG_TEXT = pd.DataFrame(LANG_TEXT)
LANG_TEXT.head()
# Getting rid of the title name in the text field
def split_title_from_text(text):
words = text.split("\n\n", 1)
if len(words) == 2:
return words[1]
else:
return words[0]
LANG_TEXT['text'] = LANG_TEXT['text'].apply(lambda x: split_title_from_text(x))
LANG_TEXT.head()
Explanation: Standardize format
You can skip this entire section if you like the results. In this case continue at Tokenize.
End of explanation
LANG_TEXT['label'] = 0 # dummy
LANG_TEXT['length'] = LANG_TEXT['text'].str.len()
MAX_ARTICLES = 1_000_000
# keep at most 1 million articles and only those of more than 2000 characters
MIN_LENGTH_CHARS = max(2000, int(numpy.percentile(LANG_TEXT['length'], 100-min(100*MAX_ARTICLES/len(LANG_TEXT), 100))))
LANG_TEXT = LANG_TEXT[LANG_TEXT['length'] >= MIN_LENGTH_CHARS] # Chars not words...
LANG_TEXT.to_csv(datasetpath/'wiki_de.csv', header=True, index=False) # I must say, I think the header is good! If in doubt, you should listen to Jeremy though.
LANG_TEXT = pd.read_csv(datasetpath/'wiki_de.csv')
percentages = range(0,110,10)
print ('Article length percentiles' , ', '.join(['{}%: {}'.format(p, int(q)) for p,q in zip(percentages, numpy.percentile(LANG_TEXT['length'], percentages))]))
print ('Number of articles', len(LANG_TEXT))
#LANG_TEXT = LANG_TEXT.sort_values(by=['length'], ascending=False)
LANG_TEXT.head()
Explanation: Determine article lengths and only keep at most the largest million and only those with at least 2000 characters
End of explanation
df_trn,df_val = sklearn.model_selection.train_test_split(LANG_TEXT.pipe(lambda x: x[['label', 'text']]), test_size=0.1)
df_trn.to_csv(work_path/'train.csv', header=False, index=False)
df_val.to_csv(work_path/'valid.csv', header=False, index=False)
Explanation: Splitting 10% for validation.
End of explanation
del LANG_TEXT
import gc
gc.collect()
Explanation: I'm always trying to produce notebooks that you can run through in one go, so here is my attempt at getting rid of old stuff.
End of explanation
chunksize = 4000
N_CPUS = num_cpus() # I like to use all cores here, needs a patch to fast ai
re1 = re.compile(r' +')
def fixup(x):
x = x.replace('#39;', "'").replace('amp;', '&').replace('#146;', "'").replace(
'nbsp;', ' ').replace('#36;', '$').replace('\\n', "\n").replace('quot;', "'").replace(
'<br />', "\n").replace('\\"', '"').replace('<unk>','u_n').replace(' @.@ ','.').replace(
' @-@ ','-').replace('\\', ' \\ ')
return re1.sub(' ', html.unescape(x))
df_trn = pd.read_csv(work_path/'train.csv', header=None, chunksize=chunksize)
df_val = pd.read_csv(work_path/'valid.csv', header=None, chunksize=chunksize)
def get_texts(df, n_lbls=1):
labels = df.iloc[:,range(n_lbls)].values.astype(np.int64)
texts = f'\n{BOS} {FLD} 1 ' + df[n_lbls].astype(str)
for i in range(n_lbls+1, len(df.columns)): texts += f' {FLD} {i-n_lbls} ' + df[i].astype(str)
texts = texts.apply(fixup).values.astype(str)
#tok = Tokenizer.proc_all(texts, lang=LANG) # use this if you have memory trouble
tok = Tokenizer.proc_all_mp(partition(texts, (len(texts)+N_CPUS-1)//N_CPUS), lang=LANG, ncpus=N_CPUS)
return tok, list(labels)
def get_all(df, name, n_lbls=1):
time_start = time.time()
for i, r in enumerate(df):
print("\r", i, end=" ")
if i > 0:
print ('time per chunk {}s'.format(int((time.time() - time_start) / i)), end="")
tok_, labels_ = get_texts(r, n_lbls)
#save the partial tokens instead of regrouping them in one big array.
np.save(work_path/f'{name}_tok{i}.npy', tok_)
get_all(df_trn,'trn',1)
get_all(df_val,'val',1)
Explanation: Tokenize
Note: be sure to keep an eye on your memory. I had all my memory allocated (from having several Wikipedia copies in memory) and was swapping massively during the multiprocessing tokenization. My fix was to restart the notebook after I had finished the above.
End of explanation
def count_them_all(names):
cnt = Counter()
for name in names:
for file in work_path.glob(f'{name}_tok*'):
tok = np.load(file)
cnt_tok = Counter(word for sent in tok for word in sent)
cnt += cnt_tok
return cnt
cnt = count_them_all(['trn'])
cnt.most_common(25)
max_vocab = 60000
min_freq = 5
itos = [o for o,c in cnt.most_common(max_vocab) if c > min_freq]
itos.insert(0,'_pad_')
itos.insert(0,'_unk_')
len(itos)
pickle.dump(itos, open(work_path/'itos.pkl', 'wb'))
stoi = collections.defaultdict(int,{s:i for (i,s) in enumerate(itos)})
Explanation: Numericalize
Get the Counter object from all the split files.
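As a quick side check (a one-line sketch, assuming cnt was built as above), the total token count can be compared against the ~100M-token guideline mentioned in the introduction:
print(f'total training tokens: {sum(cnt.values()):,}')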
End of explanation
def numericalize(name):
results = []
for file in tqdm(work_path.glob(f'{name}_tok*')):
tok = np.load(file)
results.append(np.array([[stoi[word] for word in sent] for sent in tok]))
return np.concatenate(results)
trn_ids = numericalize('trn')
np.save(work_path/'trn_ids.npy', trn_ids)
val_ids = numericalize('val')
np.save(work_path/'val_ids.npy', val_ids)
Explanation: Numericalize each partial file.
End of explanation |
5,588 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center><u><u>Bayesian Modeling for the Busy and the Confused - Part I</u></u></center>
<center><i>Basic Principles of Bayesian Computation and the Grid Approximation</i><center>
Currently, the capacity to gather data is far ahead of the ability to generate meaningful insight using conventional approaches. Hopes of alleviating this bottleneck has come through the application of machine learning tools. Among these tools one that is increasingly garnering traction is probabilistic programming, particularly Bayesian modeling. In this paradigm, variables that are used to define models carry a probabilistic distribution rather than a scalar value. "Fitting" a model to data can then , simplistically, be construed as finding the appropriate parameterization for these distributions, given the model structure and the data. This offers a number of advantages over other methods, not the least of which is the estimation of uncertainty around model results. This in turn can better inform subsequent processes, such as decision-making, and/or scientific discovery.
<br><br>
<u>Part-I overview</u>
Step1: <a id='BASIC'></a>
Back to Contents
1. <u>Basics</u>
Step2: For ease of manipulation I will use a pandas DataFrame, which at first sight looks deceivingly like a 'lame' spreadsheet, to store the grid coordinates. I use this dataframe to subsequently store the prior definitions, and the results of likelihood and posterior computation at each grid point. Here's the code that defines the DataFrame, named and populates the first two columns \(\mu\) and \(\sigma\).
Step3: Accessing say the column \(\mu\) is as simple as typing
Step4: Note that the code above computes the log (prior) probability of each parameter at each grid point. Because the parameters \(\mu\) and \(\sigma\) are assumed independent, the joint prior probability at each grid point is just the product the individual prior probability. Products of probabilities can result in underflow errors. Log-transformed probabilities can be summed and exponentiated to compute joint probabilities of the entire grid can be computed by summing log probabilities followed by taking the exponent of the result. I store both the joint log-probability and the log-probability at each grid point in the pandas dataframe with the code snippet below
Step5: Since there are only two parameters, visualizing the joint prior probability is straighforward
Step6: In the figure above looking across the \(\sigma\)-axis reveals the 'wall' of uniform probability where none of the positive values, bounded here between 0 and 2.0, is expected to be more likely. Looking down the \(\mu\)-axis, on the other hand, reveals the gaussian peak around 1, within a grid of floats extending from -2.0 to 2.0.
Once priors have been defined, the model is ready to be fed some data. The chl_ loaded earlier had several thousand observations. Because grid approximation is computationally intensive, I'll only pick a handful of data. For reasons discussed further below, this will enable the comparison of the effects different priors can have on the final result.
I'll start by selecting 10 observations.
<a id='GRID'></a>
Building the Grid
For this example I simply want to approximate the distribution of chl_l following these steps
Step7: here are two columns. MxBl-Gr is a blue-to-green ratio that will serve as predictor of chlorophyll when I address regression. For now, MxBl-Gr is ignored, only chl_l is of interest. Here is what the distribution of chl_l, smoothed by kernel density estimation, looks like
Step8: ... and here is what it looks like.
Step9: In the figure above looking down the \(\sigma\)-axis shows the 'wall' of uniform probability where none of the positive values, capped here at 2.0 has is expected to be more likely. Looking down the \(\mu\)-axis, on the other hand, reveals the gaussian peak around 1, within a grid of floats extending from -2.0 to 2.0.
Once priors have been defined, the model is ready to be fed some data. The chl_ loaded earlier had several thousand observations. Because grid approximation is computationally intensive, I'll only pick a handful of data. For reasons discussed further below, this will enable the comparison of the effects different priors can have on the final result.
I'll start by selecting 10 observations.
Step10: Compute Log-Likelihood of the data given every pair \( ( \mu ,\sigma)\). This is done by summing the log-probability of each datapoint, given each grid point; i.e. each \((\mu, \sigma)\) pair.
Step11: Compute Posterior $P(\mu,\sigma\ | data) \propto P(data | \mu, \sigma) \times P(\mu, \sigma)$
Step12: <img src='./resources/grid1.svg'/>
Step14: Back to Contents
<a id='PriorImpact'></a>
Impact of Priors
Step15: Try two priors
Step16: <img src="./resources/grid3.svg?modified=3"/>
$\mu \sim \mathcal{N}(-1.5, 0.1)$, $\sigma \sim \mathcal{U}(0, 2)$ - a strongly informative prior
Step17: Back to Contents
<a id='DataImpact'></a>
Impact of data set size
sub-sample size is now 500 samples,
same two priors used
Step18: <img src=./resources/grid5.svg/>
Step19: <img src=./resources/grid6.svg/>
Step20: And using all the data? | Python Code:
import pickle
import warnings
import sys
import pandas as pd
import numpy as np
from scipy.stats import norm as gaussian, uniform
import seaborn as sb
import matplotlib.pyplot as pl
from matplotlib import rcParams
from matplotlib import ticker as mtick
print('Versions:')
print('---------')
print(f'python: {sys.version.split("|")[0]}')
print(f'numpy: {np.__version__}')
print(f'pandas: {pd.__version__}')
print(f'seaborn: {sb.__version__}')
%matplotlib inline
warnings.filterwarnings('ignore', category=FutureWarning)
Explanation: <center><u><u>Bayesian Modeling for the Busy and the Confused - Part I</u></u></center>
<center><i>Basic Principles of Bayesian Computation and the Grid Approximation</i><center>
Currently, the capacity to gather data is far ahead of the ability to generate meaningful insight using conventional approaches. Hopes of alleviating this bottleneck have come through the application of machine learning tools. Among these tools, one that is increasingly garnering traction is probabilistic programming, particularly Bayesian modeling. In this paradigm, variables that are used to define models carry a probabilistic distribution rather than a scalar value. "Fitting" a model to data can then, simplistically, be construed as finding the appropriate parameterization for these distributions, given the model structure and the data. This offers a number of advantages over other methods, not the least of which is the estimation of uncertainty around model results. This in turn can better inform subsequent processes, such as decision-making, and/or scientific discovery.
<br><br>
<u>Part-I overview</u>:
The present is the first of a two-notebook series, the subject of which is a brief, basic, but hands-on programmatic introduction to Bayesian modeling. This first notebook begins with an overview of a few key probability principles relevant to Bayesian inference. An illustration of how to put these in practice follows. In particular, I will demonstrate one of the more intuitive approaches to Bayesian computation: Grid Approximation (GA). With this framework I will show how to create simple models that can be used to interpret and predict real-world data. <br>
<u>Part-II overview</u>:
GA is computationally intensive and runs into problems quickly when the data set is large and/or the model increases in complexity. One of the more popular solutions to this problem is the Markov Chain Monte-Carlo (MCMC) algorithm. The implementation of MCMC in Bayesian models will be the subject of the second notebook of this series.
<br>
<u>Hands-on approach with Python</u>:
Bayesian modeling cannot be understood without practice. To that end, this notebook uses code snippets that should be iteratively modified and run for better insight.
As of this writing the most popular programming language in machine learning is Python. Python is an easy language to pickup. Python is free, open source, and a large number of very useful libraries have been written over the years that have propelled it to its current place of prominence in a number of fields, in addition to machine learning.
<br><br>
I use Python (3.6+) code to illustrate the mechanics of Bayesian inference in lieu of lengthy explanations. I also use a number of dedicated Python libraries that shorten the code considerably. A solid understanding of Bayesian modeling cannot be spoon-fed and can only come from getting one's hands dirty. Emphasis is therefore on readable, reproducible code. This should ease the work the interested reader has to do to get some practice re-running the notebook and experimenting with some of the coding and Bayesian modeling patterns presented. Some know-how is required regarding installing and running a Python distribution, the required libraries, and Jupyter notebooks; this is easily gleaned from the internet. A popular option in the machine learning community is Anaconda.
<a id='TOP'></a>
Notebook Contents
Basics: Joint probability, Inverse probability and Bayes' Theorem
Example: Inferring the Statistical Distribution of Chlorophyll from Data
Grid Approximation
Impact of priors
Impact of data set size
MCMC
PyMC3
Regression
Data Preparation
Regression in PyMC3
Checking Priors
Model Fitting
Flavors of Uncertainty
[Final Comments](#Conclusion
End of explanation
μ = np.linspace(-2, 2, num=200) # μ-axis
σ = np.linspace(0, 2, num=200) # σ-axis
Explanation: <a id='BASIC'></a>
Back to Contents
1. <u>Basics</u>:
$\Rightarrow$Joint probability, Inverse probability and Bayes' rule
<br>
Here's a circumspect list of basic concepts that will help understand what is going on:
Joint probability of two events $A$, $B$:
$$P(A, B)=P(A|B)\times P(B)=P(B|A)\times P(A)$$
If A and B are independent: $$P(A|B) = P(A)\ \leftrightarrow P(A,B) = P(A)\times P(B)$$
Inverse probability:$$\boxed{P(A|B) = \frac{P(B|A) \times P(A)}{P(B)}}$$
$\rightarrow$Inverse probability is handy when $P(A|B)$ is desired but hard to compute, but its counterpart, $P(B|A)$ is easy to compute. The result above which is derived directly from the joint probability formulation above, is referred to as Bayes' theorem/rule. One might ask next, how this is used to build a "Bayesian model."
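A tiny numerical illustration of the rule (the numbers are made up purely for demonstration):
p_B_given_A, p_A, p_B = 0.9, 0.01, 0.05
p_A_given_B = p_B_given_A * p_A / p_B    # = 0.18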
$\Rightarrow$Extending Bayes' theorem to model building
<br>
Given a model:
Hypotheses (\(H\)): values that model parameters can take
\( P(H) \): probability of each value in H
Data (\( D \))
\( P(D) \): probability of the data, commonly referred to as "Evidence."
Approach
* formulate initial opinion on what $H$ might include and with what probability, $P(H)$
* collect data ($D$)
* update $P(H)$ using $D$ and Bayes' theorem
$$\frac{P(H)\times P(D|H)}{P(D)} = P(H|D)$$
Computing the "Evidence", P(D), can yield intractable integrals to solve. Fortunately, it turns out that we can approximate the posterior, and give those integrals a wide berth. Hereafter, P(D), will be considered a normalization constant and will therefore be dropped; without prejudice, as it turns out.<br><br>
$$\boxed{P(H) \times P(D|H) \propto P(H|D)}$$
Note that what we care about is updating H, model parameters, after evaluating some observations.
Let's go over each of the elements of this proportionality statement.
The prior
$$\underline{P(H)}\times P(D|H) \propto P(H|D)$$
$H$: set of values that model parameters might take with corresponding probability $P(H)$.
Priors should encompass justifiable assumptions/context information and nothing more.
We can use probability distributions to express $P(H)$ as shown below.
The likelihood
$$P(H)\times \underline{P(D|H)} \propto P(H|D)$$
probability of the data, \(D\), given \(H\).
in the frequentist framework, this quantity is maximized to find the "best" fit \(\rightarrow\) Likelihood Maximization.
maximizing the likelihood means finding a particular value for H, \(\hat{H}\).
for simple models and uninformative priors, \(\hat{H}\) often corresponds to the mode of the Bayesian posterior (see below).
likelihood maximization discards a lot of potentially valuable information (the posterior).
The posterior:
$$P(H)\times P(D|H) \propto \underline{P(H|D)}$$
it's what Bayesians are after!!!
updated probability of \(H\) after exposing the model to \(D\).
used as prior for next iteration \(P(H|D)\rightarrow P(H)\), when new data become available.
$P(H|D)$ naturally yields uncertainty around the estimate via propagation.
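As a minimal one-parameter sketch of this proportionality (a toy coin-bias example, unrelated to the chlorophyll data used below):
import numpy as np
from scipy.stats import binom
theta = np.linspace(0, 1, 101)        # grid of hypotheses H
prior = np.ones_like(theta)           # flat P(H)
likelihood = binom.pmf(7, 10, theta)  # P(D|H) for 7 heads in 10 flips
posterior = prior * likelihood        # P(H) x P(D|H), proportional to P(H|D)
posterior /= posterior.sum()          # normalize so the probabilities add to 1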
In the next section I will attempt to illustrate the mechanics of Bayesian inference on real-world data.
Back to Contents
<a id='JustCHL'></a>
2. <u>Bayesian "Hello World": Inferring the Statistical Distribution of Chlorophyll</u>
<p>
The goal of Bayesian modeling is to approximate the process that generated a set of outcomes observed. Often, a set of input observations can be used to modify the expected outcome via a deterministic model expression. In a first instance, neither input observations nor deterministic expression are included. Only the set of outcomes is of concern here and the model is reduced to a probability assignment, using a simple statistical distribution. <br>
For the present example the outcomes of interest are some chlorophyll measurements. I assume that the process generating these observations can be approximated, <u>after log-transformation of the data</u>, by a Gaussian distribution whose scalar parameters are not expected to vary. The goal is to infer the range of values these parameters - a constant central tendency, \\(\mu\\), and a constant spread \\(\sigma\\) - could take. Note that this example, while not realistic, is intended to help build intuition. Further down the road, the use of inputs and deterministic models will be introduced with linear regression as an example.</p>
<p>I will contrast two major approaches: <u>grid computation</u> and <u>Markov Chain Monte-Carlo</u>. Note that in both methods, as mentioned earlier, the evidence \(P(D)\) is ignored. In both cases, relative probabilities are computed and subsequently normalized so as to add to 1.</p>
A. Grid Computation
In grid-based inference, all the possible parameter combinations to infer upon are fixed beforehand, through the building of a grid. This grid has as many dimensions as there are parameters in the model of interest. The user needs to define a range and a resolution for each dimension. This choice depends on the computing power available, and the requirements of the problem at hand. I will illustrate that as the model complexity increases, along with the number of parameters featured, the curse of dimensionality can quickly take hold and limit the usefulness of this approach.
Given a set of ranges and resolutions for the grid's dimensions, each grid point "stores" the joint probability of the corresponding parameter values. Initially the grid is populated by the stipulation of prior probabilities that should encode what is deemed to be "reasonable" by the practitioner. These priors can diverge between individual users. This is not a problem, however, as it makes assumptions - and therefore grounds for disagreement - explicit and specific. As these priors are confronted with an amount of data that is large relative to the model complexity, initially diverging priors tend to converge.
Given that our model is a Gaussian distribution, our set of hypotheses (\(H\) in the previous section) includes 2 vectors; a mean \(\mu\) and a standard deviation \(\sigma\). The next couple of lines of code define the corresponding two axes of a \(200 \times 200\) grid, including the range of the axes and, by extension, their resolution.
End of explanation
df_grid = pd.DataFrame([[μ_i, σ_i]
for σ_i in σ for μ_i in μ], columns=['μ', 'σ'])
Explanation: For ease of manipulation I will use a pandas DataFrame, which at first sight looks deceivingly like a 'lame' spreadsheet, to store the grid coordinates. I use this dataframe to subsequently store the prior definitions, and the results of likelihood and posterior computation at each grid point. Here's the code that defines the DataFrame, named df_grid, and populates its first two columns, \(\mu\) and \(\sigma\).
End of explanation
μ_log_prior = gaussian.logpdf(df_grid.μ, 1, 1)
σ_log_prior = uniform.logpdf(df_grid.σ, 0, 2)
Explanation: Accessing say the column \(\mu\) is as simple as typing: df_grid.\(\mu\)
Priors
The next step is to define the priors for both \(\mu\) and \(\sigma\) that encode the user's knowledge, or more commonly her or his lack thereof. Principles guiding the choice of priors are beyond the scope of this post; here I simply pick what seems to make sense. In this case, chlorophyll is log-transformed, so \(\mu\) should range within a few digits north and south of '0', and \(\sigma\) should be positive, and not expected to range beyond a few orders of magnitude. Thus a normal distribution for \(\mu\) and a uniform distribution for \(\sigma\), parameterized as below, seem to make sense: <br>
\(\rightarrow \mu \sim \mathcal{N}(mean=1, st.dev.=1)\); a gaussian (normal) distribution centered at 1, with a standard deviation of 1<br>
\(\rightarrow \sigma \sim \mathcal{U}(lo=0, high=2)\); a uniform distribution bounded at 0 and 2<br>
Note that these are specified independently because \(\mu\) and \(\sigma\) are assumed independent.
The code below computes the probability for each \(\mu\) and \(\sigma\) values;
The lines below show how to pass the grid defined above to the scipy.stats distribution functions to compute the prior at each grid point.
End of explanation
# log prior probability
df_grid['log_prior_prob'] = μ_log_prior + σ_log_prior
# straight prior probability from exponentiation of log_prior_prob
df_grid['prior_prob'] = np.exp(df_grid.log_prior_prob
- df_grid.log_prior_prob.max())
Explanation: Note that the code above computes the log (prior) probability of each parameter at each grid point. Because the parameters \(\mu\) and \(\sigma\) are assumed independent, the joint prior probability at each grid point is just the product of the individual prior probabilities. Products of probabilities can result in underflow errors; log-transformed probabilities can instead be summed, and the joint probability recovered by exponentiating the result. I store both the joint log-probability and the corresponding straight probability at each grid point in the pandas dataframe with the code snippet below:
End of explanation
f, ax = pl.subplots(figsize=(6, 6))
df_grid.plot.hexbin(x='μ', y='σ', C='prior_prob', figsize=(7,6),
cmap='plasma', sharex=False, ax=ax);
ax.set_title('Prior')
f.savefig('./resources/f1_grid_prior.svg')
Explanation: Since there are only two parameters, visualizing the joint prior probability is straightforward:
End of explanation
df_data = pd.read_pickle('./pickleJar/df_logMxBlues.pkl')
df_data[['MxBl-Gr', 'chl_l']].info()
Explanation: In the figure above, looking across the \(\sigma\)-axis reveals the 'wall' of uniform probability where none of the positive values, bounded here between 0 and 2.0, is expected to be more likely. Looking down the \(\mu\)-axis, on the other hand, reveals the gaussian peak around 1, within a grid of floats extending from -2.0 to 2.0.
Once priors have been defined, the model is ready to be fed some data. The chl_l data loaded earlier have several thousand observations. Because grid approximation is computationally intensive, I'll only pick a handful of data. For reasons discussed further below, this will enable the comparison of the effects different priors can have on the final result.
I'll start by selecting 10 observations.
<a id='GRID'></a>
Building the Grid
For this example I simply want to approximate the distribution of chl_l following these steps:
Define a model to approximate the process that generates the observations
Theory: data generation is well approximated by a Gaussian.
Hypotheses (\(H\)) therefore include 2 vectors; mean \(\mu\) and standard deviation \(\sigma\).
Both parameters are expected to vary within a certain range.
Build the grid of model parameters
2D grid of \((\mu, \sigma)\) pair
Propose priors
define priors for both \(\mu\) and \(\sigma\)
Compute likelihood
Compute posterior
First, I load data stored in a pandas dataframe that contains among other things, log-transformed phytoplankton chlorophyll (chl_l) values measured during oceanographic cruises around the world.
End of explanation
f, ax = pl.subplots(figsize=(4,4))
sb.kdeplot(df_data.chl_l, ax=ax, legend=False);
ax.set_xlabel('chl_l');
f.tight_layout()
f.savefig('./figJar/Presentation/fig1_chl.svg', dpi=300, format='svg')
Explanation: There are two columns. MxBl-Gr is a blue-to-green ratio that will serve as a predictor of chlorophyll when I address regression. For now, MxBl-Gr is ignored; only chl_l is of interest. Here is what the distribution of chl_l, smoothed by kernel density estimation, looks like:
End of explanation
print(df_grid.shape)
df_grid.head(7)
Explanation: ... and here is what it looks like.
End of explanation
sample_N = 10
df_data_s = df_data.dropna().sample(n=sample_N)
g = sb.PairGrid(df_data_s.loc[:,['MxBl-Gr', 'chl_l']],
diag_sharey=False)
g.map_diag(sb.kdeplot, )
g.map_offdiag(sb.scatterplot, alpha=0.75, edgecolor='k');
make_lower_triangle(g)
g.axes[1,0].set_ylabel(r'$log_{10}(chl)$');
g.axes[1,1].set_xlabel(r'$log_{10}(chl)$');
Explanation: In the figure above, looking down the \(\sigma\)-axis shows the 'wall' of uniform probability where none of the positive values, capped here at 2.0, is expected to be more likely. Looking down the \(\mu\)-axis, on the other hand, reveals the gaussian peak around 1, within a grid of floats extending from -2.0 to 2.0.
Once priors have been defined, the model is ready to be fed some data. The chl_l data loaded earlier have several thousand observations. Because grid approximation is computationally intensive, I'll only pick a handful of data. For reasons discussed further below, this will enable the comparison of the effects different priors can have on the final result.
I'll start by selecting 10 observations.
End of explanation
df_grid['LL'] = np.sum(norm.logpdf(df_data_s.chl_l.values.reshape(1, -1),
loc=df_grid.μ.values.reshape(-1, 1),
scale=df_grid.σ.values.reshape(-1, 1)
), axis=1)
Explanation: Compute Log-Likelihood of the data given every pair \( ( \mu ,\sigma)\). This is done by summing the log-probability of each datapoint, given each grid point; i.e. each \((\mu, \sigma)\) pair.
End of explanation
# compute log-probability
df_grid['log_post_prob'] = df_grid.LL + df_grid.log_prior_prob
# convert to straight prob.
df_grid['post_prob'] = np.exp(df_grid.log_post_prob
- df_grid.log_post_prob.max())
# Plot Multi-Dimensional Prior and Posterior
f, ax = pl.subplots(ncols=2, figsize=(12, 5), sharey=True)
df_grid.plot.hexbin(x='μ', y='σ', C='prior_prob',
cmap='plasma', sharex=False, ax=ax[0])
df_grid.plot.hexbin(x='μ', y='σ', C='post_prob',
cmap='plasma', sharex=False, ax=ax[1]);
ax[0].set_title('Prior Probability Distribution')
ax[1].set_title('Posterior Probability Distribution')
f.tight_layout()
f.savefig('./figJar/Presentation/grid1.svg')
Explanation: Compute Posterior $P(\mu,\sigma\ | data) \propto P(data | \mu, \sigma) \times P(\mu, \sigma)$
End of explanation
# Compute Marginal Priors and Posteriors for each Parameter
df_μ = df_grid.groupby(['μ']).sum().drop('σ', axis=1)[['prior_prob',
'post_prob']
].reset_index()
df_σ = df_grid.groupby(['σ']).sum().drop('μ', axis=1)[['prior_prob',
'post_prob']
].reset_index()
# Normalize Probability Distributions
df_μ.prior_prob /= df_μ.prior_prob.max()
df_μ.post_prob /= df_μ.post_prob.max()
df_σ.prior_prob /= df_σ.prior_prob.max()
df_σ.post_prob /= df_σ.post_prob.max()
#Plot Marginal Priors and Posteriors
f, ax = pl.subplots(ncols=2, figsize=(12, 4))
df_μ.plot(x='μ', y='prior_prob', ax=ax[0], label='prior');
df_μ.plot(x='μ', y='post_prob', ax=ax[0], label='posterior')
df_σ.plot(x='σ', y='prior_prob', ax=ax[1], label='prior')
df_σ.plot(x='σ', y='post_prob', ax=ax[1], label='posterior');
f.suptitle('Marginal Probability Distributions', fontsize=16);
f.tight_layout(pad=2)
f.savefig('./figJar/Presentation/grid2.svg')
Explanation: <img src='./resources/grid1.svg'/>
End of explanation
def compute_bayes_framework(data, priors_dict):
# build grid:
μ = np.linspace(-2, 2, num=200)
σ = np.linspace(0, 2, num=200)
df_b = pd.DataFrame([[μ_i, σ_i] for σ_i in σ for μ_i in μ],
columns=['μ', 'σ'])
# compute/store distributions
μ_prior = norm.logpdf(df_b.μ, priors_dict['μ_mean'],
priors_dict['μ_sd'])
σ_prior = uniform.logpdf(df_b.σ, priors_dict['σ_lo'],
priors_dict['σ_hi'])
# compute joint prior
df_b['log_prior_prob'] = μ_prior + σ_prior
df_b['prior_prob'] = np.exp(df_b.log_prior_prob
- df_b.log_prior_prob.max())
# compute log likelihood
df_b['LL'] = np.sum(norm.logpdf(data.chl_l.values.reshape(1, -1),
loc=df_b.μ.values.reshape(-1, 1),
scale=df_b.σ.values.reshape(-1, 1)
), axis=1)
# compute joint posterior
df_b['log_post_prob'] = df_b.LL + df_b.log_prior_prob
df_b['post_prob'] = np.exp(df_b.log_post_prob
- df_b.log_post_prob.max())
return df_b
def plot_posterior(df_, ax1, ax2):
df_.plot.hexbin(x='μ', y='σ', C='prior_prob',
cmap='plasma', sharex=False, ax=ax1)
df_.plot.hexbin(x='μ', y='σ', C='post_prob',
cmap='plasma', sharex=False, ax=ax2);
ax1.set_title('Prior Probability Distribution')
ax2.set_title('Posterior Probability Distribution')
def plot_marginals(df_, ax1, ax2, plot_prior=True):
    """Compute and plot marginal prior and posterior distributions."""
df_μ = df_.groupby(['μ']).sum().drop('σ',
axis=1)[['prior_prob',
'post_prob']
].reset_index()
df_σ = df_.groupby(['σ']).sum().drop('μ',
axis=1)[['prior_prob',
'post_prob']
].reset_index()
# Normalize Probability Distributions
df_μ.prior_prob /= df_μ.prior_prob.max()
df_μ.post_prob /= df_μ.post_prob.max()
df_σ.prior_prob /= df_σ.prior_prob.max()
df_σ.post_prob /= df_σ.post_prob.max()
#Plot Marginal Priors and Posteriors
if plot_prior:
df_μ.plot(x='μ', y='prior_prob', ax=ax1, label='prior');
df_σ.plot(x='σ', y='prior_prob', ax=ax2, label='prior')
df_μ.plot(x='μ', y='post_prob', ax=ax1, label='posterior')
df_σ.plot(x='σ', y='post_prob', ax=ax2, label='posterior');
Explanation: Back to Contents
<a id='PriorImpact'></a>
Impact of Priors
End of explanation
weak_prior=dict(μ_mean=1, μ_sd=1, σ_lo=0, σ_hi=2)
df_grid_1 = compute_bayes_framework(df_data_s, priors_dict=weak_prior)
f , axp = pl.subplots(ncols=2, nrows=2, figsize=(12, 9))
axp = axp.ravel()
plot_posterior(df_grid_1, axp[0], axp[1])
plot_marginals(df_grid_1, axp[2], axp[3])
axp[2].legend(['weak prior', 'posterior'])
axp[3].legend(['flat prior', 'posterior'])
f.tight_layout()
f.savefig('./figJar/Presentation/grid3.svg')
Explanation: Try two priors:
1. $\mu \sim \mathcal{N}(1, 1)$, $\sigma \sim \mathcal{U}(0, 2)$ - a weakly informative set of priors
End of explanation
strong_prior=dict(μ_mean=-1.5, μ_sd=.1, σ_lo=0, σ_hi=2)
df_grid_2 = compute_bayes_framework(df_data_s, priors_dict=strong_prior)
f , axp = pl.subplots(ncols=2, nrows=2, figsize=(12, 9))
axp = axp.ravel()
plot_posterior(df_grid_2, axp[0], axp[1])
plot_marginals(df_grid_2, axp[2], axp[3])
axp[2].legend(['strong prior', 'posterior'])
axp[3].legend(['flat prior', 'posterior'])
f.tight_layout()
f.savefig('./figJar/Presentation/grid4.svg')
Explanation: <img src="./resources/grid3.svg?modified=3"/>
2. $\mu \sim \mathcal{N}(-1.5, 0.1)$, $\sigma \sim \mathcal{U}(0, 2)$ - a strongly informative set of priors
End of explanation
sample_N = 500
# compute the inference dataframe
df_data_s = df_data.dropna().sample(n=sample_N)
# display the new sub-sample
g = sb.PairGrid(df_data_s.loc[:,['MxBl-Gr', 'chl_l']],
diag_sharey=False)
g.map_diag(sb.kdeplot, )
g.map_offdiag(sb.scatterplot, alpha=0.75, edgecolor='k');
make_lower_triangle(g)
g.axes[1,0].set_ylabel(r'$log_{10}(chl)$');
g.axes[1,1].set_xlabel(r'$log_{10}(chl)$');
%%time
df_grid_3 = compute_bayes_framework(df_data_s, priors_dict=weak_prior)
f , axp = pl.subplots(ncols=2, nrows=2, figsize=(12, 9))
axp = axp.ravel()
plot_posterior(df_grid_3, axp[0], axp[1])
plot_marginals(df_grid_3, axp[2], axp[3])
axp[2].legend(['weak prior', 'posterior'])
axp[3].legend(['flat prior', 'posterior'])
f.tight_layout()
f.savefig('./figJar/Presentation/grid5.svg')
Explanation: Back to Contents
<a id='DataImpact'></a>
Impact of data set size
sub-sample size is now 500 samples,
same two priors used
End of explanation
df_grid_4 = compute_bayes_framework(df_data_s, priors_dict=strong_prior)
f , axp = pl.subplots(ncols=2, nrows=2, figsize=(12, 9))
axp = axp.ravel()
plot_posterior(df_grid_4, axp[0], axp[1])
plot_marginals(df_grid_4, axp[2], axp[3])
axp[2].legend(['strong prior', 'posterior'])
axp[3].legend(['flat prior', 'posterior'])
f.tight_layout()
f.savefig('./figJar/Presentation/grid6.svg')
Explanation: <img src=./resources/grid5.svg/>
End of explanation
f , axp = pl.subplots(ncols=2, nrows=2, figsize=(12, 8), sharey=True)
axp = axp.ravel()
plot_marginals(df_grid_3, axp[0], axp[1])
plot_marginals(df_grid_4, axp[2], axp[3])
axp[0].legend(['weak prior', 'posterior'])
axp[1].legend(['flat prior', 'posterior'])
axp[2].legend(['strong prior', 'posterior'])
axp[3].legend(['flat prior', 'posterior'])
f.tight_layout()
f.savefig('./figJar/Presentation/grid7.svg')
Explanation: <img src=./resources/grid6.svg/>
End of explanation
%%time
priors=dict(μ_mean=-1.5, μ_sd=.1, σ_lo=0, σ_hi=2)
try:
df_grid_all_data= compute_bayes_framework(df_data, priors_dict=priors)
except MemoryError:
print("OUT OF MEMORY!")
print("--------------")
Explanation: And using all the data?
End of explanation |
5,589 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The netCDF file format
popular scientific file format for ocean and atmospere gridded datasets
netCDF is a collection of formats for storing arrays
netCDF classic
more widespread
2 GB file limit
often preffered for distributing products
netCDF 64 bit offset
supports larger files
NetCDF4
based on HDF5
compression
multiple unlimited variables
new types inc. user defined
herarchical groups
Developed by Unidata-UCAR with the aim of storing climate model data (3D+time)
Auxilary information about each variable can be added
Readable text equivalent called CDL (use ncdump/ncgen)
Can be used with Climate and Forecast (CF) data convention
http
Step1: The main advantages of using xarray versus plain netCDF4 are
Step2: ...or import local dataset
Step3: Extract variable from dataset
Step4: Access variable attributes
Step5: Accessing data values
Step6: Indexing and selecting data
From http
Step7: Define selection using nearest value
Step8: Plotting
Step9: Arithmetic operations
Step10: Calculate average along a dimension
Step11: A dataset can easily be saved to a netCDF file
Step12: Exercise
Extract the bathymetry
Extract the seabed temperature isel(level=0)
Produce a scatter plot of depth vs. seabed temperature | Python Code:
# Import everything that we are going to need... but not more
import pandas as pd
import xarray as xr
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap, cm
%matplotlib inline
DF=pd.DataFrame.from_items([('A', [1, 2, 3]), ('B', [4, 5, 6])],
orient='index', columns=['one', 'two', 'three'])
DF.mean(0)
pd_s=pd.Series(range(3), index=list('abc'), name='foo')
print(pd_s)
print()
#conver 1D series to ND aware dataArray
print(xr.DataArray(pd_s))
Explanation: The netCDF file format
popular scientific file format for ocean and atmospere gridded datasets
netCDF is a collection of formats for storing arrays
netCDF classic
more widespread
2 GB file limit
often preffered for distributing products
netCDF 64 bit offset
supports larger files
NetCDF4
based on HDF5
compression
multiple unlimited variables
new types inc. user defined
herarchical groups
Developed by Unidata-UCAR with the aim of storing climate model data (3D+time)
Auxilary information about each variable can be added
Readable text equivalent called CDL (use ncdump/ncgen)
Can be used with Climate and Forecast (CF) data convention
http://cfconventions.org/
Data model:
Dimensions:describe the axes of the data arrays.
Variables: N-dimensional arrays of data.
Attributes: annotate variables or files with small notes or supplementary metadata.
Example for an ocean model dataset:
Dimensions
lat
lon
depth
time
Variable
Temperature
Salinity
Global Attibutes
Geographic grid type
History
Variable attributes (Temperature)
Long_name: "sea water temperature"
Missing_value: 1.09009E36
Units: deg. C
range: -2:50
Tools for working with netCDF files
C and Fortran libraries
Used to underpin interfaces to other languages such as python (e.g. python package netCDF4)
Include ncdump/ncgen to convert to and from human readable CDL format.
nco tools http://nco.sourceforge.net/nco.html
Command line utilities to extract, create and operate data in netCDF files.
> ncks -v u_wind -d lat,50.,51. -d lon,0.,5 inputfile.nc outputfile.nc
Viewers
ncview, pyncview, panoply, etc...
Imported by many packages
ArcGIS, QGIS, Surfer, Ferret, etc.
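Before moving on to xarray, here is a minimal sketch of opening this kind of file with the plain netCDF4 package (the path matches the example file used later in this notebook; adjust it as needed):
from netCDF4 import Dataset
nc = Dataset('../data/cefas_GETM_nwes.nc4')  # example file used later in this notebook
print(nc.dimensions.keys())                  # latc, lonc, time, level
temp = nc.variables['temp'][:]               # returns a masked numpy array
nc.close()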
Working With netCDF files using xarray
Alternative to plain netCDF4 access from python.
Brings the power of pandas to environmental sciences, by providing N-dimensional variants of the core pandas data structures:
worth using for multidimensional data even when not working with netCDF files
| Pandas | xarray |
|---|---|
| Series | DataArray |
| DataFrame | Dataset |
DataArray uses names of dimensions making it easier to track than by using axis numbers. It is possible to write:
da.sel(time='2000-01-01') or da.mean(dim='time')
instead of df.mean(0)
HTML documentation: http://xarray.pydata.org/
Example dataset
http://xray.readthedocs.io/en/stable/data-structures.html
<br>
<img src="../figures/dataset-diagram.png">
End of explanation
# Naughty datasets might require decode_cf=False
# Here it just needed decode_times=False
naughty_data = xr.open_dataset(
'http://iridl.ldeo.columbia.edu/SOURCES/.OSU/.PRISM/.monthly/dods',
decode_times=False)
naughty_data
Explanation: The main advantages of using xarray versus plain netCDF4 are:
intelligent selection along labelled dimensions (and also indexes)
groupby operations
data alignment
IO (netcdf)
conversion from and to Pandas.DataFrames
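Of these, the groupby machinery is not shown later in this notebook; a minimal sketch, assuming a dataset such as the GETM one loaded below whose time coordinate is decoded to datetimes:
# monthly climatology of temperature, grouped on the decoded time coordinate
monthly_temp = GETM['temp'].groupby('time.month').mean('time')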
Let's peek inside our example file using ncdump
$ ncdump data/cefas_GETM_nwes.nc | more
netcdf cefas_GETM_nwes {
dimensions:
latc = 360 ;
lonc = 396 ;
time = UNLIMITED ; // (6 currently)
level = 5 ;
variables:
double bathymetry(latc, lonc) ;
bathymetry:units = "m" ;
bathymetry:long_name = "bathymetry" ;
bathymetry:valid_range = -5., 4000. ;
bathymetry:_FillValue = -10. ;
bathymetry:missing_value = -10. ;
float h(time, level, latc, lonc) ;
h:units = "m" ;
h:long_name = "layer thickness" ;
h:_FillValue = -9999.f ;
h:missing_value = -9999.f ;
double latc(latc) ;
latc:units = "degrees_north" ;
double level(level) ;
level:units = "level" ;
double lonc(lonc) ;
lonc:units = "degrees_east" ;
float temp(time, level, latc, lonc) ;
temp:units = "degC" ;
temp:long_name = "temperature" ;
temp:valid_range = -2.f, 40.f ;
temp:_FillValue = -9999.f ;
temp:missing_value = -9999.f ;
double time(time) ;
time:long_name = "time" ;
time:units = "seconds since 1996-01-01 00:00:00" ;
Now let's go back to python and use xarray.
Import remote dataset
xarray supports OpenDAP. This means that a dataset can be accessed remotely and subsetted as needed. Only the selected parts are downloaded.
End of explanation
GETM = xr.open_dataset('../data/cefas_GETM_nwes.nc4')
GETM
GETM.dims
print(type(GETM.coords['latc']))
GETM.coords['latc'].shape
# List name of dataset attributes
GETM.attrs.keys()
# List variable names
GETM.data_vars.keys()
Explanation: ...or import local dataset
End of explanation
temp=GETM['temp']
print(type( temp ))
temp.shape
Explanation: Extract variable from dataset
End of explanation
# print varaible attributes
for at in temp.attrs:
print(at+':\t\t',end=' ')
print(temp.attrs[at])
Explanation: Access variable attributes
End of explanation
temp[0,0,90,100]
Explanation: Accessing data values
End of explanation
#positional by integer
print( temp[0,2,:,:].shape )
# positional by label
print( temp.loc['1996-02-02T01:00:00',:,:,:].shape )
# by name and integer
print( temp.isel(level=1,latc=90,lonc=100).shape )
# by name and label
print( temp.sel(time='1996-02-02T01:00:00').shape )
#temp.loc
Explanation: Indexing and selecting data
From http://xarray.pydata.org/
<br>
<img src="../figures/xarray_indexing_table.png">
End of explanation
#GETM.sel(level=1)['temp']
GETM['temp'].sel(level=1,lonc=-5.0,latc=-50.0, method='nearest')
try:
GETM['temp'].sel(level=1,lonc=-5.0,latc=-50.0, method='nearest',tolerance=0.5)
except KeyError:
print('ERROR: outside tolerance of '+str(0.5))
Explanation: Define selection using nearest value
End of explanation
# Define a general mapping function using basemap
def do_map(var,title,units):
latc=GETM.coords['latc'].values
lonc=GETM.coords['lonc'].values
# create figure and axes instances
fig = plt.figure()
ax = fig.add_axes()
# create polar stereographic Basemap instance.
m = Basemap(projection='stere', lon_0=0.,lat_0=60.,
llcrnrlat=49,urcrnrlat=60,
llcrnrlon=-10,urcrnrlon=15,resolution='l')
# bondaries resolution can be 'c','l','i','h' or 'f'
m.drawcoastlines(linewidth=0.5)
m.fillcontinents(color='0.8')
parallels = np.arange(-45,70,5)
m.drawparallels(parallels,labels=[1,0,0,0],fontsize=10)
    # m.drawparallels?  # leftover interactive help lookup; not needed inside the function
meridians = np.arange(-15,20,5)
m.drawmeridians(meridians,labels=[0,0,0,1],fontsize=10)
# create arrays of coordinates for contourf
lon2d,lat2d=np.meshgrid(lonc,latc)
# draw filled contours.
m.contourf(lon2d,lat2d,var,50,latlon=True)
# add colorbar.
cbar = m.colorbar(cmap=plt.cm.coolwarm,location='right')
cbar.set_label(units)
# add title
plt.title(title)
# Extract attributes
units=GETM['temp'].attrs['units']
var_long_name=GETM['temp'].attrs['long_name']
# and plot
do_map(var=time_ave.sel(level=21),
units=units,
title='Time averaged '+var_long_name)
# But often, this will do
time_ave.sel(level=21).plot()
Explanation: Plotting
End of explanation
top=GETM['temp'].isel(time=0,level=4)
bottom=GETM['temp'].isel(time=0,level=0)
diff=top-bottom
diff.plot()
Explanation: Arithmetic operations
End of explanation
# average over time
time_ave = GETM['temp'].mean('time')
#average over time and level (vertical)
timelev_ave=GETM['temp'].mean(['time','level'])
timelev_ave.plot()
#zonal average (vertical)
timelon_ave=GETM['temp'].mean(['time','lonc']).isel(level=4)
timelon_ave.plot()
Explanation: Calculate average along a dimension
End of explanation
ds = GETM[['temp']].mean(dim=['time', 'level'])  # average over both time and level
ds.to_netcdf('../data/temp_avg_level_time.nc')
print(type( GETM[['temp']]) )
print(type( GETM['temp']) )
Explanation: A dataset can easily be saved to a netCDF file
End of explanation
# bathy = GETM
# bedtemp=GETM
# plt.scatter( , ,marker='.')
Explanation: Exercise
Extract the bathymetry
Extract the seabed temperature isel(level=0)
Produce a scatter plot of depth vs. seabed temperature
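One possible solution sketch for the commented template above (variable names follow the hints in the template):
bathy = GETM['bathymetry']
bedtemp = GETM['temp'].isel(time=0, level=0)
plt.scatter(bathy.values.ravel(), bedtemp.values.ravel(), marker='.')
plt.xlabel('depth (m)'); plt.ylabel('seabed temperature (degC)')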
End of explanation |
5,590 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lesson 41
Step1: The webdriver.Chrome() command launches an interactive browser using selenium. All commands referencing this new browser variable will use this specific browser window. There are a variety of browsers available, they just need to be installed.
Step2: We can then use commands like browser.get() to load urls
Step3: We can now use CSS selectors to interact with specific elements on this page; you can find them in Chrome using the Developer tools
Step4: Once selected, we can run the .click() method on that element to interact with the selected web element.
Step5: We can also select multiple elements on a webpage and store them in a list using the find_elements_by method instead of find_element_by.
Step6: There are many ways to use the webdriver to find elements, and they are summarized here.
There are also ways to interact with a webpage, and input our own text as necessary. Let's use the search box of the Automate website as the example
Step7: We can use the .send_keys method to pass text to this element, and then submit via the form type of that element.
Step8: We can also use browser functions themselves programmatically, including functions like 'back', 'forward', 'refresh', etc.
Step9: We can also quit the entire browser with .quit().
Step10: We can also use Python scripts to read the content via the selenium browser.
Step11: All web elements have a .text() variable that stores all text of the element.
Step12: Similarly, we can select all text on the webpage via the <html> or <body> element, which should include all text on the page, since <html> is the first element on any webpage. | Python Code:
from selenium import webdriver
Explanation: Lesson 41:
Controlling the Browser with the Selenium Module
We download and parse webpages using beautifulsoup module, but some pages require logins and other dependencies to function properly.
We can simulate these effects using selenium to launch a programmatic browser.
The course uses Firefox, but we will be using Google Chrome.
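Note that driving Chrome also requires the separate chromedriver executable; if it is not on your PATH you can point selenium at it explicitly (the path below is only an example):
# browser = webdriver.Chrome(executable_path='/path/to/chromedriver')  # example path; adjust for your machine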
End of explanation
browser = webdriver.Chrome()
Explanation: The webdriver.Chrome() command launches an interactive browser using selenium. All commands referencing this new browser variable will use this specific browser window. There are a variety of browsers available, they just need to be installed.
End of explanation
browser.get('https://automatetheboringstuff.com')
Explanation: We can then use commands like browser.get() to load urls:
End of explanation
elem = browser.find_element_by_css_selector('#post-6 > div > ol:nth-child(20) > li:nth-child(1) > a')
Explanation: We can now use CSS selectors to interact with specific elements on this page; you can find them in Chrome using the Developer tools:
This web element is then referenced using the browser method find_element_by_css_selector. There are a variety of other find methods, which can reference names, classes, tags, etc.
End of explanation
# Clicks on that first link
elem.click()
Explanation: Once selected, we can run the .click() method on that element to interact with the selected web element.
End of explanation
# Find all paragraph elements on the current page in the browser (Introduction)
elems = browser.find_elements_by_css_selector('p')
elems
Explanation: We can also select multiple elements on a webpage and store them in a list using the find_elements_by method instead of find_element_by.
End of explanation
# Element by class name example
searchElem = browser.find_element_by_class_name('search-field')
Explanation: There are many ways to use the webdriver to find elements, and they are summarized here.
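A few of the other locator methods, shown against hypothetical elements (these selectors are illustrative and are not taken from the Automate site):
# elem = browser.find_element_by_id('content')              # by id attribute
# elem = browser.find_element_by_link_text('Read Online')   # by exact link text
# elems = browser.find_elements_by_tag_name('h1')           # all matching tags
# elems = browser.find_elements_by_xpath('//p')             # all matches via XPath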
There are also ways to interact with a webpage, and input our own text as necessary. Let's use the search box of the Automate website as the example:
End of explanation
# Send the text 'zophie' to the Search element
searchElem.send_keys('zophie')
# Submit that element, if available
searchElem.submit()
#Clear any existing text sent to the element using 'Ctrl+A' and a new string; otherwise `send_keys` will just append it
# from selenium.webdriver.common.keys import Keys
# searchElem.send_keys(Keys.CONTROL, 'a'); searchElem.send_keys('zophie')
Explanation: We can use the .send_keys method to pass text to this element, and then submit via the form type of that element.
End of explanation
browser.back()
browser.forward()
browser.refresh()
Explanation: We can also use browser functions themselves programmatically, including functions like 'back', 'forward', 'refresh', etc.
End of explanation
browser.quit()
Explanation: We can also quit the entire browser with .quit().
End of explanation
browser = webdriver.Chrome()
browser.get('https://automatetheboringstuff.com')
elem = browser.find_element_by_css_selector('#post-6 > div > p:nth-child(6)')
Explanation: We can also use Python scripts to read the content via the selenium browser.
End of explanation
elem.text
Explanation: All web elements have a .text attribute that stores all text of the element.
End of explanation
bodyElem = browser.find_element_by_css_selector('html')
print(bodyElem.text)
Explanation: Similarly, we can select all text on the webpage via the <html> or <body> element, which should include all text on the page, since <html> is the first element on any webpage.
End of explanation |
5,591 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Continuous training with TFX and Google Cloud AI Platform
Learning Objectives
Use the TFX CLI to build a TFX pipeline.
Deploy a new TFX pipeline version with tuning enabled to a hosted AI Platform Pipelines instance.
Create and monitor a TFX pipeline run using the TFX CLI and KFP UI.
In this lab, you use utilize the following tools and services to deploy and run a TFX pipeline on Google Cloud that automates the development and deployment of a TensorFlow 2.3 WideDeep Classifer to predict forest cover from cartographic data
Step1: Validate lab package version installation
Step2: Note
Step3: Note
Step4: The config.py module configures the default values for the environment specific settings and the default values for the pipeline runtime parameters.
The default values can be overwritten at compile time by providing the updated values in a set of environment variables. You will set custom environment variables later on this lab.
The pipeline.py module contains the TFX DSL defining the workflow implemented by the pipeline.
The preprocessing.py module implements the data preprocessing logic the Transform component.
The model.py module implements the training, tuning, and model building logic for the Trainer and Tuner components.
The runner.py module configures and executes KubeflowDagRunner. At compile time, the KubeflowDagRunner.run() method converts the TFX DSL into the pipeline package in the argo format for execution on your hosted AI Platform Pipelines instance.
The features.py module contains feature definitions common across preprocessing.py and model.py.
Exercise
Step5: Set the compile time settings to first create a pipeline version without hyperparameter tuning
Default pipeline runtime environment values are configured in the pipeline folder config.py. You will set their values directly below
Step6: Compile your pipeline code
You can build and upload the pipeline to the AI Platform Pipelines instance in one step, using the tfx pipeline create command. The tfx pipeline create goes through the following steps
Step7: Note
Step8: Hint
Step9: To view the status of existing pipeline runs
Step10: To retrieve the status of a given run | Python Code:
import yaml
# Set `PATH` to include the directory containing TFX CLI and skaffold.
PATH=%env PATH
%env PATH=/home/jupyter/.local/bin:{PATH}
Explanation: Continuous training with TFX and Google Cloud AI Platform
Learning Objectives
Use the TFX CLI to build a TFX pipeline.
Deploy a new TFX pipeline version with tuning enabled to a hosted AI Platform Pipelines instance.
Create and monitor a TFX pipeline run using the TFX CLI and KFP UI.
In this lab, you will utilize the following tools and services to deploy and run a TFX pipeline on Google Cloud that automates the development and deployment of a TensorFlow 2.3 WideDeep Classifier to predict forest cover from cartographic data:
The TFX CLI utility to build and deploy a TFX pipeline.
A hosted AI Platform Pipeline instance (Kubeflow Pipelines) for TFX pipeline orchestration.
Dataflow jobs for scalable, distributed data processing for TFX components.
An AI Platform Training job for model training and flock management for parallel tuning trials.
AI Platform Prediction as a model server destination for blessed pipeline model versions.
CloudTuner and AI Platform Vizier for advanced model hyperparameter tuning using the Vizier algorithm.
You will then create and monitor pipeline runs using the TFX CLI as well as the KFP UI.
Setup
Update lab environment PATH to include TFX CLI and skaffold
End of explanation
!python -c "import tensorflow; print('TF version: {}'.format(tensorflow.__version__))"
!python -c "import tfx; print('TFX version: {}'.format(tfx.__version__))"
!python -c "import kfp; print('KFP version: {}'.format(kfp.__version__))"
Explanation: Validate lab package version installation
End of explanation
%pip install --upgrade --user tensorflow==2.3.2
%pip install --upgrade --user tfx==0.25.0
%pip install --upgrade --user kfp==1.0.4
Explanation: Note: this lab was built and tested with the following package versions:
TF version: 2.3.2
TFX version: 0.25.0
KFP version: 1.4.0
(Optional) If running the above command results in different package versions or you receive an import error, upgrade to the correct versions by running the cell below:
End of explanation
%cd pipeline
!ls -la
Explanation: Note: you may need to restart the kernel to pick up the correct package versions.
Validate creation of AI Platform Pipelines cluster
Navigate to AI Platform Pipelines page in the Google Cloud Console.
Note you may have already deployed an AI Pipelines instance during the Setup for the lab series. If so, you can proceed using that instance. If not:
1. Create or select an existing Kubernetes cluster (GKE) and deploy AI Platform. Make sure to select "Allow access to the following Cloud APIs https://www.googleapis.com/auth/cloud-platform" to allow for programmatic access to your pipeline by the Kubeflow SDK for the rest of the lab. Also, provide an App instance name such as "tfx" or "mlops".
Validate the deployment of your AI Platform Pipelines instance in the console before proceeding.
Review: example TFX pipeline design pattern for Google Cloud
The pipeline source code can be found in the pipeline folder.
End of explanation
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
GCP_REGION = 'us-central1'
ARTIFACT_STORE_URI = f'gs://{PROJECT_ID}-kubeflowpipelines-default'
CUSTOM_SERVICE_ACCOUNT = f'tfx-tuner-caip-service-account@{PROJECT_ID}.iam.gserviceaccount.com'
#TODO: Set your environment resource settings here for ENDPOINT.
ENDPOINT = ''
# Set your resource settings as environment variables. These override the default values in pipeline/config.py.
%env GCP_REGION={GCP_REGION}
%env ARTIFACT_STORE_URI={ARTIFACT_STORE_URI}
%env CUSTOM_SERVICE_ACCOUNT={CUSTOM_SERVICE_ACCOUNT}
%env PROJECT_ID={PROJECT_ID}
Explanation: The config.py module configures the default values for the environment specific settings and the default values for the pipeline runtime parameters.
The default values can be overwritten at compile time by providing the updated values in a set of environment variables. You will set custom environment variables later in this lab.
The pipeline.py module contains the TFX DSL defining the workflow implemented by the pipeline.
The preprocessing.py module implements the data preprocessing logic for the Transform component.
The model.py module implements the training, tuning, and model building logic for the Trainer and Tuner components.
The runner.py module configures and executes KubeflowDagRunner. At compile time, the KubeflowDagRunner.run() method converts the TFX DSL into the pipeline package in the argo format for execution on your hosted AI Platform Pipelines instance.
The features.py module contains feature definitions common across preprocessing.py and model.py.
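For orientation, the typical KubeflowDagRunner pattern looks roughly like the sketch below; this is not the lab's actual runner.py, and the pipeline-building function and image name are placeholders:
# from tfx.orchestration.kubeflow import kubeflow_dag_runner
# config = kubeflow_dag_runner.KubeflowDagRunnerConfig(
#     kubeflow_metadata_config=kubeflow_dag_runner.get_default_kubeflow_metadata_config(),
#     tfx_image='gcr.io/my-project/my-pipeline-image')   # placeholder image name
# kubeflow_dag_runner.KubeflowDagRunner(config=config).run(
#     create_pipeline(...))                              # hypothetical pipeline factory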
Exercise: build your pipeline with the TFX CLI
You will use TFX CLI to compile and deploy the pipeline. As explained in the previous section, the environment specific settings can be provided through a set of environment variables and embedded into the pipeline package at compile time.
Configure your environment resource settings
Update the below constants with the settings reflecting your lab environment.
GCP_REGION - the compute region for AI Platform Training, Vizier, and Prediction.
ARTIFACT_STORE - An existing GCS bucket. You can use any bucket or use the GCS bucket created during installation of AI Platform Pipelines. The default bucket name will contain the kubeflowpipelines- prefix.
CUSTOM_SERVICE_ACCOUNT - In the gcp console Click on the Navigation Menu. Navigate to IAM & Admin, then to Service Accounts and use the service account starting with prefix - 'tfx-tuner-caip-service-account'. This enables CloudTuner and the Google Cloud AI Platform extensions Tuner component to work together and allows for distributed and parallel tuning backed by AI Platform Vizier's hyperparameter search algorithm.
ENDPOINT - set the ENDPOINT constant to the endpoint to your AI Platform Pipelines instance. The endpoint to the AI Platform Pipelines instance can be found on the AI Platform Pipelines page in the Google Cloud Console. Open the SETTINGS for your instance and use the value of the host variable in the Connect to this Kubeflow Pipelines instance from a Python client via Kubeflow Pipelines SKD section of the SETTINGS window. The format is '...pipelines.googleusercontent.com'.
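As an optional sanity check that is not part of the lab instructions, the KFP SDK can connect to the same endpoint once it is set:
# import kfp
# kfp.Client(host=ENDPOINT).list_pipelines()   # should return without authentication errors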
End of explanation
PIPELINE_NAME = 'tfx_covertype_continuous_training'
MODEL_NAME = 'tfx_covertype_classifier'
DATA_ROOT_URI = 'gs://workshop-datasets/covertype/small'
CUSTOM_TFX_IMAGE = 'gcr.io/{}/{}'.format(PROJECT_ID, PIPELINE_NAME)
RUNTIME_VERSION = '2.3'
PYTHON_VERSION = '3.7'
USE_KFP_SA=False
ENABLE_TUNING=True
%env PIPELINE_NAME={PIPELINE_NAME}
%env MODEL_NAME={MODEL_NAME}
%env DATA_ROOT_URI={DATA_ROOT_URI}
%env KUBEFLOW_TFX_IMAGE={CUSTOM_TFX_IMAGE}
%env RUNTIME_VERSION={RUNTIME_VERSION}
%env PYTHON_VERSION={PYTHON_VERSION}
%env USE_KFP_SA={USE_KFP_SA}
%env ENABLE_TUNING={ENABLE_TUNING}
Explanation: Set the compile time settings to first create a pipeline version without hyperparameter tuning
Default pipeline runtime environment values are configured in the pipeline folder config.py. You will set their values directly below:
PIPELINE_NAME - the pipeline's globally unique name. For each pipeline update, each pipeline version uploaded to KFP will be reflected on the Pipelines tab in the Pipeline name > Version name dropdown in the format PIPELINE_NAME_datetime.now().
MODEL_NAME - the pipeline's unique model output name for AI Platform Prediction. For multiple pipeline runs, each pushed blessed model will create a new version with the format 'v{}'.format(int(time.time())).
DATA_ROOT_URI - the URI for the raw lab dataset gs://workshop-datasets/covertype/small.
CUSTOM_TFX_IMAGE - the image name of your pipeline container build by skaffold and published by Cloud Build to Cloud Container Registry in the format 'gcr.io/{}/{}'.format(PROJECT_ID, PIPELINE_NAME).
RUNTIME_VERSION - the TensorFlow runtime version. This lab was built and tested using TensorFlow 2.3.
PYTHON_VERSION - the Python runtime version. This lab was built and tested using Python 3.7.
USE_KFP_SA - The pipeline can run using a security context of the GKE default node pool's service account or the service account defined in the user-gcp-sa secret of the Kubernetes namespace hosting Kubeflow Pipelines. If you want to use the user-gcp-sa service account you change the value of USE_KFP_SA to True. Note that the default AI Platform Pipelines configuration does not define the user-gcp-sa secret.
ENABLE_TUNING - boolean value indicating whether to add the Tuner component to the pipeline or use hyperparameter defaults. See the model.py and pipeline.py files for details on how this changes the pipeline topology across pipeline versions.
End of explanation
!tfx pipeline compile --engine kubeflow --pipeline_path runner.py
Explanation: Compile your pipeline code
You can build and upload the pipeline to the AI Platform Pipelines instance in one step, using the tfx pipeline create command. The tfx pipeline create goes through the following steps:
- (Optional) Builds the custom image to that provides a runtime environment for TFX components or uses the latest image of the installed TFX version
- Compiles the pipeline code into a pipeline package
- Uploads the pipeline package via the ENDPOINT to the hosted AI Platform instance.
As you debug the pipeline DSL, you may prefer to first use the tfx pipeline compile command, which only executes the compilation step. After the DSL compiles successfully you can use tfx pipeline create to go through all steps.
End of explanation
# TODO: Your code here to use the TFX CLI to deploy your pipeline image to AI Platform Pipelines.
!tfx pipeline create \
--pipeline_path=runner.py \
--endpoint={ENDPOINT} \
--build_target_image={CUSTOM_TFX_IMAGE}
Explanation: Note: you should see a {PIPELINE_NAME}.tar.gz file appear in your current pipeline directory.
Exercise: deploy your pipeline container to AI Platform Pipelines with TFX CLI
After the pipeline code compiles without any errors you can use the tfx pipeline create command to perform the full build and deploy the pipeline. You will deploy your compiled pipeline container hosted on Google Container Registry e.g. gcr.io/[PROJECT_ID]/tfx_covertype_continuous_training to run on AI Platform Pipelines with the TFX CLI.
End of explanation
# TODO: your code here to trigger a pipeline run with the TFX CLI
!tfx run create --pipeline_name={PIPELINE_NAME} --endpoint={ENDPOINT}
Explanation: Hint: review the TFX CLI documentation on the "pipeline group" to create your pipeline. You will need to specify the --pipeline_path to point at the pipeline DSL and runner defined locally in runner.py, --endpoint, and --build_target_image arguments using the environment variables specified above.
Note: you should see a build.yaml file in your pipeline folder created by skaffold. The TFX CLI compile triggers a custom container to be built with skaffold using the instructions in the Dockerfile.
If you need to redeploy the pipeline you can first delete the previous version using tfx pipeline delete or you can update the pipeline in-place using tfx pipeline update.
To delete the pipeline:
tfx pipeline delete --pipeline_name {PIPELINE_NAME} --endpoint {ENDPOINT}
To update the pipeline:
tfx pipeline update --pipeline_path runner.py --endpoint {ENDPOINT}
Create and monitor a pipeline run with the TFX CLI
After the pipeline has been deployed, you can trigger and monitor pipeline runs using TFX CLI.
Hint: review the TFX CLI documentation on the "run group".
End of explanation
!tfx run list --pipeline_name {PIPELINE_NAME} --endpoint {ENDPOINT}
Explanation: To view the status of existing pipeline runs:
End of explanation
RUN_ID='[YOUR RUN ID]'
!tfx run status --pipeline_name {PIPELINE_NAME} --run_id {RUN_ID} --endpoint {ENDPOINT}
Explanation: To retrieve the status of a given run:
End of explanation |
5,592 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2 samples permutation test on source data with spatio-temporal clustering
Tests if the source space data are significantly different between
2 groups of subjects (simulated here using one subject's data).
The multiple comparisons problem is addressed with a cluster-level
permutation test across space and time.
Step1: Set parameters
Step2: Compute statistic
To use an algorithm optimized for spatio-temporal clustering, we
just pass the spatial connectivity matrix (instead of spatio-temporal)
Step3: Visualize the clusters | Python Code:
# Authors: Alexandre Gramfort <[email protected]>
# Eric Larson <[email protected]>
# License: BSD (3-clause)
import os.path as op
import numpy as np
from scipy import stats as stats
import mne
from mne import spatial_src_connectivity
from mne.stats import spatio_temporal_cluster_test, summarize_clusters_stc
from mne.datasets import sample
print(__doc__)
Explanation: 2 samples permutation test on source data with spatio-temporal clustering
Tests if the source space data are significantly different between
2 groups of subjects (simulated here using one subject's data).
The multiple comparisons problem is addressed with a cluster-level
permutation test across space and time.
End of explanation
data_path = sample.data_path()
stc_fname = data_path + '/MEG/sample/sample_audvis-meg-lh.stc'
subjects_dir = data_path + '/subjects'
src_fname = subjects_dir + '/fsaverage/bem/fsaverage-ico-5-src.fif'
# Load stc to in common cortical space (fsaverage)
stc = mne.read_source_estimate(stc_fname)
stc.resample(50, npad='auto')
# Read the source space we are morphing to
src = mne.read_source_spaces(src_fname)
fsave_vertices = [s['vertno'] for s in src]
morph = mne.compute_source_morph(stc, 'sample', 'fsaverage',
spacing=fsave_vertices, smooth=20,
subjects_dir=subjects_dir)
stc = morph.apply(stc)
n_vertices_fsave, n_times = stc.data.shape
tstep = stc.tstep
n_subjects1, n_subjects2 = 7, 9
print('Simulating data for %d and %d subjects.' % (n_subjects1, n_subjects2))
# Let's make sure our results replicate, so set the seed.
np.random.seed(0)
X1 = np.random.randn(n_vertices_fsave, n_times, n_subjects1) * 10
X2 = np.random.randn(n_vertices_fsave, n_times, n_subjects2) * 10
X1[:, :, :] += stc.data[:, :, np.newaxis]
# make the activity bigger for the second set of subjects
X2[:, :, :] += 3 * stc.data[:, :, np.newaxis]
# We want to compare the overall activity levels for each subject
X1 = np.abs(X1) # only magnitude
X2 = np.abs(X2) # only magnitude
Explanation: Set parameters
End of explanation
print('Computing connectivity.')
connectivity = spatial_src_connectivity(src)
# Note that X needs to be a list of multi-dimensional array of shape
# samples (subjects_k) x time x space, so we permute dimensions
X1 = np.transpose(X1, [2, 1, 0])
X2 = np.transpose(X2, [2, 1, 0])
X = [X1, X2]
# Now let's actually do the clustering. This can take a long time...
# Here we set the threshold quite high to reduce computation.
p_threshold = 0.0001
f_threshold = stats.distributions.f.ppf(1. - p_threshold / 2.,
n_subjects1 - 1, n_subjects2 - 1)
print('Clustering.')
T_obs, clusters, cluster_p_values, H0 = clu =\
spatio_temporal_cluster_test(X, connectivity=connectivity, n_jobs=1,
threshold=f_threshold, buffer_size=None)
# Now select the clusters that are sig. at p < 0.05 (note that this value
# is multiple-comparisons corrected).
good_cluster_inds = np.where(cluster_p_values < 0.05)[0]
Explanation: Compute statistic
To use an algorithm optimized for spatio-temporal clustering, we
just pass the spatial connectivity matrix (instead of spatio-temporal)
End of explanation
print('Visualizing clusters.')
# Now let's build a convenient representation of each cluster, where each
# cluster becomes a "time point" in the SourceEstimate
fsave_vertices = [np.arange(10242), np.arange(10242)]
stc_all_cluster_vis = summarize_clusters_stc(clu, tstep=tstep,
vertices=fsave_vertices,
subject='fsaverage')
# Let's actually plot the first "time point" in the SourceEstimate, which
# shows all the clusters, weighted by duration
subjects_dir = op.join(data_path, 'subjects')
# blue blobs are for condition A != condition B
brain = stc_all_cluster_vis.plot('fsaverage', hemi='both',
views='lateral', subjects_dir=subjects_dir,
time_label='Duration significant (ms)',
clim=dict(kind='value', lims=[0, 1, 40]))
Explanation: Visualize the clusters
End of explanation |
5,593 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
机器学习纳米学位
非监督学习
项目 3
Step1: 分析数据
在这部分,你将开始分析数据,通过可视化和代码来理解每一个特征和其他特征的联系。你会看到关于数据集的统计描述,考虑每一个属性的相关性,然后从数据集中选择若干个样本数据点,你将在整个项目中一直跟踪研究这几个数据点。
运行下面的代码单元给出数据集的一个统计描述。注意这个数据集包含了6个重要的产品类型:'Fresh', 'Milk', 'Grocery', 'Frozen', 'Detergents_Paper'和 'Delicatessen'。想一下这里每一个类型代表你会购买什么样的产品。
Step2: 练习
Step3: 问题 1
在你看来你选择的这三个样本点分别代表什么类型的企业(客户)?对每一个你选择的样本客户,通过它在每一种产品类型上的花费与数据集的统计描述进行比较,给出你做上述判断的理由。
提示: 企业的类型包括超市、咖啡馆、零售商以及其他。注意不要使用具体企业的名字,比如说在描述一个餐饮业客户时,你不能使用麦当劳。
回答
Step4: 问题 2
你尝试预测哪一个特征?预测的得分是多少?这个特征对于区分用户的消费习惯来说必要吗?
提示: 决定系数(coefficient of determination), R^2,结果在0到1之间,1表示完美拟合,一个负的R^2表示模型不能够拟合数据。
回答
Step5: 问题 3
这里是否存在一些特征他们彼此之间存在一定程度相关性?如果有请列出。这个结果是验证了还是否认了你尝试预测的那个特征的相关性?这些特征的数据是怎么分布的?
提示: 这些数据是正态分布(normally distributed)的吗?大多数的数据点分布在哪?
回答
Step6: 观察
在使用了一个自然对数的缩放之后,数据的各个特征会显得更加的正态分布。对于任意的你以前发现有相关关系的特征对,观察他们的相关关系是否还是存在的(并且尝试观察,他们的相关关系相比原来是变强了还是变弱了)。
运行下面的代码以观察样本数据在进行了自然对数转换之后如何改变了。
Step7: 练习
Step8: 问题 4
请列出所有在多于一个特征下被看作是异常的数据点。这些点应该被从数据集中移除吗?为什么?把你认为需要移除的数据点全部加入到到outliers变量中。
回答
Step9: 问题 5
数据的第一个和第二个主成分 总共 表示了多少的方差? 前四个主成分呢?使用上面提供的可视化图像,讨论从用户花费的角度来看前四个主要成分的消费行为最能代表哪种类型的客户并给出你做出判断的理由。
提示: 某一特定维度上的正向增长对应正权特征的增长和负权特征的减少。增长和减少的速率和每个特征的权重相关。参考资料(英文)。
回答
Step10: 练习:降维
当使用主成分分析的时候,一个主要的目的是减少数据的维度,这实际上降低了问题的复杂度。当然降维也是需要一定代价的:更少的维度能够表示的数据中的总方差更少。因为这个,累计解释方差比(cumulative explained variance ratio)对于我们确定这个问题需要多少维度非常重要。另外,如果大部分的方差都能够通过两个或者是三个维度进行表示的话,降维之后的数据能够被可视化。
在下面的代码单元中,你将实现下面的功能:
- 将good_data用两个维度的PCA进行拟合,并将结果存储到pca中去。
- 使用pca.transform将good_data进行转换,并将结果存储在reduced_data中。
- 使用pca.transform将log_samples进行转换,并将结果存储在pca_samples中。
Step11: 观察
运行以下代码观察当仅仅使用两个维度进行PCA转换后,这个对数样本数据将怎样变化。观察这里的结果与一个使用六个维度的PCA转换相比较时,前两维的数值是保持不变的。
Step12: 可视化一个双标图(Biplot)
双标图是一个散点图,每个数据点的位置由它所在主成分的分数确定。坐标系是主成分(这里是Dimension 1 和 Dimension 2)。此外,双标图还展示出初始特征在主成分上的投影。一个双标图可以帮助我们理解降维后的数据,发现主成分和初始特征之间的关系。
运行下面的代码来创建一个降维后数据的双标图。
Step13: 观察
一旦我们有了原始特征的投影(红色箭头),就能更加容易的理解散点图每个数据点的相对位置。
在这个双标图中,哪些初始特征与第一个主成分有强关联?哪些初始特征与第二个主成分相关联?你观察到的是否与之前得到的 pca_results 图相符?
回答:与第一个主成分有强关联的有:Milk,Grocery,Detergents_Paper。与第二个主成分有关联的有:Fresh,Frozen。这个与之前得到的pca_results 图相符。
聚类
在这个部分,你讲选择使用K-Means聚类算法或者是高斯混合模型聚类算法以发现数据中隐藏的客户分类。然后,你将从簇中恢复一些特定的关键数据点,通过将它们转换回原始的维度和规模,从而理解他们的含义。
问题 6
使用K-Means聚类算法的优点是什么?使用高斯混合模型聚类算法的优点是什么?基于你现在对客户数据的观察结果,你选用了这两个算法中的哪一个,为什么?
回答
Step14: 问题 7
汇报你尝试的不同的聚类数对应的轮廓系数。在这些当中哪一个聚类的数目能够得到最佳的轮廓系数?
回答
Step15: 练习
Step16: 问题 8
考虑上面的代表性数据点在每一个产品类型的花费总数,你认为这些客户分类代表了哪类客户?为什么?需要参考在项目最开始得到的统计值来给出理由。
提示: 一个被分到'Cluster X'的客户最好被用 'Segment X'中的特征集来标识的企业类型表示。
Step17: 回答
Step18: 回答
Step19: 回答:
得分基本相同,但是不使用cluster特征的得分略微低一些。
在数据有Label的情况下,即可以用于监督学习的情况。可以先用非监督学习来类聚数据,然后对比非监督学习与监督学习的结果进行比较,相互印证。增加对于模型,以及分析结果的信心。
另外,将使用非监督学习的预测出来的10个新用户的label,组成新的样本数据(feature,label),来训练出新的监督学习模型。(这样做的目的何在???)
【CodeReview20170725】
根据上一次review中给出的note,这里我们已经通过历史数据得到了非监督学习的结果,那么现在我们迎来了10个新的用户,实际上我们可以将这10个samples的类别预测出来,这样我们就得到了新用户的label,所以这里说,在这个情景中,我们可以使用非监督学习的成果,即得到的label,来实现监督学习的功能:利用得到的(feature,label)来学到一个监督学习模型。
可视化内在的分布
在这个项目的开始,我们讨论了从数据集中移除'Channel'和'Region'特征,这样在分析过程中我们就会着重分析用户产品类别。通过重新引入Channel这个特征到数据集中,并施加和原来数据集同样的PCA变换的时候我们将能够发现数据集产生一个有趣的结构。
运行下面的代码单元以查看哪一个数据点在降维的空间中被标记为'HoReCa' (旅馆/餐馆/咖啡厅)或者'Retail'。另外,你将发现样本点在图中被圈了出来,用以显示他们的标签。 | Python Code:
%%time
# 引入这个项目需要的库
import numpy as np
import pandas as pd
import visuals as vs
from IPython.display import display # 使得我们可以对DataFrame使用display()函数
# 设置以内联的形式显示matplotlib绘制的图片(在notebook中显示更美观)
%matplotlib inline
# 载入整个客户数据集
try:
data = pd.read_csv("customers.csv")
data.drop(['Region', 'Channel'], axis = 1, inplace = True)
print "Wholesale customers dataset has {} samples with {} features each.".format(*data.shape)
except:
print "Dataset could not be loaded. Is the dataset missing?"
Explanation: 机器学习纳米学位
非监督学习
项目 3: 创建用户分类
欢迎来到机器学习工程师纳米学位的第三个项目!在这个notebook文件中,有些模板代码已经提供给你,但你还需要实现更多的功能来完成这个项目。除非有明确要求,你无须修改任何已给出的代码。以'练习'开始的标题表示接下来的代码部分中有你必须要实现的功能。每一部分都会有详细的指导,需要实现的部分也会在注释中以'TODO'标出。请仔细阅读所有的提示!
除了实现代码外,你还必须回答一些与项目和你的实现有关的问题。每一个需要你回答的问题都会以'问题 X'为标题。请仔细阅读每个问题,并且在问题后的'回答'文字框中写出完整的答案。我们将根据你对问题的回答和撰写代码所实现的功能来对你提交的项目进行评分。
提示:Code 和 Markdown 区域可通过 Shift + Enter 快捷键运行。此外,Markdown可以通过双击进入编辑模式。
开始
在这个项目中,你将分析一个数据集的内在结构,这个数据集包含很多客户真对不同类型产品的年度采购额(用金额表示)。这个项目的任务之一是如何最好地描述一个批发商不同种类顾客之间的差异。这样做将能够使得批发商能够更好的组织他们的物流服务以满足每个客户的需求。
这个项目的数据集能够在UCI机器学习信息库中找到.因为这个项目的目的,分析将不会包括'Channel'和'Region'这两个特征——重点集中在6个记录的客户购买的产品类别上。
运行下面的的代码单元以载入整个客户数据集和一些这个项目需要的Python库。如果你的数据集载入成功,你将看到后面输出数据集的大小。
End of explanation
# 显示数据集的一个描述
display(data.describe())
Explanation: 分析数据
在这部分,你将开始分析数据,通过可视化和代码来理解每一个特征和其他特征的联系。你会看到关于数据集的统计描述,考虑每一个属性的相关性,然后从数据集中选择若干个样本数据点,你将在整个项目中一直跟踪研究这几个数据点。
运行下面的代码单元给出数据集的一个统计描述。注意这个数据集包含了6个重要的产品类型:'Fresh', 'Milk', 'Grocery', 'Frozen', 'Detergents_Paper'和 'Delicatessen'。想一下这里每一个类型代表你会购买什么样的产品。
End of explanation
# TODO:从数据集中选择三个你希望抽样的数据点的索引
indices = [90, 200, 265]
# 为选择的样本建立一个DataFrame
samples = pd.DataFrame(data.loc[indices], columns = data.keys()).reset_index(drop = True)
print "Chosen samples of wholesale customers dataset:"
display(samples)
Explanation: 练习: 选择样本
为了对客户有一个更好的了解,并且了解代表他们的数据将会在这个分析过程中如何变换。最好是选择几个样本数据点,并且更为详细地分析它们。在下面的代码单元中,选择三个索引加入到索引列表indices中,这三个索引代表你要追踪的客户。我们建议你不断尝试,直到找到三个明显不同的客户。
End of explanation
%%time
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
# TODO:为DataFrame创建一个副本,用'drop'函数丢弃一些指定的特征
grocery = data['Grocery']
data_without_grocery = data.drop(['Grocery'],1)
# display(data.head(1))
# display(fresh.head())
# display(new_data.head())
# TODO:使用给定的特征作为目标,将数据分割成训练集和测试集
X_train, X_test, y_train, y_test = train_test_split(data_without_grocery, grocery, test_size=0.25, random_state=20)
# display(X_train.head(1))
# display(X_test.head(1))
# display(y_train.head(1))
# display(y_test.head(1))
# # TODO:创建一个DecisionTreeRegressor(决策树回归器)并在训练集上训练它
regressor = DecisionTreeRegressor(max_depth=15, random_state=20)
regressor.fit(X_train, y_train)
# # TODO:输出在测试集上的预测得分
score = regressor.score(X_test, y_test)
print score
Explanation: 问题 1
在你看来你选择的这三个样本点分别代表什么类型的企业(客户)?对每一个你选择的样本客户,通过它在每一种产品类型上的花费与数据集的统计描述进行比较,给出你做上述判断的理由。
提示: 企业的类型包括超市、咖啡馆、零售商以及其他。注意不要使用具体企业的名字,比如说在描述一个餐饮业客户时,你不能使用麦当劳。
回答:
1. 客户1:代表小区附近的生鲜店,小卖部(便利店)的客户。总体采购量较小,还以Fresh和Frozen为主。
- Fresh和Frozen需求量较大,都超过了总体的75%;
- 而其他种类的商品的消费量却较小,Milk,Grocery,Detergents_Paper,Delicatessen都不到总体的25%。说明,客户家中人数少,客户经常性买生鲜类产品。
2. 客户2:代表超市的客户。各种商品的采购量都较大,非常适合去超市批量采购,会更加的优惠。
- Milk,Grocery,Frozen,Detergents_Paper,Delicatessen需求量非常大,超过了总体的75%;
- 只有Delicatessen低于50%。
3. 客户3:代表超市的客户。类似客户2,各种商品的采购量都较大,非常适合去超市批量采购,会更加的优惠。
- 这个客户Fresh、Milk、Grocery,Frozen,Delicatessen需求量都较大,都超过了总体的75%。
- Detergents_Paper刚刚超过50%。
练习: 特征相关性
一个有趣的想法是,考虑这六个类别中的一个(或者多个)产品类别,是否对于理解客户的购买行为具有实际的相关性。也就是说,当用户购买了一定数量的某一类产品,我们是否能够确定他们必然会成比例地购买另一种类的产品。有一个简单的方法可以检测相关性:我们用移除了某一个特征之后的数据集来构建一个监督学习(回归)模型,然后用这个模型去预测那个被移除的特征,再对这个预测结果进行评分,来判断由余下5个特征构建的模型对移除掉的那一个特征预测能力的好坏。
在下面的代码单元中,你需要实现以下的功能:
- 使用DataFrame.drop函数移除数据集中你选择的不需要的特征,并将移除后的结果赋值给new_data。
- 使用sklearn.model_selection.train_test_split将数据集分割成训练集和测试集。
- 使用移除的特征作为你的目标标签。设置test_size为0.25并设置一个random_state。
- 导入一个DecisionTreeRegressor(决策树回归器),设置一个random_state,然后用训练集训练它。
- 使用回归器的score函数输出模型在测试集上的预测得分。
End of explanation
# 对于数据中的每一对特征构造一个散布矩阵
pd.plotting.scatter_matrix(data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
%pdb
# 数据关联度表格呈现
import seaborn
seaborn.heatmap(data.corr(), annot=True)
# 添加这些代码以后,导致所有曲线图背景颜色显示异常(???????????)
Explanation: 问题 2
你尝试预测哪一个特征?预测的得分是多少?这个特征对于区分用户的消费习惯来说必要吗?
提示: 决定系数(coefficient of determination), R^2,结果在0到1之间,1表示完美拟合,一个负的R^2表示模型不能够拟合数据。
回答:
预测Grocery,预测得分约为0.728,这个特征对于区分用户是没有必要的。因为这个特征可以被其他特征很好的预测,说明与其他特征的相关性好,可以被其他的特征替代,所以没有存在的必要性。
可视化特征分布
为了能够对这个数据集有一个更好的理解,我们可以对数据集中的每一个产品特征构建一个散布矩阵(scatter matrix)。如果你发现你在上面尝试预测的特征对于区分一个特定的用户来说是必须的,那么这个特征和其它的特征可能不会在下面的散射矩阵中显示任何关系。相反的,如果你认为这个特征对于识别一个特定的客户是没有作用的,那么通过散布矩阵可以看出在这个数据特征和其它特征中有关联性。运行下面的代码以创建一个散布矩阵。
End of explanation
%%time
from scipy import stats
display(data.head())
# TODO:使用自然对数缩放数据
# log_data = data.apply(lambda x: np.log(x))
log_data = np.log(data)
display(log_data.head())
# TODO:使用自然对数缩放样本数据
log_samples = np.log(samples)
display(log_samples.head())
# 为每一对新产生的特征制作一个散射矩阵
pd.plotting.scatter_matrix(log_data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
Explanation: 问题 3
这里是否存在一些特征他们彼此之间存在一定程度相关性?如果有请列出。这个结果是验证了还是否认了你尝试预测的那个特征的相关性?这些特征的数据是怎么分布的?
提示: 这些数据是正态分布(normally distributed)的吗?大多数的数据点分布在哪?
回答:
- Grocery与Milk也存在比较好的正相关,大部分数据分布在图形左下角斜向上的锥形区域,符合正态分布。
- Milk与Detergents_Paper存在非常好的正相关,同上,也是大部分数据分布在图形左下角斜向上的锥形区域,符合正态分布。
- Grocery与Detergents_Paper存在非常好的正相关,大部分数据分布在图形左下角斜向上的长条形区域,符合正态分布。
这个结果与之前的预测的结果(Grocery相关性好)相似。
数据预处理
在这个部分,你将通过在数据上做一个合适的缩放,并检测异常点(你可以选择性移除)将数据预处理成一个更好的代表客户的形式。预处理数据是保证你在分析中能够得到显著且有意义的结果的重要环节。
练习: 特征缩放
如果数据不是正态分布的,尤其是数据的平均数和中位数相差很大的时候(表示数据非常歪斜)。这时候通常用一个非线性的缩放是很合适的,(英文原文) — 尤其是对于金融数据。一种实现这个缩放的方法是使用Box-Cox 变换,这个方法能够计算出能够最佳减小数据倾斜的指数变换方法。一个比较简单的并且在大多数情况下都适用的方法是使用自然对数。
在下面的代码单元中,你将需要实现以下功能:
- 使用np.log函数在数据 data 上做一个对数缩放,然后将它的副本(不改变原始data的值)赋值给log_data。
- 使用np.log函数在样本数据 samples 上做一个对数缩放,然后将它的副本赋值给log_samples。
End of explanation
# 展示经过对数变换后的样本数据
display(log_samples)
Explanation: 观察
在使用了一个自然对数的缩放之后,数据的各个特征会显得更加的正态分布。对于任意的你以前发现有相关关系的特征对,观察他们的相关关系是否还是存在的(并且尝试观察,他们的相关关系相比原来是变强了还是变弱了)。
运行下面的代码以观察样本数据在进行了自然对数转换之后如何改变了。
End of explanation
import numpy as np
# 用于记录所有的异常值的索引
allOutliers = []
# 对于每一个特征,找到值异常高或者是异常低的数据点
for feature in log_data.keys():
# TODO:计算给定特征的Q1(数据的25th分位点)
Q1 = np.percentile(log_data[feature], 25)
# TODO:计算给定特征的Q3(数据的75th分位点)
Q3 = np.percentile(log_data[feature], 75)
# TODO:使用四分位范围计算异常阶(1.5倍的四分位距)
step = 1.5*(Q3-Q1)
# 显示异常点
print "Data points considered outliers for the feature '{}':".format(feature)
display(log_data[~((log_data[feature] >= Q1 - step) & (log_data[feature] <= Q3 + step))])
outlier = log_data[~((log_data[feature] >= Q1 - step) & (log_data[feature] <= Q3 + step))]
allOutliers.extend(outlier.index)
# 找出重复的索引值
sortedAllOutliers = np.sort(allOutliers)
duplicatedOutliers = []
preElement = -1
for element in sortedAllOutliers.flat:
if element == preElement and element not in duplicatedOutliers:
duplicatedOutliers.append(element);
preElement = element
print "sortedAllOutliers:{0}".format(sortedAllOutliers)
print "duplicatedOutliers: {0}".format(duplicatedOutliers)
# 可选:选择你希望移除的数据点的索引
outliers = np.unique(sortedAllOutliers)
print "outliers: {}".format(outliers)
# 如果选择了的话,移除异常点
good_data = log_data.drop(log_data.index[outliers]).reset_index(drop = True)
Explanation: 练习: 异常值检测
对于任何的分析,在数据预处理的过程中检测数据中的异常值都是非常重要的一步。异常值的出现会使得把这些值考虑进去后结果出现倾斜。这里有很多关于怎样定义什么是数据集中的异常值的经验法则。这里我们将使用Tukey的定义异常值的方法:一个异常阶(outlier step)被定义成1.5倍的四分位距(interquartile range,IQR)。一个数据点如果某个特征包含在该特征的IQR之外的特征,那么该数据点被认定为异常点。
在下面的代码单元中,你需要完成下面的功能:
- 将指定特征的25th分位点的值分配给Q1。使用np.percentile来完成这个功能。
- 将指定特征的75th分位点的值分配给Q3。同样的,使用np.percentile来完成这个功能。
- 将指定特征的异常阶的计算结果赋值给step.
- 选择性地通过将索引添加到outliers列表中,以移除异常值。
注意: 如果你选择移除异常值,请保证你选择的样本点不在这些移除的点当中!
一旦你完成了这些功能,数据集将存储在good_data中。
End of explanation
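As a quick illustration of the 1.5 IQR rule described above (the numbers here are made up): if a feature's 25th percentile is 6.0 and its 75th percentile is 9.0 on the log scale, the step is 1.5 * (9.0 - 6.0) = 4.5, so values below 1.5 or above 13.5 would be flagged. A minimal one-feature sketch of the same check ('Fresh' is just an example column from this dataset):
# Illustrative check of the Tukey rule for a single feature
q1, q3 = np.percentile(log_data['Fresh'], [25, 75])
step = 1.5 * (q3 - q1)
flagged = log_data[(log_data['Fresh'] < q1 - step) | (log_data['Fresh'] > q3 + step)]
print "Fresh outliers:", len(flagged)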
%%time
from sklearn.decomposition import PCA
# TODO:通过在good data上使用PCA,将其转换成和当前特征数一样多的维度
pca = PCA(n_components=good_data.shape[1], random_state=20)
pca.fit(good_data)
# TODO:使用上面的PCA拟合将变换施加在log_samples上
pca_samples = pca.transform(log_samples)
# 生成PCA的结果图
pca_results = vs.pca_results(good_data, pca)
Explanation: 问题 4
请列出所有在多于一个特征下被看作是异常的数据点。这些点应该被从数据集中移除吗?为什么?把你认为需要移除的数据点全部加入到到outliers变量中。
回答:多于一个特征下被看作是异常的数据点为:65, 66, 75, 128, 154。
虽然这些数据点有多个特性,说明信息比较丰富,不是单一的数据异常。但是,这些数据做增加的信息所带来的积极的影响,远小于其使类聚中心偏移所带来的消极影响。所以,这些数据点应该被移除。
特征转换
在这个部分中你将使用主成分分析(PCA)来分析批发商客户数据的内在结构。由于使用PCA在一个数据集上会计算出最大化方差的维度,我们将找出哪一个特征组合能够最好的描绘客户。
练习: 主成分分析(PCA)
既然数据被缩放到一个更加正态分布的范围中并且我们也移除了需要移除的异常点,我们现在就能够在good_data上使用PCA算法以发现数据的哪一个维度能够最大化特征的方差。除了找到这些维度,PCA也将报告每一个维度的解释方差比(explained variance ratio)--这个数据有多少方差能够用这个单独的维度来解释。注意PCA的一个组成部分(维度)能够被看做这个空间中的一个新的“特征”,但是它是原来数据中的特征构成的。
在下面的代码单元中,你将要实现下面的功能:
- 导入sklearn.decomposition.PCA并且将good_data用PCA并且使用6个维度进行拟合后的结果保存到pca中。
- 使用pca.transform将log_samples进行转换,并将结果存储到pca_samples中。
End of explanation
# 展示经过PCA转换的sample log-data
display(samples)
display(pd.DataFrame(np.round(pca_samples, 4), columns = pca_results.index.values))
Explanation: 问题 5
数据的第一个和第二个主成分 总共 表示了多少的方差? 前四个主成分呢?使用上面提供的可视化图像,讨论从用户花费的角度来看前四个主要成分的消费行为最能代表哪种类型的客户并给出你做出判断的理由。
提示: 某一特定维度上的正向增长对应正权特征的增长和负权特征的减少。增长和减少的速率和每个特征的权重相关。参考资料(英文)。
回答:前两个主成分表示了0.7252,即%72.52的方差。前四个主成分表示了0.9279,即%92.79的方差。
- Dimension 1代表Milk、Grocery和Detergents三者相关性购买的客户(买其中一种的商品的同时,会购买相应数量的另外2种商品)。
- Dimension 2代表Fresh、Frozen和Delicatessen三者相关性购买的客户(买其中一种的商品的同时,会购买相应数量的另外2种商品)。
- Dimension 3代表Delicatessen和Fresh二者负相关性购买的客户(购买Delicatessen时,会购买更少Fresh的客户)。
- Dimension 4代表Delicatessen和Frozen二者负相关性购买的客户(购买Delicatessen时,会购买更少Frozen的客户)。
正的权值越大,说明购买相关性越大,所以,主成分分析的结果,才会放到一起,合并到同一个维度里面来。
【CodeReview20170723】
在这边,我们使用了主成分分析法,将原来的6个特征通过数学变换,变换为了另外6个特征。对方差的计算,是为了让我们能够选择方差较大的特征以保留它们。每个新特征,实际上都是由原来的特征通过某种带权重的组合得到的,权重就是图中柱状图柱高度。考虑权重的绝对值,权重绝对值越大,说明权重对应的原特征对这个新特征带来的影响越大,反之亦反。权重若为正,则说明他们有正相关性;负值则说明它们是负相关性。A和B有正相关性可以理解为,买更多的A意味着有很大可能买更多的B;负相关性意味着买更多的A意味着有很大可能买更少的B。
观察
运行下面的代码,查看经过对数转换的样本数据在进行一个6个维度的主成分分析(PCA)之后会如何改变。观察样本数据的前四个维度的数值。考虑这和你初始对样本点的解释是否一致。
End of explanation
%%time
from sklearn.decomposition import PCA
# TODO:通过在good data上进行PCA,将其转换成两个维度
pca = PCA(n_components = 2, random_state=20)
pca.fit(good_data)
# TODO:使用上面训练的PCA将good data进行转换
reduced_data = pca.transform(good_data)
# TODO:使用上面训练的PCA将log_samples进行转换
pca_samples = pca.transform(log_samples)
# 为降维后的数据创建一个DataFrame
reduced_data = pd.DataFrame(reduced_data, columns = ['Dimension 1', 'Dimension 2'])
Explanation: 练习:降维
当使用主成分分析的时候,一个主要的目的是减少数据的维度,这实际上降低了问题的复杂度。当然降维也是需要一定代价的:更少的维度能够表示的数据中的总方差更少。因为这个,累计解释方差比(cumulative explained variance ratio)对于我们确定这个问题需要多少维度非常重要。另外,如果大部分的方差都能够通过两个或者是三个维度进行表示的话,降维之后的数据能够被可视化。
在下面的代码单元中,你将实现下面的功能:
- 将good_data用两个维度的PCA进行拟合,并将结果存储到pca中去。
- 使用pca.transform将good_data进行转换,并将结果存储在reduced_data中。
- 使用pca.transform将log_samples进行转换,并将结果存储在pca_samples中。
End of explanation
# 展示经过两个维度的PCA转换之后的样本log-data
display(pd.DataFrame(np.round(pca_samples, 4), columns = ['Dimension 1', 'Dimension 2']))
Explanation: 观察
运行以下代码观察当仅仅使用两个维度进行PCA转换后,这个对数样本数据将怎样变化。观察这里的结果与一个使用六个维度的PCA转换相比较时,前两维的数值是保持不变的。
End of explanation
# Create a biplot
vs.biplot(good_data, reduced_data, pca)
Explanation: 可视化一个双标图(Biplot)
双标图是一个散点图,每个数据点的位置由它所在主成分的分数确定。坐标系是主成分(这里是Dimension 1 和 Dimension 2)。此外,双标图还展示出初始特征在主成分上的投影。一个双标图可以帮助我们理解降维后的数据,发现主成分和初始特征之间的关系。
运行下面的代码来创建一个降维后数据的双标图。
End of explanation
%%time
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score
# TODO:在降维后的数据上使用你选择的聚类算法
clusterer = GaussianMixture(n_components=2, random_state=20).fit(reduced_data)
# TODO:预测每一个点的簇
preds = clusterer.predict(reduced_data)
# TODO:找到聚类中心
centers = clusterer.means_
print "centers: \n{0}".format(centers)
# TODO:预测在每一个转换后的样本点的类
sample_preds = clusterer.predict(pca_samples)
# TODO:计算选择的类别的平均轮廓系数(mean silhouette coefficient)
score = silhouette_score(reduced_data, preds)
print "silhouette_score: {0}".format(score)
Explanation: 观察
一旦我们有了原始特征的投影(红色箭头),就能更加容易的理解散点图每个数据点的相对位置。
在这个双标图中,哪些初始特征与第一个主成分有强关联?哪些初始特征与第二个主成分相关联?你观察到的是否与之前得到的 pca_results 图相符?
回答:与第一个主成分有强关联的有:Milk,Grocery,Detergents_Paper。与第二个主成分有关联的有:Fresh,Frozen。这个与之前得到的pca_results 图相符。
聚类
在这个部分,你讲选择使用K-Means聚类算法或者是高斯混合模型聚类算法以发现数据中隐藏的客户分类。然后,你将从簇中恢复一些特定的关键数据点,通过将它们转换回原始的维度和规模,从而理解他们的含义。
问题 6
使用K-Means聚类算法的优点是什么?使用高斯混合模型聚类算法的优点是什么?基于你现在对客户数据的观察结果,你选用了这两个算法中的哪一个,为什么?
回答:
K-Means类聚算法的优点是:
1. 算法快速、简单;
2. 对大数据集有较高的效率,并且是可伸缩性的;
3. 时间复杂度近于线性,而且适合挖掘大规模数据集。
高斯混合模型聚类算法的优点是:
1. 模型假设更加复杂准确,即假定服从正态分布(同时,也带来缺点是,计算复杂度增加)
2. 运行结果是属于某个分类的概率,描述更精确严谨。
根据数据分布比较区域比较大,分类的界限并没有那么的严格和清晰,GMM这种分类结果是概率的方法,更加的精确严谨。我打算选用高斯混合模型聚类算法。
练习: 创建聚类
针对不同情况,有些问题你需要的聚类数目可能是已知的。但是在聚类数目不作为一个先验知道的情况下,我们并不能够保证某个聚类的数目对这个数据是最优的,因为我们对于数据的结构(如果存在的话)是不清楚的。但是,我们可以通过计算每一个簇中点的轮廓系数来衡量聚类的质量。数据点的轮廓系数衡量了它与分配给他的簇的相似度,这个值范围在-1(不相似)到1(相似)。平均轮廓系数为我们提供了一种简单地度量聚类质量的方法。
在接下来的代码单元中,你将实现下列功能:
- 在reduced_data上使用一个聚类算法,并将结果赋值到clusterer,需要设置 random_state 使得结果可以复现。
- 使用clusterer.predict预测reduced_data中的每一个点的簇,并将结果赋值到preds。
- 使用算法的某个属性值找到聚类中心,并将它们赋值到centers。
- 预测pca_samples中的每一个样本点的类别并将结果赋值到sample_preds。
- 导入sklearn.metrics.silhouette_score包并计算reduced_data相对于preds的轮廓系数。
- 将轮廓系数赋值给score并输出结果。
End of explanation
print pca_samples
# 从已有的实现中展示聚类的结果
vs.cluster_results(reduced_data, preds, centers, pca_samples)
Explanation: 问题 7
汇报你尝试的不同的聚类数对应的轮廓系数。在这些当中哪一个聚类的数目能够得到最佳的轮廓系数?
回答:
- n_components=2: 0.447
- n_components=3: 0.362
- n_components=4: 0.304
当类聚数目为2时,能够得到最佳的轮廓系数。
聚类可视化
一旦你选好了通过上面的评价函数得到的算法的最佳聚类数目,你就能够通过使用下面的代码块可视化来得到的结果。作为实验,你可以试着调整你的聚类算法的聚类的数量来看一下不同的可视化结果。但是你提供的最终的可视化图像必须和你选择的最优聚类数目一致。
End of explanation
log_centers = pca.inverse_transform(centers)
print log_centers
# TODO:反向转换中心点
log_centers = pca.inverse_transform(centers)
# TODO:对中心点做指数转换
true_centers = np.exp(log_centers)
# 显示真实的中心点
segments = ['Segment {}'.format(i) for i in range(0,len(centers))]
true_centers = pd.DataFrame(np.round(true_centers), columns = data.keys())
true_centers.index = segments
display(true_centers)
Explanation: 练习: 数据恢复
上面的可视化图像中提供的每一个聚类都有一个中心点。这些中心(或者叫平均点)并不是数据中真实存在的点,但是是所有预测在这个簇中的数据点的平均。对于创建客户分类的问题,一个簇的中心对应于那个分类的平均用户。因为这个数据现在进行了降维并缩放到一定的范围,我们可以通过施加一个反向的转换恢复这个点所代表的用户的花费。
在下面的代码单元中,你将实现下列的功能:
- 使用pca.inverse_transform将centers 反向转换,并将结果存储在log_centers中。
- 使用np.log的反函数np.exp反向转换log_centers并将结果存储到true_centers中。
End of explanation
import seaborn as sns
import matplotlib.pyplot as plt
# add the true centers as rows to our original data
newdata = data.append(true_centers)
# show the percentiles of the centers
ctr_pcts = 100. * newdata.rank(axis=0, pct=True).loc[['Segment 0', 'Segment 1']].round(decimals=3)
print ctr_pcts
# visualize percentiles with heatmap
sns.heatmap(ctr_pcts, annot=True, cmap='Greens', fmt='.1f', linewidth=.1, square=True, cbar=False)
plt.xticks(rotation=45, ha='center')
plt.yticks(rotation=0)
plt.title('Percentile ranks of\nsegment centers');
Explanation: 问题 8
考虑上面的代表性数据点在每一个产品类型的花费总数,你认为这些客户分类代表了哪类客户?为什么?需要参考在项目最开始得到的统计值来给出理由。
提示: 一个被分到'Cluster X'的客户最好被用 'Segment X'中的特征集来标识的企业类型表示。
End of explanation
# 显示预测结果
display(samples.head())
for i, pred in enumerate(sample_preds):
print "Sample point", i, "predicted to be in Cluster", pred
Explanation: 回答:
- Segment 0,应该是属于零售店客户,各种类别的商品采购量都比较小,Fresh和Frozen购买量刚刚超过最大值的50%,其他类别的商品采购量都在最大值的25%~50%。
- Segment 1,应该是属于超市客户,Milk,Grocery,Frozen,购买量都相对较大,购买量达到了最大值的75%,Delicatessen购买量达到了最大值的50%~75%,只有Fresh,Frozen购买量在25%~50%。
问题 9
对于每一个样本点 * 问题 8 中的哪一个分类能够最好的表示它?你之前对样本的预测和现在的结果相符吗?*
运行下面的代码单元以找到每一个样本点被预测到哪一个簇中去。
End of explanation
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
# 读取包含聚类结果的数据
cluster_data = pd.read_csv("cluster.csv")
y = cluster_data['Region']
X = cluster_data.drop(['Region'], axis = 1)
# 划分训练集测试集
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.8, random_state=24)
clf = RandomForestClassifier(random_state=24)
clf.fit(X_train, y_train)
print "使用cluster特征的得分", clf.score(X_test, y_test)
# 移除cluster特征
X_train = X_train.copy()
X_train.drop(['cluster'], axis=1, inplace=True)
X_test = X_test.copy()
X_test.drop(['cluster'], axis=1, inplace=True)
clf.fit(X_train, y_train)
print "不使用cluster特征的得分", clf.score(X_test, y_test)
Explanation: 回答:
- Sample point 0: 零售店(便利店)客户
- Sample point 1: 超市客户
- Sample point 2: 超市客户
与之前的预测基本相符。
结论
在最后一部分中,你要学习如何使用已经被分类的数据。首先,你要考虑不同组的客户客户分类,针对不同的派送策略受到的影响会有什么不同。其次,你要考虑到,每一个客户都被打上了标签(客户属于哪一个分类)可以给客户数据提供一个多一个特征。最后,你会把客户分类与一个数据中的隐藏变量做比较,看一下这个分类是否辨识了特定的关系。
问题 10
在对他们的服务或者是产品做细微的改变的时候,公司经常会使用A/B tests以确定这些改变会对客户产生积极作用还是消极作用。这个批发商希望考虑将他的派送服务从每周5天变为每周3天,但是他只会对他客户当中对此有积极反馈的客户采用。这个批发商应该如何利用客户分类来知道哪些客户对它的这个派送策略的改变有积极的反馈,如果有的话?你需要给出在这个情形下A/B 测试具体的实现方法,以及最终得出结论的依据是什么?
提示: 我们能假设这个改变对所有的客户影响都一致吗?我们怎样才能够确定它对于哪个类型的客户影响最大?
回答:
- 首先假设,这个服务改变对于用户的影响是均匀的,即假设一个已经确定是积极的服务改变对于某单个客户的影响有可能是积极的,也有可能的消极的,但是对于总体的影响应该是恒定的。我们不能假设对于客户的影响是一致的。
- 确认结果的评价标准是:客户投诉率变高(或变低),客户退订减少(或增多),新客户增加(或减少)为好(或坏)。
- 过程:假设有2种分类的客户,将各种分类的客户分别随机用户分成100组,共200组。然后,从各个分类中分别抽出3组来实验。然后,收集实验结果。
- 根据结果评价标准,确定这个服务改变对哪些分类的客户有积极的作用,然后对这些分类的客户采用新的服务模式。
问题 11
通过聚类技术,我们能够将原有的没有标记的数据集中的附加结构分析出来。因为每一个客户都有一个最佳的划分(取决于你选择使用的聚类算法),我们可以把用户分类作为数据的一个工程特征。假设批发商最近迎来十位新顾客,并且他已经为每位顾客每个产品类别年度采购额进行了预估。进行了这些估算之后,批发商该如何运用它的预估和非监督学习的结果来对这十个新的客户进行更好的预测?
提示:在下面的代码单元中,我们提供了一个已经做好聚类的数据(聚类结果为数据中的cluster属性),我们将在这个数据集上做一个小实验。尝试运行下面的代码看看我们尝试预测‘Region’的时候,如果存在聚类特征'cluster'与不存在相比对最终的得分会有什么影响?这对你有什么启发?
End of explanation
# 根据‘Channel‘数据显示聚类的结果
vs.channel_results(reduced_data, outliers, pca_samples)
Explanation: 回答:
得分基本相同,但是不使用cluster特征的得分略微低一些。
在数据有Label的情况下,即可以用于监督学习的情况。可以先用非监督学习来类聚数据,然后对比非监督学习与监督学习的结果进行比较,相互印证。增加对于模型,以及分析结果的信心。
另外,将使用非监督学习的预测出来的10个新用户的label,组成新的样本数据(feature,label),来训练出新的监督学习模型。(这样做的目的何在???)
【CodeReview20170725】
根据上一次review中给出的note,这里我们已经通过历史数据得到了非监督学习的结果,那么现在我们迎来了10个新的用户,实际上我们可以将这10个samples的类别预测出来,这样我们就得到了新用户的label,所以这里说,在这个情景中,我们可以使用非监督学习的成果,即得到的label,来实现监督学习的功能:利用得到的(feature,label)来学到一个监督学习模型。
可视化内在的分布
在这个项目的开始,我们讨论了从数据集中移除'Channel'和'Region'特征,这样在分析过程中我们就会着重分析用户产品类别。通过重新引入Channel这个特征到数据集中,并施加和原来数据集同样的PCA变换的时候我们将能够发现数据集产生一个有趣的结构。
运行下面的代码单元以查看哪一个数据点在降维的空间中被标记为'HoReCa' (旅馆/餐馆/咖啡厅)或者'Retail'。另外,你将发现样本点在图中被圈了出来,用以显示他们的标签。
End of explanation |
5,594 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
P3 - Data Wrangling with MongoDB
OpenStreetMap Project Data Wrangling with MongoDB
Gangadhara Naga Sai<a name="top"></a>
Data used -<a href=https
Step1: <hr>
Over-abbreviated Names<a name="abbr"></a>
Since the most of data being manually uploaded, there are lot of abbreviations in street names,locality names.
Where they are filtered and replaced with full names.
Step2: <hr>
Merging Both cities<a name="combine_cities"></a>
These two maps are selected since ,right now i am living at Hoodi,Bengaluru. And one day i want do my masters in japan in robotics,so i had selected locality of University of tokyo, Bunkyo.I really wanted to explore differences between the regions.
I need to add a tag named "city" so i can differentiate them from the database.
<hr>
2. Data Overview<a name="data_overview"></a>
This section contains basic statistics about the dataset and the MongoDB queries used to gather them.
File Sizes
Step3: Number of documents
Step4: Bunkyo
Step5: Number of way nodes.
Step6: Total Number of contributor.
Step7: <hr>
3. Additional Data Exploration using MongoDB<a name="exploration"></a>
I am going to use the pipeline function to retrive data from the database
Step8: The top contributors for hoodi are no where near since bunkyo being a more compact region than hoodi ,there are more places to contribute.
<hr>
To get the top Amenities in Hoodi and Bunkyo
I will be showing the pipeline that will go in the above mentioned "Pipleline" function
Step9: As compared to hoodi ,bunkyo have few atms,And parking can be commonly found in bunkyo locality
<hr>
popular places of worship
Step10: As expected japan is popular with buddism,
but india being a secular country it will be having most of the reglious places of worship,where hinduism being majority
<hr>
popular restaurants
Step11: {u'Count'
Step12: Burger seems very popular among japanese in fast foods,i was expecting ramen to be more popular
, but in hoodi pizza is really common,being a metropolitan city.
<hr>
ATM's near locality
Step13: There are quite a few ATM in Bunkyo as compared to hoodi
<hr>
Martial arts or Dojo Center near locality
Step14: I wanted to learn martial arts ,
In japan is known for its akido and other ninjistsu martial arts , where i can find some in bunkyo
Where as in hoodi,india Kalaripayattu Martial Arts are one of the ancient arts that ever existed.
<hr>
most popular shops.
Step15: most popular supermarkets | Python Code:
def isEnglish(string):
try:
string.encode('ascii')
except UnicodeEncodeError:
return False
else:
return True
Explanation: P3 - Data Wrangling with MongoDB
OpenStreetMap Project Data Wrangling with MongoDB
Gangadhara Naga Sai<a name="top"></a>
Data used -<a href=https://mapzen.com/metro-extracts/> MapZen Weekly OpenStreetMaps Metro Extracts</a>
Map Areas:
These two maps are selected since ,right now i am living at Hoodi,Bengaluru. And my dream is to do my masters in japan in robotics,so i had selected locality of University of tokyo, Bunkyo.I really wanted to explore differences between the regions.
<a href=https://mapzen.com/data/metro-extracts/your-extracts/fdd7c4ef0518> Bonkyu,Tokyo,Japan. </a>
<a href=https://mapzen.com/data/metro-extracts/your-extracts/c1f2842408ac> Hoodi,Bengaluru,india </a>
Working Code :
- <a href=https://mapzen.com/data/metro-extracts/your-extracts/fdd7c4ef0518> Bonkyu,Tokyo,Japan. </a>
<hr>
Problems Encountered in the Map
Filtering Different Language names
Over-abbreviated Names
Merging both cities
Data Overview
Additional Data Exploration using MongoDB
Conclusion
<hr>
<h2><a name="problems"></a> **1. Problems Encountered**</h2>
Some of names were in different Languages so ,i had to filter out them and select english names for both maps Hoodi and Bunkyo
Street names with different types of abbreviations. (i.e. 'Clark Ave SE' or 'Eubank Northeast Ste E-18')
Two cities have to be accessed from one database
Names in Different Language<a name="Language"></a>
Different regions have different languages, and we find that some of the names are in a different language; these are filtered so that only English names are kept.
The isEnglish helper checks whether all of the characters belong to the ASCII set.
End of explanation
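A quick sanity check of the helper above (note that it expects unicode strings, as produced by the XML parser; the examples below are illustrative):
print isEnglish(u'Tokyo')   # True  -- all characters are ASCII
print isEnglish(u'東京')     # False -- non-ASCII characters raise UnicodeEncodeError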
#the city below can be hoodi or bunkyo
for st_type, ways in city_types.iteritems():
for name in ways:
better_name = update_name(name, mapping)
if name != better_name:
print name, "=>", better_name
#few examples
Bunkyo:
Meidai Jr. High Sch. => Meidai Junior High School
St. Mary's Cathedral => Saint Mary's Cathedral
Shinryukei brdg. E. => Shinryukei Bridge East
Iidabashi Sta. E. => Iidabashi Station East
...
Hoodi:
St. Thomas School => Saint Thomas School
Opp. Jagrithi Apartment => Opposite Jagrithi Apartment
...
Explanation: <hr>
Over-abbreviated Names<a name="abbr"></a>
Since most of the data is uploaded manually, there are a lot of abbreviations in street and locality names.
These are filtered out and replaced with full names.
End of explanation
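The mapping dictionary and the update_name helper used above are not shown in this excerpt; a minimal sketch of what they might look like (the entries below are assumptions based on the example output):
mapping = {"St.": "Saint", "Sta.": "Station", "Sch.": "School",
           "Jr.": "Junior", "Opp.": "Opposite", "brdg.": "Bridge", "E.": "East"}

def update_name(name, mapping):
    # Replace each abbreviated token with its expanded form, leave other words untouched
    return " ".join([mapping.get(word, word) for word in name.split()])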
bangalore.osm -40MB
bangalore.osm.json-51MB
tokyo1.osm- 82MB
tokyo1.osm.json-102.351MB
Explanation: <hr>
Merging Both cities<a name="combine_cities"></a>
These two maps were selected because right now I am living in Hoodi, Bengaluru, and one day I want to do my masters in robotics in Japan, so I picked the locality of the University of Tokyo in Bunkyo. I really wanted to explore the differences between the two regions.
I need to add a tag named "city" so I can differentiate them in the database.
<hr>
2. Data Overview<a name="data_overview"></a>
This section contains basic statistics about the dataset and the MongoDB queries used to gather them.
File Sizes
End of explanation
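The "city" tag mentioned above is attached to every shaped document before it is loaded into MongoDB, so both maps can live in one collection; a minimal sketch of the idea (the variable names are illustrative):
# Sketch: tag each shaped document with its source city before insertion
def add_city_tag(documents, city_name):
    for doc in documents:
        doc['city'] = city_name
    return documents

# mongo_db.cities.insert_many(add_city_tag(bunkyo_docs, 'bunkyo'))  # illustrative call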
print "Bunkyo:",mongo_db.cities.find({'city':'bunkyo'}).count()
print "Hoodi:",mongo_db.cities.find({'city':'hoodi'}).count()
Explanation: Number of documents
End of explanation
print "Bunkyo:",mongo_db.cities.find({"type":"node",
'city':'bunkyo'}).count()
print "Hoodi:",mongo_db.cities.find({"type":"node",
'city':'hoodi'}).count()
Bunkyo: 1051170
Hoodi: 548862
Explanation: Bunkyo: 1268292
Hoodi: 667842
Number of node nodes.
End of explanation
print "Bunkyo:",mongo_db.cities.find({'type':'way',
'city':'bunkyo'}).count()
print "Hoodi:",mongo_db.cities.find({'type':'way',
'city':'hoodi'}).count()
Bunkyo: 217122
Hoodi: 118980
Explanation: Number of way nodes.
End of explanation
print "Constributors:", len(mongo_db.cities.distinct("created.user"))
Contributors: 858
Explanation: Total Number of contributor.
End of explanation
def pipeline(city):
p= [{"$match":{"created.user":{"$exists":1},
"city":city}},
{"$group": {"_id": {"City":"$city",
"User":"$created.user"},
"contribution": {"$sum": 1}}},
{"$project": {'_id':0,
"City":"$_id.City",
"User_Name":"$_id.User",
"Total_contribution":"$contribution"}},
{"$sort": {"Total_contribution": -1}},
{"$limit" : 5 }]
return p
result1 =mongo_db["cities"].aggregate(pipeline('bunkyo'))
for each in result1:
print(each)
print("\n")
result2 =mongo_db["cities"].aggregate(pipeline('hoodi'))
for each in result2:
print(each)
Explanation: <hr>
3. Additional Data Exploration using MongoDB<a name="exploration"></a>
I am going to use the pipeline function to retrieve data from the database.
End of explanation
pipeline=[{"$match":{"Additional Information.amenity":{"$exists":1},
"city":city}},
{"$group": {"_id": {"City":"$city",
"Amenity":"$Additional Information.amenity"},
"count": {"$sum": 1}}},
{"$project": {'_id':0,
"City":"$_id.City",
"Amenity":"$_id.Amenity",
"Count":"$count"}},
{"$sort": {"Count": -1}},
{"$limit" : 10 }]
Explanation: The top contributors for Hoodi are nowhere near those for Bunkyo; since Bunkyo is a more compact region than Hoodi, there are more places to contribute.
<hr>
To get the top Amenities in Hoodi and Bunkyo
I will be showing the pipeline that goes into the "Pipeline" function mentioned above
End of explanation
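Each pipeline shown below is only the list of aggregation stages; to actually run one, it is passed to the same aggregate call used earlier, e.g.:
# Illustrative: execute the amenity pipeline defined above
def run_pipeline(p):
    return [doc for doc in mongo_db['cities'].aggregate(p)]

for each in run_pipeline(pipeline):
    print(each)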
p = [{"$match":{"Additional Information.amenity":{"$exists":1},
"Additional Information.amenity":"place_of_worship",
"city":city}},
{"$group":{"_id": {"City":"$city",
"Religion":"$Additional Information.religion"},
"count":{"$sum":1}}},
{"$project":{"_id":0,
"City":"$_id.City",
"Religion":"$_id.Religion",
"Count":"$count"}},
{"$sort":{"Count":-1}},
{"$limit":6}]
Explanation: Compared to Hoodi, Bunkyo has few ATMs, and parking can commonly be found in the Bunkyo locality.
<hr>
popular places of worship
End of explanation
p = [{"$match":{"Additional Information.amenity":{"$exists":1},
"Additional Information.amenity":"restaurant",
"city":city}},
{"$group":{"_id":{"City":"$city",
"Food":"$Additional Information.cuisine"},
"count":{"$sum":1}}},
{"$project":{"_id":0,
"City":"$_id.City",
"Food":"$_id.Food",
"Count":"$count"}},
{"$sort":{"Count":-1}},
{"$limit":6}]
Explanation: As expected, Buddhism is the most popular religion in Japan,
but India, being a secular country, has places of worship for most religions, with Hinduism in the majority.
<hr>
popular restaurants
End of explanation
p = [{"$match":{"Additional Information.amenity":{"$exists":1},
"Additional Information.amenity":"fast_food",
"city":city}},
{"$group":{"_id":{"City":"$city",
"Food":"$Additional Information.cuisine"},
"count":{"$sum":1}}},
{"$project":{"_id":0,
"City":"$_id.City",
"Food":"$_id.Food",
"Count":"$count"}},
{"$sort":{"Count":-1}},
{"$limit":6}]
Explanation: {u'Count': 582, u'City': u'bunkyo'}
{u'Food': u'japanese', u'City': u'bunkyo', u'Count': 192}
{u'Food': u'chinese', u'City': u'bunkyo', u'Count': 126}
{u'Food': u'italian', u'City': u'bunkyo', u'Count': 69}
{u'Food': u'indian', u'City': u'bunkyo', u'Count': 63}
{u'Food': u'sushi', u'City': u'bunkyo', u'Count': 63}
{u'Count': 213, u'City': u'hoodi'}
{u'Food': u'regional', u'City': u'hoodi', u'Count': 75}
{u'Food': u'indian', u'City': u'hoodi', u'Count': 69}
{u'Food': u'chinese', u'City': u'hoodi', u'Count': 36}
{u'Food': u'international', u'City': u'hoodi', u'Count': 24}
{u'Food': u'Andhra', u'City': u'hoodi', u'Count': 21}
Indian-style cuisine seems popular in Bunkyo, which will be convenient if I go to Japan for my higher studies.
<hr>
popular fast food joints
End of explanation
p = [{"$match":{"Additional Information.amenity":{"$exists":1},
"Additional Information.amenity":"atm",
"city":city}},
{"$group":{"_id":{"City":"$city",
"Name":"$Additional Information.name:en"},
"count":{"$sum":1}}},
{"$project":{"_id":0,
"City":"$_id.City",
"Name":"$_id.Name",
"Count":"$count"}},
{"$sort":{"Count":-1}},
{"$limit":4}]
Explanation: Burgers seem very popular with the Japanese for fast food; I was expecting ramen to be more popular.
In Hoodi, however, pizza is really common, it being part of a metropolitan city.
<hr>
ATM's near locality
End of explanation
## Martial arts or Dojo Center near locality
import re
pat = re.compile(r'dojo', re.I)
d=mongo_db.cities.aggregate([{"$match":{ "$or": [ { "Additional Information.name": {'$regex': pat}}
,{"Additional Information.amenity": {'$regex': pat}}]}}
,{"$group":{"_id":{"City":"$city"
, "Sport":"$Additional Information.name"}}}])
for each in d:
print(each)
bunkyo:
{u'_id': {u'City': u'bunkyo', u'Sport': u'Aikikai Hombu Dojo'}}
{u'_id': {u'City': u'bunkyo', u'Sport': u'Kodokan Dojo'}}
hoodi:
{u'_id': {u'City': u'hoodi', u'Sport': u"M S Gurukkal's Kalari Academy"}}
Explanation: There are quite a few ATMs in Bunkyo compared to Hoodi.
<hr>
Martial arts or Dojo Center near locality
End of explanation
p = [{"$match":{"Additional Information.shop":{"$exists":1},
"city":city}},
{"$group":{"_id":{"City":"$city",
"Shop":"$Additional Information.shop"},
"count":{"$sum":1}}},
{"$project": {'_id':0,
"City":"$_id.City",
"Shop":"$_id.Shop",
"Count":"$count"}},
{"$sort":{"Count":-1}},
{"$limit":10}]
{u'Shop': u'convenience', u'City': u'bunkyo', u'Count': 1035}
{u'Shop': u'clothes', u'City': u'bunkyo', u'Count': 282}
{u'Shop': u'books', u'City': u'bunkyo', u'Count': 225}
{u'Shop': u'mobile_phone', u'City': u'bunkyo', u'Count': 186}
{u'Shop': u'confectionery', u'City': u'bunkyo', u'Count': 156}
{u'Shop': u'supermarket', u'City': u'bunkyo', u'Count': 150}
{u'Shop': u'computer', u'City': u'bunkyo', u'Count': 126}
{u'Shop': u'hairdresser', u'City': u'bunkyo', u'Count': 90}
{u'Shop': u'electronics', u'City': u'bunkyo', u'Count': 90}
{u'Shop': u'anime', u'City': u'bunkyo', u'Count': 90}
{u'Shop': u'clothes', u'City': u'hoodi', u'Count': 342}
{u'Shop': u'supermarket', u'City': u'hoodi', u'Count': 129}
{u'Shop': u'bakery', u'City': u'hoodi', u'Count': 120}
{u'Shop': u'shoes', u'City': u'hoodi', u'Count': 72}
{u'Shop': u'furniture', u'City': u'hoodi', u'Count': 72}
{u'Shop': u'sports', u'City': u'hoodi', u'Count': 66}
{u'Shop': u'electronics', u'City': u'hoodi', u'Count': 60}
{u'Shop': u'beauty', u'City': u'hoodi', u'Count': 54}
{u'Shop': u'car', u'City': u'hoodi', u'Count': 36}
{u'Shop': u'convenience', u'City': u'hoodi', u'Count': 36}
The general stores are quite common in both the places
Explanation: I wanted to learn martial arts.
Japan is known for Aikido and other ninjutsu-style martial arts, and I can find some of them in Bunkyo.
Whereas in Hoodi, India, Kalaripayattu is one of the most ancient martial arts that ever existed.
<hr>
most popular shops.
End of explanation
p = [{"$match":{"Additional Information.shop":{"$exists":1},
"city":city,
"Additional Information.shop":"supermarket"}},
{"$group":{"_id":{"City":"$city",
"Supermarket":"$Additional Information.name"},
"count":{"$sum":1}}},
{"$project": {'_id':0,
"City":"$_id.City",
"Supermarket":"$_id.Supermarket",
"Count":"$count"}},
{"$sort":{"Count":-1}},
{"$limit":5}]
{u'Count': 120, u'City': u'bunkyo'}
{u'Count': 9, u'City': u'bunkyo', u'Supermarket': u'Maruetsu'}
{u'Count': 3, u'City': u'bunkyo', u'Supermarket': u"Y's Mart"}
{u'Count': 3, u'City': u'bunkyo', u'Supermarket': u'SainE'}
{u'Count': 3, u'City': u'bunkyo', u'Supermarket': u'DAIMARU Peacock'}
{u'Count': 9, u'City': u'hoodi', u'Supermarket': u'Reliance Fresh'}
{u'Count': 9, u'City': u'hoodi'}
{u'Count': 6, u'City': u'hoodi', u'Supermarket': u"Nilgiri's"}
{u'Count': 3, u'City': u'hoodi', u'Supermarket': u'Royal Mart Supermarket'}
{u'Count': 3, u'City': u'hoodi', u'Supermarket': u'Safal'}
Explanation: most popular supermarkets
End of explanation |
5,595 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Front page heatmap
If we view the front page of each newspaper as an MxN matrix, we can assign each pixel an intensity P based off the font size of the character at that location. Averaging the intensity across all newspapers will generate the "average importance" of a particular location on a front page for all newspapers of that size.
We'll first group newspapers into particular sizes
We'll smooth the output using a density-based heatmap
Step1: First, let's discover how many unique layouts there are for newspapers, and whether there is only one size per newspaper.
Step2: There may be subtle pixel-level differences that may be ignored. Let's round all values to the nearest pixel.
Step3: To the nearest 10 pixels?
Step4: Perhaps it's good enough to grab the 34 newspapers in the 790x1580 categories for now.
What are they?
Step5: We're interested in the average character size at every pixel on these newspapers. Let's go about constructing the matrix for one of them.
Step6: Considering all newspapers of the same aspect ratio
We had to subset the newspapers quite significantly in order to find a bunch with the same exact height and width. Instead, let's find the newspapers with the same aspect ratio, and consider them after scaling.
Step7: The top ones all look awfully close to 50-50. Let's round to the tenth place.
Step8: Over half of the newspapers have roughly a 1
Step9: Heatmap by newspaper
Each newspaper should have its own distinct appearance. Let's try grouping them by newspaper.
Step10: We see the issue here with the title of the newspaper taking up a lot of space.
Step11: Eliminating bylines, newspaper names, etc.
There are lots of repetitive non-article text that we want to eliminate. The heuristic we'll use is whether they appear more than once.
Step12: Average front page, with cleaned text | Python Code:
import pandas as pd
df = pd.read_sql_table('frontpage_texts', 'postgres:///frontpages')
df.head()
Explanation: Front page heatmap
If we view the front page of each newspaper as an MxN matrix, we can assign each pixel an intensity P based off the font size of the character at that location. Averaging the intensity across all newspapers will generate the "average importance" of a particular location on a front page for all newspapers of that size.
We'll first group newspapers into particular sizes
We'll smooth the output using a density-based heatmap
End of explanation
len(df.groupby(['page_width', 'page_height']).indices)
Explanation: First, let's discover how many unique layouts there are for newspapers, and whether there is only one size per newspaper.
End of explanation
df['page_width_round'] = df['page_width'].apply(int)
df['page_height_round'] = df['page_height'].apply(int)
len(df.groupby(['page_width_round', 'page_height_round']).indices)
Explanation: There may be subtle pixel-level differences that may be ignored. Let's round all values to the nearest pixel.
End of explanation
df['page_width_round_10'] = df['page_width'].apply(lambda w: int(w/10)*10)
df['page_height_round_10'] = df['page_height'].apply(lambda w: int(w/10)*10)
print('''Number of unique dimensions: {}
Top dimensions:
{}'''.format(
len(df.groupby(['page_width_round_10', 'page_height_round_10']).slug.nunique()),
df.groupby(['page_width_round_10', 'page_height_round_10']).slug.nunique().sort_values(ascending=False)[:10]
))
Explanation: To the nearest 10 pixels?
End of explanation
newspapers = pd.read_sql_table('newspapers', 'postgres:///frontpages')
WIDTH = 790
HEIGHT = 1580
df_at_size = df[(df.page_width_round_10 == WIDTH) & (df.page_height_round_10 == HEIGHT)]
print('Number of days for which we data for each newspaper')
pd.merge(newspapers, df_at_size.groupby('slug').date.nunique().reset_index(), on='slug').sort_values('date', ascending=False)
Explanation: Perhaps it's good enough to grab the 34 newspapers in the 790x1580 categories for now.
What are they?
End of explanation
one_paper = df_at_size[(df_at_size.slug=='NY_RDC') & (df_at_size.date == df_at_size.date.max())]
print('''The Rochester Democrat and Chronicle has {} entries in the database across {} days.
On the latest day, it has {} text fields.
'''.format(
df_at_size[df_at_size.slug == 'NY_RDC'].shape[0],
df_at_size[df_at_size.slug == 'NY_RDC'].date.nunique(),
one_paper.shape[0]
))
%matplotlib inline
from matplotlib import pyplot as plt
from matplotlib.patches import Rectangle
plt.figure(figsize=(WIDTH/200, HEIGHT/200))
currentAxis = plt.gca()
for i, row in one_paper.iterrows():
left = row.bbox_left / row.page_width
right = row.bbox_right / row.page_width
top = row.bbox_top / row.page_height
bottom = row.bbox_bottom / row.page_height
currentAxis.add_patch(Rectangle((left, bottom), right-left, top-bottom,alpha=0.5))
plt.suptitle('Layout of detected text boxes on a single front page')
plt.show()
import numpy as np
def make_intensity_grid(paper, height=HEIGHT, width=WIDTH, verbose=False):
intensity_grid = np.zeros((height, width))
for i, row in paper.iterrows():
left = int(row.bbox_left)
right = int(row.bbox_right)
top = int(row.bbox_top)
bottom = int(row.bbox_bottom)
if np.count_nonzero(intensity_grid[bottom:top, left:right]) > 0:
if verbose:
print('Warning: overlapping bounding box with', bottom, top, left, right)
intensity_grid[bottom:top, left:right] = row.avg_character_area
return intensity_grid
def plot_intensity(intensity, title, scale=100):
height, width = intensity.shape
fig = plt.figure(figsize=(height/scale, width/scale))
ax = plt.gca()
cmap = plt.get_cmap('YlOrRd')
cmap.set_under(color='white')
fig.suptitle(title)
plt.imshow(intensity, cmap=cmap, extent=[0, width, 0, height], origin='lower', vmin=0.1)
plt.close()
return fig
intensity_grid = make_intensity_grid(one_paper)
plot_intensity(intensity_grid, 'Intensity map of a front page')
intensities = []
for i, ((date, slug), paper) in enumerate(df_at_size.groupby(['date', 'slug'])):
if i % 50 == 0:
print('.', end='')
intensities.append(make_intensity_grid(paper))
avg_intensity = sum(intensities) / len(intensities)
plot_intensity(avg_intensity, 'Average intensity of all {} x {} newspapers'.format(HEIGHT, WIDTH))
Explanation: We're interested in the average character size at every pixel on these newspapers. Let's go about constructing the matrix for one of them.
End of explanation
df['aspect_ratio'] = df['page_width_round_10'] / df['page_height_round_10']
print('''Out of {} newspapers, there are {} unique aspect ratios.
The top ones are:
{}'''.format(
df.slug.nunique(),
df.groupby('slug').aspect_ratio.first().nunique(),
df.groupby('slug').aspect_ratio.first().value_counts().head(5)
))
Explanation: Considering all newspapers of the same aspect ratio
We had to subset the newspapers quite significantly in order to find a bunch with the same exact height and width. Instead, let's find the newspapers with the same aspect ratio, and consider them after scaling.
End of explanation
import math
df['aspect_ratio'] = np.round(df['page_width_round_10'] / df['page_height_round_10'], decimals=1)
print('''This time, there are {} unique aspect ratios.
Top ones:
{}'''.format(
df.groupby('slug').aspect_ratio.first().nunique(),
df.groupby('slug').aspect_ratio.first().value_counts()
))
Explanation: The top ones all look awfully close to 50-50. Let's round to the tenth place.
End of explanation
smallest_width = df[df.aspect_ratio == 0.5].page_width_round_10.min()
smallest_height = df[df.aspect_ratio == 0.5].page_height_round_10.min()
print('''The easiest way would be to scale down to the smallest dimensions.
{} x {}'''.format(
smallest_width,
smallest_height
))
from scipy.misc import imresize
intensities = []
for i, ((date, slug), paper) in enumerate(df[df.aspect_ratio == 0.5].groupby(['date', 'slug'])):
if i % 50 == 0:
print('.', end='')
intensities.append(imresize(make_intensity_grid(paper), (smallest_height, smallest_width)))
count = len(intensities)
avg_intensity = sum([x / count for x in intensities])
plot_intensity(avg_intensity, 'Average front-page of {} newspapers'.format(len(intensities)))
Explanation: Over half of the newspapers have roughly a 1:2 aspect ratio. Let's scale them and push the errors toward the right and bottom margins.
End of explanation
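A side note on the resize call used above: scipy.misc.imresize was deprecated in SciPy 1.0 and removed in later releases, so on a newer environment a rough equivalent can be built with Pillow instead (this sketch does not reproduce imresize's 0-255 rescaling of values):
from PIL import Image

def resize_grid(grid, height, width):
    # Resize a 2-D intensity array; cast to float32, which PIL's 'F' mode supports
    return np.array(Image.fromarray(grid.astype(np.float32)).resize((width, height)))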
newspapers[newspapers.slug == 'NY_NYT'].head()
def newspaper_for_slug(slug):
return newspapers[newspapers.slug == slug].title.iloc[0]
def slug_for_newspaper(title):
return newspapers[newspapers.title == title].slug.iloc[0]
def avg_frontpage_for(newspaper_title='', random=False, paper=df):
if newspaper_title:
slug = slug_for_newspaper(newspaper_title)
if slug not in paper.slug.unique():
return 'No data'
elif random:
slug = paper.sample(1).slug.iloc[0]
newspaper_title = newspaper_for_slug(slug)
else:
raise ArgumentError('Need newspaper_title or random=True')
newspaper = paper[paper.slug == slug]
width = newspaper.iloc[0].page_width_round
height = newspaper.iloc[0].page_height_round
intensities = []
for i, ((date, slug), paper) in enumerate(newspaper.groupby(['date', 'slug'])):
intensities.append(make_intensity_grid(paper, height=height, width=width))
avg_intensity = sum([x / len(intensities) for x in intensities])
return plot_intensity(avg_intensity, 'Average front-page of {} ({} days)'.format(newspaper_title, newspaper.date.nunique()))
avg_frontpage_for('The Denver Post')
avg_frontpage_for('The Washington Post')
Explanation: Heatmap by newspaper
Each newspaper should have its own distinct appearance. Let's try grouping them by newspaper.
End of explanation
avg_frontpage_for(random=True)
avg_frontpage_for(random=True)
df[df.slug == slug_for_newspaper('Marietta Daily Journal')].text.value_counts().head()
Explanation: We see the issue here with the title of the newspaper taking up a lot of space.
End of explanation
text_counts = df.groupby(['slug']).text.value_counts()
duplicate_text = text_counts[text_counts > 1].reset_index(name='count').drop('count', axis=1)
print('Detected {} rows of duplicate text'.format(duplicate_text.shape[0]))
from collections import defaultdict
duplicate_text_dict = defaultdict(set)
_ = duplicate_text.apply(lambda row: duplicate_text_dict[row.slug].add(row.text), axis=1)
df_clean = df[df.apply(lambda row: row.text not in duplicate_text_dict[row.slug], axis=1)]
avg_frontpage_for('The Hamilton Spectator', paper=df_clean)
avg_frontpage_for('The Washington Post', paper=df_clean)
avg_frontpage_for(random=True, paper=df_clean)
Explanation: Eliminating bylines, newspaper names, etc.
There are lots of repetitive non-article text that we want to eliminate. The heuristic we'll use is whether they appear more than once.
End of explanation
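The same more-than-once heuristic can also be expressed directly in pandas, which avoids building the dictionary by hand; a sketch:
# Mark every (slug, text) pair that occurs more than once, then keep only the rest
dup_mask = df.duplicated(subset=['slug', 'text'], keep=False)
df_clean_alt = df[~dup_mask]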
intensities = []
for i, ((date, slug), paper) in enumerate(df_clean[df_clean.aspect_ratio == 0.5].groupby(['date', 'slug'])):
if i % 50 == 0:
print('.', end='')
intensities.append(imresize(make_intensity_grid(paper), (smallest_height, smallest_width)))
count = len(intensities)
avg_intensity = sum([x / count for x in intensities])
plot_intensity(avg_intensity, 'Average front-page of newspapers')
Explanation: Average front page, with cleaned text
End of explanation |
5,596 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Researching a Pairs Trading Strategy
By Delaney Granizo-Mackenzie
Part of the Quantopian Lecture Series
Step1: Explaining the Concept
Step2: Now we generate Y. Remember that Y is supposed to have a deep economic link to X, so the price of Y should vary pretty similarly. We model this by taking X, shifting it up and adding some random noise drawn from a normal distribution.
Step3: Def
Step4: Testing for Cointegration
That's an intuitive definition, but how do we test for this statisitcally? There is a convenient test that lives in statsmodels.tsa.stattools. We should see a very low p-value, as we've artifically created two series that are as cointegrated as physically possible.
Step5: Correlation vs. Cointegration
Correlation and cointegration, while theoretically similar, are not the same. To demonstrate this, we'll show examples of series that are correlated, but not cointegrated, and vice versa. To start let's check the correlation of the series we just generated.
Step6: That's very high, as we would expect. But how would two series that are correlated but not cointegrated look?
Correlation Without Cointegration
A simple example is two series that just diverge.
Step7: Cointegration Without Correlation
A simple example of this case is a normally distributed series and a square wave.
Step8: Sure enough, the correlation is incredibly low, but the p-value shows perfect cointegration.
Def
Step9: Looking for Cointegrated Pairs of Alternative Energy Securities
I'm looking through a set of solar company stocks to see if any of them are cointegrated. We'll start by defining the list of securities we want to look through. Then we'll get the pricing data for each security for the year of 2014.
get_pricing() is a Quantopian method that pulls in stock data, and loads it into a Python Pandas DataPanel object. Available fields are 'price', 'open_price', 'high', 'low', 'volume'. But for this example we will just use 'price' which is the daily closing price of the stock.
Step10: Example of how to get all the prices of all the stocks loaded using get_pricing() above in one pandas dataframe object
Step11: Example of how to get just the prices of a single stock that was loaded using get_pricing() above
Step12: Now we'll run our method on the list and see if any pairs are cointegrated.
Step13: Looks like 'ABGB' and 'FSLR' are cointegrated. Let's take a look at the prices to make sure there's nothing weird going on.
Step14: We'll plot the spread of the two series.
Step15: The absolute spread isn't very useful in statistical terms. It is more helpful to normalize our signal by treating it as a z-score. This way we associate probabilities to the signals we see. If we see a z-score of 1, we know that approximately 84% of all spread values will be smaller.
Step16: Simple Strategy
Step17: We can use the moving averages to compute the z-score of the difference at each given time. This will tell us how extreme the difference is and whether it's a good idea to enter a position at this time. Let's take a look at the z-score now.
Step18: The z-score doesn't mean much out of context, let's plot it next to the prices to get an idea of what it looks like. We'll take the negative of the z-score because the differences were all negative and that's kinda confusing. | Python Code:
import numpy as np
import pandas as pd
import statsmodels
from statsmodels.tsa.stattools import coint
# just set the seed for the random number generator
np.random.seed(107)
import matplotlib.pyplot as plt
Explanation: Researching a Pairs Trading Strategy
By Delaney Granizo-Mackenzie
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
github.com/quantopian/research_public
Notebook released under the Creative Commons Attribution 4.0 License.
Pairs trading is a nice example of a strategy based on mathematical analysis. The principle is as follows. Let's say you have a pair of securities X and Y that have some underlying economic link. An example might be two companies that manufacture the same product, or two companies in one supply chain. We'll start by constructing an artificial example.
End of explanation
X_returns = np.random.normal(0, 1, 100) # Generate the daily returns
# sum them and shift all the prices up into a reasonable range
X = pd.Series(np.cumsum(X_returns), name='X') + 50
X.plot()
Explanation: Explaining the Concept: We start by generating two fake securities.
We model X's daily returns by drawing from a normal distribution. Then we perform a cumulative sum to get the value of X on each day.
End of explanation
some_noise = np.random.normal(0, 1, 100)
Y = X + 5 + some_noise
Y.name = 'Y'
pd.concat([X, Y], axis=1).plot()
Explanation: Now we generate Y. Remember that Y is supposed to have a deep economic link to X, so the price of Y should vary pretty similarly. We model this by taking X, shifting it up and adding some random noise drawn from a normal distribution.
End of explanation
(Y-X).plot() # Plot the spread
plt.axhline((Y-X).mean(), color='red', linestyle='--') # Add the mean
Explanation: Def: Cointegration
We've constructed an example of two cointegrated series. Cointegration is a "different" form of correlation (very loosely speaking). The spread between two cointegrated timeseries will vary around a mean. The expected value of the spread over time must converge to the mean for pairs trading to work. Another way to think about this is that cointegrated timeseries might not necessarily follow a similar path to the same destination, but they both end up at that destination.
We'll plot the spread between the two now so we can see how this looks.
End of explanation
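Another way to see the same property is to test the spread itself for stationarity, for example with an Augmented Dickey-Fuller test; a small p-value suggests the spread is mean-reverting (a sketch):
from statsmodels.tsa.stattools import adfuller
adf_stat, adf_pvalue = adfuller(Y - X)[:2]
print adf_pvalue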
# compute the p-value of the cointegration test
# will inform us as to whether the spread btwn the 2 timeseries is stationary
# around its mean
score, pvalue, _ = coint(X,Y)
print pvalue
Explanation: Testing for Cointegration
That's an intuitive definition, but how do we test for this statisitcally? There is a convenient test that lives in statsmodels.tsa.stattools. We should see a very low p-value, as we've artifically created two series that are as cointegrated as physically possible.
End of explanation
X.corr(Y)
Explanation: Correlation vs. Cointegration
Correlation and cointegration, while theoretically similar, are not the same. To demonstrate this, we'll show examples of series that are correlated, but not cointegrated, and vice versa. To start let's check the correlation of the series we just generated.
End of explanation
X_returns = np.random.normal(1, 1, 100)
Y_returns = np.random.normal(2, 1, 100)
X_diverging = pd.Series(np.cumsum(X_returns), name='X')
Y_diverging = pd.Series(np.cumsum(Y_returns), name='Y')
pd.concat([X_diverging, Y_diverging], axis=1).plot()
print 'Correlation: ' + str(X_diverging.corr(Y_diverging))
score, pvalue, _ = coint(X_diverging,Y_diverging)
print 'Cointegration test p-value: ' + str(pvalue)
Explanation: That's very high, as we would expect. But how would two series that are correlated but not cointegrated look?
Correlation Without Cointegration
A simple example is two series that just diverge.
End of explanation
Y2 = pd.Series(np.random.normal(0, 1, 1000), name='Y2') + 20
Y3 = Y2.copy()
# Y2 = Y2 + 10
Y3[0:100] = 30
Y3[100:200] = 10
Y3[200:300] = 30
Y3[300:400] = 10
Y3[400:500] = 30
Y3[500:600] = 10
Y3[600:700] = 30
Y3[700:800] = 10
Y3[800:900] = 30
Y3[900:1000] = 10
Y2.plot()
Y3.plot()
plt.ylim([0, 40])
# correlation is nearly zero
print 'Correlation: ' + str(Y2.corr(Y3))
score, pvalue, _ = coint(Y2,Y3)
print 'Cointegration test p-value: ' + str(pvalue)
Explanation: Cointegration Without Correlation
A simple example of this case is a normally distributed series and a square wave.
End of explanation
def find_cointegrated_pairs(securities_panel):
n = len(securities_panel.minor_axis)
score_matrix = np.zeros((n, n))
pvalue_matrix = np.ones((n, n))
keys = securities_panel.keys
pairs = []
for i in range(n):
for j in range(i+1, n):
S1 = securities_panel.minor_xs(securities_panel.minor_axis[i])
S2 = securities_panel.minor_xs(securities_panel.minor_axis[j])
result = coint(S1, S2)
score = result[0]
pvalue = result[1]
score_matrix[i, j] = score
pvalue_matrix[i, j] = pvalue
if pvalue < 0.05:
pairs.append((securities_panel.minor_axis[i], securities_panel.minor_axis[j]))
return score_matrix, pvalue_matrix, pairs
Explanation: Sure enough, the correlation is incredibly low, but the p-value shows perfect cointegration.
Def: Hedged Position
Because you'd like to protect yourself from bad markets, often times short sales will be used to hedge long investments. Because a short sale makes money if the security sold loses value, and a long purchase will make money if a security gains value, one can long parts of the market and short others. That way if the entire market falls off a cliff, we'll still make money on the shorted securities and hopefully break even. In the case of two securities we'll call it a hedged position when we are long on one security and short on the other.
The Trick: Where it all comes together
Because the securities drift towards and apart from each other, there will be times when the distance is high and times when the distance is low. The trick of pairs trading comes from maintaining a hedged position across X and Y. If both securities go down, we neither make nor lose money, and likewise if both go up. We make money on the difference of the two reverting to the mean. In order to do this we'll watch for when X and Y are far apart, then short Y and long X. Similarly we'll watch for when they're close together, and long Y and short X.
Finding real securities that behave like this
The best way to do this is to start with securities you suspect may be cointegrated and perform a statistical test. If you just run statistical tests over all pairs, you'll fall prey to multiple comparison bias.
Here's a method I wrote to look through a list of securities and test for cointegration between all pairs. It returns a cointegration test score matrix, a p-value matrix, and any pairs for which the p-value was less than 0.05.
End of explanation
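One simple guard against the multiple comparison bias mentioned above is to shrink the significance threshold by the number of pairs tested (a Bonferroni-style correction); a sketch of the idea:
# With 6 securities there are 15 distinct pairs, so the per-pair threshold tightens
num_securities = 6
num_pairs = num_securities * (num_securities - 1) / 2
adjusted_threshold = 0.05 / num_pairs  # compare each pair's p-value against this instead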
symbol_list = ['ABGB', 'ASTI', 'CSUN', 'DQ', 'FSLR','SPY']
securities_panel = get_pricing(symbol_list, fields=['price']
, start_date='2014-01-01', end_date='2015-01-01')
securities_panel.minor_axis = map(lambda x: x.symbol, securities_panel.minor_axis)
Explanation: Looking for Cointegrated Pairs of Alternative Energy Securities
I'm looking through a set of solar company stocks to see if any of them are cointegrated. We'll start by defining the list of securities we want to look through. Then we'll get the pricing data for each security for the year of 2014.
get_pricing() is a Quantopian method that pulls in stock data, and loads it into a Python Pandas DataPanel object. Available fields are 'price', 'open_price', 'high', 'low', 'volume'. But for this example we will just use 'price' which is the daily closing price of the stock.
End of explanation
securities_panel.loc['price'].head(5)
Explanation: Example of how to get all the prices of all the stocks loaded using get_pricing() above in one pandas dataframe object
End of explanation
securities_panel.minor_xs('SPY').head(5)
Explanation: Example of how to get just the prices of a single stock that was loaded using get_pricing() above
End of explanation
# Heatmap to show the p-values of the cointegration test between each pair of
# stocks. Only show the value in the upper-diagonal of the heatmap
# (Just showing a '1' for everything in lower diagonal)
scores, pvalues, pairs = find_cointegrated_pairs(securities_panel)
import seaborn
seaborn.heatmap(pvalues, xticklabels=symbol_list, yticklabels=symbol_list, cmap='RdYlGn_r'
, mask = (pvalues >= 0.95)
)
print pairs
Explanation: Now we'll run our method on the list and see if any pairs are cointegrated.
End of explanation
S1 = securities_panel.loc['price']['ABGB']
S2 = securities_panel.loc['price']['FSLR']
score, pvalue, _ = coint(S1, S2)
pvalue
Explanation: Looks like 'ABGB' and 'FSLR' are cointegrated. Let's take a look at the prices to make sure there's nothing weird going on.
End of explanation
diff_series = S1 - S2
diff_series.plot()
plt.axhline(diff_series.mean(), color='black')
Explanation: We'll plot the spread of the two series.
End of explanation
def zscore(series):
return (series - series.mean()) / np.std(series)
zscore(diff_series).plot()
plt.axhline(zscore(diff_series).mean(), color='black')
plt.axhline(1.0, color='red', linestyle='--')
plt.axhline(-1.0, color='green', linestyle='--')
Explanation: The absolute spread isn't very useful in statistical terms. It is more helpful to normalize our signal by treating it as a z-score. This way we associate probabilities to the signals we see. If we see a z-score of 1, we know that approximately 84% of all spread values will be smaller.
End of explanation
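The 84% figure comes from the standard normal distribution; a quick check (this assumes the spread is roughly normal):
from scipy.stats import norm
print norm.cdf(1)   # ~0.841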
# Get the difference in prices between the 2 stocks
difference = S1 - S2
difference.name = 'diff'
# Get the 10 day moving average of the difference
diff_mavg10 = pd.rolling_mean(difference, window=10)
diff_mavg10.name = 'diff 10d mavg'
# Get the 60 day moving average
diff_mavg60 = pd.rolling_mean(difference, window=60)
diff_mavg60.name = 'diff 60d mavg'
pd.concat([diff_mavg60, diff_mavg10], axis=1).plot()
# pd.concat([diff_mavg60, diff_mavg10, difference], axis=1).plot()
Explanation: Simple Strategy:
Go "Long" the spread whenever the z-score is below -1.0
Go "Short" the spread when the z-score is above 1.0
Exit positions when the z-score approaches zero
Since we originally defined the "spread" as S1-S2, "Long" the spread would mean "Buy 1 share of S1, and Sell Short 1 share of S2" (and vice versa if you were going "Short" the spread)
This is just the tip of the iceberg, and only a very simplistic example to illustrate the concepts. In practice you would want to compute a more optimal weighting for how many shares to hold for S1 and S2. Some additional resources on pair trading are listed at the end of this notebook
Trading using constantly updating statistics
Def: Moving Average
A moving average is just an average over the last $n$ datapoints for each given time. It will be undefined for the first $n$ datapoints in our series.
End of explanation
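The entry and exit rules above are easy to express as a function of the z-score alone; a minimal sketch (not a full backtest -- position sizing and the hedge ratio are ignored, and the exit band of 0.5 is an assumption):
def simple_signals(zscores, entry=1.0, exit_band=0.5):
    # +1 = long the spread, -1 = short the spread, 0 = no position
    signals = pd.Series(0, index=zscores.index)
    signals[zscores < -entry] = 1
    signals[zscores > entry] = -1
    signals[zscores.abs() < exit_band] = 0
    return signals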
# Take a rolling 60 day standard deviation
std_60 = pd.rolling_std(difference, window=60)
std_60.name = 'std 60d'
# Compute the z score for each day
zscore_60_10 = (diff_mavg10 - diff_mavg60)/std_60
zscore_60_10.name = 'z-score'
zscore_60_10.plot()
plt.axhline(0, color='black')
plt.axhline(1.0, color='red', linestyle='--')
plt.axhline(-1.0, color='green', linestyle='--')
Explanation: We can use the moving averages to compute the z-score of the difference at each given time. This will tell us how extreme the difference is and whether it's a good idea to enter a position at this time. Let's take a look at the z-score now.
End of explanation
two_stocks = securities_panel.loc['price'][['ABGB', 'FSLR']]
# Plot the prices scaled down along with the negative z-score
# just divide the stock prices by 10 to make viewing it on the plot easier
pd.concat([two_stocks/10, zscore_60_10], axis=1).plot()
Explanation: The z-score doesn't mean much out of context, let's plot it next to the prices to get an idea of what it looks like. We'll take the negative of the z-score because the differences were all negative and that's kinda confusing.
End of explanation |
5,597 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classifying Structured Data using Keras Preprocessing Layers
Learning Objectives
Load a CSV file using Pandas.
Build an input pipeline to batch and shuffle the rows using tf.data.
Map from columns in the CSV to features used to train the model using Keras Preprocessing layers.
Build, train, and evaluate a model using Keras.
Introduction
In this notebook, you learn how to classify structured data (e.g. tabular data in a CSV). You will use Keras to define the model, and preprocessing layers as a bridge to map from columns in a CSV to features used to train the model.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Note
Step1: Restart the kernel before proceeding further (On the Notebook menu, select Kernel > Restart Kernel > Restart).
Step2: Use Pandas to create a dataframe
Pandas is a Python library with many helpful utilities for loading and working with structured data. You will use Pandas to download the dataset from a URL, and load it into a dataframe.
Step3: Create target variable
The task in the Kaggle competition is to predict the speed at which a pet will be adopted (e.g., in the first week, the first month, the first three months, and so on). Let's simplify this for our tutorial. Here, you will transform this into a binary classification problem, and simply predict whether the pet was adopted, or not.
After modifying the label column, 0 will indicate the pet was not adopted, and 1 will indicate it was.
Step4: Split the dataframe into train, validation, and test
The dataset you downloaded was a single CSV file. You will split this into train, validation, and test sets.
Step5: Create an input pipeline using tf.data
Next, you will wrap the dataframes with tf.data, in order to shuffle and batch the data. If you were working with a very large CSV file (so large that it does not fit into memory), you would use tf.data to read it from disk directly. That is not covered in this tutorial.
Step6: Now that you have created the input pipeline, let's call it to see the format of the data it returns. You have used a small batch size to keep the output readable.
Step7: You can see that the dataset returns a dictionary of column names (from the dataframe) that map to column values from rows in the dataframe.
Demonstrate the use of preprocessing layers.
The Keras preprocessing layers API allows you to build Keras-native input processing pipelines. You will use 3 preprocessing layers to demonstrate the feature preprocessing code.
Normalization - Feature-wise normalization of the data.
CategoryEncoding - Category encoding layer.
StringLookup - Maps strings from a vocabulary to integer indices.
IntegerLookup - Maps integers from a vocabulary to integer indices.
You can find a list of available preprocessing layers here.
Numeric columns
For each of the Numeric features, you will use a Normalization() layer to make sure the mean of each feature is 0 and its standard deviation is 1.
get_normalization_layer function returns a layer which applies featurewise normalization to numerical features.
Step8: Note
Step9: Often, you don't want to feed a number directly into the model, but instead use a one-hot encoding of those inputs. Consider raw data that represents a pet's age.
Step10: Choose which columns to use
You have seen how to use several types of preprocessing layers. Now you will use them to train a model. You will be using Keras-functional API to build the model. The Keras functional API is a way to create models that are more flexible than the tf.keras.Sequential API.
The goal of this tutorial is to show you the complete code (e.g. mechanics) needed to work with preprocessing layers. A few columns have been selected arbitrarily to train our model.
Key point
Step11: Create, compile, and train the model
Now you can create our end-to-end model.
Step12: Let's visualize our connectivity graph
Step13: Train the model
Step14: Inference on new data
Key point
Step15: To get a prediction for a new sample, you can simply call model.predict(). There are just two things you need to do | Python Code:
!pip install -q scikit-learn
Explanation: Classifying Structured Data using Keras Preprocessing Layers
Learning Objectives
Load a CSV file using Pandas.
Build an input pipeline to batch and shuffle the rows using tf.data.
Map from columns in the CSV to features used to train the model using Keras Preprocessing layers.
Build, train, and evaluate a model using Keras.
Introduction
In this notebook, you learn how to classify structured data (e.g. tabular data in a CSV). You will use Keras to define the model, and preprocessing layers as a bridge to map from columns in a CSV to features used to train the model.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Note: This tutorial is similar to Classify structured data with feature columns. This version uses new experimental Keras Preprocessing Layers instead of tf.feature_column. Keras Preprocessing Layers are more intuitive, and can be easily included inside your model to simplify deployment.
The Dataset
You will use a simplified version of the PetFinder dataset. There are several thousand rows in the CSV. Each row describes a pet, and each column describes an attribute. You will use this information to predict if the pet will be adopted.
Following is a description of this dataset. Notice there are both numeric and categorical columns. There is a free text column which you will not use in this tutorial.
Column | Description| Feature Type | Data Type
------------|--------------------|----------------------|-----------------
Type | Type of animal (Dog, Cat) | Categorical | string
Age | Age of the pet | Numerical | integer
Breed1 | Primary breed of the pet | Categorical | string
Color1 | Color 1 of pet | Categorical | string
Color2 | Color 2 of pet | Categorical | string
MaturitySize | Size at maturity | Categorical | string
FurLength | Fur length | Categorical | string
Vaccinated | Pet has been vaccinated | Categorical | string
Sterilized | Pet has been sterilized | Categorical | string
Health | Health Condition | Categorical | string
Fee | Adoption Fee | Numerical | integer
Description | Profile write-up for this pet | Text | string
PhotoAmt | Total uploaded photos for this pet | Numerical | integer
AdoptionSpeed | Speed of adoption | Classification | integer
Import TensorFlow and other libraries
End of explanation
# import necessary libraries
import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split
from tensorflow.keras import layers
from tensorflow.keras.layers.experimental import preprocessing
# print the tensorflow version
tf.__version__
Explanation: Restart the kernel before proceeding further (On the Notebook menu, select Kernel > Restart Kernel > Restart).
End of explanation
import pathlib
dataset_url = 'http://storage.googleapis.com/download.tensorflow.org/data/petfinder-mini.zip'
csv_file = 'gs://cloud-training/mlongcp/v3.0_MLonGC/toy_data/petfinder-mini_toy.csv'
tf.keras.utils.get_file('petfinder_mini.zip', dataset_url,
extract=True, cache_dir='.')
# TODO
# read a comma-separated values (csv) file into DataFrame
dataframe = pd.read_csv(csv_file)
# get the first n rows
dataframe.head()
Explanation: Use Pandas to create a dataframe
Pandas is a Python library with many helpful utilities for loading and working with structured data. You will use Pandas to download the dataset from a URL, and load it into a dataframe.
End of explanation
# In the original dataset "4" indicates the pet was not adopted.
dataframe['target'] = np.where(dataframe['AdoptionSpeed']==4, 0, 1)
# Drop un-used columns.
dataframe = dataframe.drop(columns=['AdoptionSpeed', 'Description'])
Explanation: Create target variable
The task in the Kaggle competition is to predict the speed at which a pet will be adopted (e.g., in the first week, the first month, the first three months, and so on). Let's simplify this for our tutorial. Here, you will transform this into a binary classification problem, and simply predict whether the pet was adopted, or not.
After modifying the label column, 0 will indicate the pet was not adopted, and 1 will indicate it was.
End of explanation
train, test = train_test_split(dataframe, test_size=0.2)
train, val = train_test_split(train, test_size=0.2)
print(len(train), 'train examples')
print(len(val), 'validation examples')
print(len(test), 'test examples')
Explanation: Split the dataframe into train, validation, and test
The dataset you downloaded was a single CSV file. You will split this into train, validation, and test sets.
End of explanation
# A utility method to create a tf.data dataset from a Pandas Dataframe
def df_to_dataset(dataframe, shuffle=True, batch_size=32):
dataframe = dataframe.copy()
labels = dataframe.pop('target')
ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
if shuffle:
ds = ds.shuffle(buffer_size=len(dataframe))
ds = ds.batch(batch_size)
ds = ds.prefetch(batch_size)
return ds
Explanation: Create an input pipeline using tf.data
Next, you will wrap the dataframes with tf.data, in order to shuffle and batch the data. If you were working with a very large CSV file (so large that it does not fit into memory), you would use tf.data to read it from disk directly. That is not covered in this tutorial.
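For reference, a hedged sketch of the read-from-disk approach mentioned above (not used in this lab) could be based on tf.data.experimental.make_csv_dataset; it is left commented out here, and the batch size shown is just an illustrative assumption:
# Sketch only: stream a large CSV straight from disk (or GCS) instead of loading it into Pandas.
# disk_ds = tf.data.experimental.make_csv_dataset(
#     csv_file,                   # path to the CSV file
#     batch_size=32,
#     label_name='AdoptionSpeed', # label column in the raw CSV
#     num_epochs=1,
#     shuffle=True)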
End of explanation
batch_size = 5
# TODO
# call the necessary function with required parameters
train_ds = df_to_dataset(train, batch_size=batch_size)
[(train_features, label_batch)] = train_ds.take(1)
print('Every feature:', list(train_features.keys()))
print('A batch of ages:', train_features['Age'])
print('A batch of targets:', label_batch )
Explanation: Now that you have created the input pipeline, let's call it to see the format of the data it returns. You have used a small batch size to keep the output readable.
End of explanation
def get_normalization_layer(name, dataset):
# Create a Normalization layer for our feature.
normalizer = preprocessing.Normalization(axis=None)
# TODO
# Prepare a Dataset that only yields our feature.
feature_ds = dataset.map(lambda x, y: x[name])
# Learn the statistics of the data.
normalizer.adapt(feature_ds)
return normalizer
photo_count_col = train_features['PhotoAmt']
layer = get_normalization_layer('PhotoAmt', train_ds)
layer(photo_count_col)
Explanation: You can see that the dataset returns a dictionary of column names (from the dataframe) that map to column values from rows in the dataframe.
Demonstrate the use of preprocessing layers.
The Keras preprocessing layers API allows you to build Keras-native input processing pipelines. You will use 3 preprocessing layers to demonstrate the feature preprocessing code.
Normalization - Feature-wise normalization of the data.
CategoryEncoding - Category encoding layer.
StringLookup - Maps strings from a vocabulary to integer indices.
IntegerLookup - Maps integers from a vocabulary to integer indices.
You can find a list of available preprocessing layers here.
Numeric columns
For each of the Numeric features, you will use a Normalization() layer to make sure the mean of each feature is 0 and its standard deviation is 1.
get_normalization_layer function returns a layer which applies featurewise normalization to numerical features.
End of explanation
def get_category_encoding_layer(name, dataset, dtype, max_tokens=None):
# Create a StringLookup layer which will turn strings into integer indices
if dtype == 'string':
index = preprocessing.StringLookup(max_tokens=max_tokens)
else:
index = preprocessing.IntegerLookup(max_tokens=max_tokens)
# TODO
# Prepare a Dataset that only yields our feature
feature_ds = dataset.map(lambda x, y: x[name])
# Learn the set of possible values and assign them a fixed integer index.
index.adapt(feature_ds)
# Create a CategoryEncoding layer for our integer indices.
encoder = preprocessing.CategoryEncoding(num_tokens=index.vocabulary_size())
# Apply one-hot encoding to our indices. The lambda function captures the
# layers so we can use them, or include them in the functional model later.
return lambda feature: encoder(index(feature))
type_col = train_features['Type']
layer = get_category_encoding_layer('Type', train_ds, 'string')
layer(type_col)
Explanation: Note: If you have many numeric features (hundreds, or more), it is more efficient to concatenate them first and use a single normalization layer.
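A minimal sketch of that concatenation idea (illustrative only -- this lab keeps one Normalization layer per feature, and the variables below are not used in the rest of the notebook):
# Sketch: concatenate the numeric inputs first, then normalize them with a single layer.
numeric_headers = ['PhotoAmt', 'Fee']
numeric_inputs = [tf.keras.Input(shape=(1,)) for _ in numeric_headers]
stacked_numeric = tf.keras.layers.concatenate(numeric_inputs)
joint_normalizer = preprocessing.Normalization(axis=-1)
joint_normalizer.adapt(np.stack([train[h] for h in numeric_headers], axis=-1).astype('float32'))
normalized_numeric = joint_normalizer(stacked_numeric)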
Categorical columns
In this dataset, Type is represented as a string (e.g. 'Dog', or 'Cat'). You cannot feed strings directly to a model. The preprocessing layer takes care of representing strings as a one-hot vector.
get_category_encoding_layer function returns a layer which maps values from a vocabulary to integer indices and one-hot encodes the features.
End of explanation
type_col = train_features['Age']
category_encoding_layer = get_category_encoding_layer('Age', train_ds,
'int64', 5)
category_encoding_layer(type_col)
Explanation: Often, you don't want to feed a number directly into the model, but instead use a one-hot encoding of those inputs. Consider raw data that represents a pet's age.
End of explanation
batch_size = 256
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)
all_inputs = []
encoded_features = []
# Numeric features.
for header in ['PhotoAmt', 'Fee']:
numeric_col = tf.keras.Input(shape=(1,), name=header)
normalization_layer = get_normalization_layer(header, train_ds)
encoded_numeric_col = normalization_layer(numeric_col)
all_inputs.append(numeric_col)
encoded_features.append(encoded_numeric_col)
# Categorical features encoded as integers.
age_col = tf.keras.Input(shape=(1,), name='Age', dtype='int64')
encoding_layer = get_category_encoding_layer('Age', train_ds, dtype='int64',
max_tokens=5)
encoded_age_col = encoding_layer(age_col)
all_inputs.append(age_col)
encoded_features.append(encoded_age_col)
# Categorical features encoded as string.
categorical_cols = ['Type', 'Color1', 'Color2', 'Gender', 'MaturitySize',
'FurLength', 'Vaccinated', 'Sterilized', 'Health', 'Breed1']
for header in categorical_cols:
categorical_col = tf.keras.Input(shape=(1,), name=header, dtype='string')
encoding_layer = get_category_encoding_layer(header, train_ds, dtype='string',
max_tokens=5)
encoded_categorical_col = encoding_layer(categorical_col)
all_inputs.append(categorical_col)
encoded_features.append(encoded_categorical_col)
Explanation: Choose which columns to use
You have seen how to use several types of preprocessing layers. Now you will use them to train a model. You will be using Keras-functional API to build the model. The Keras functional API is a way to create models that are more flexible than the tf.keras.Sequential API.
The goal of this tutorial is to show you the complete code (e.g. mechanics) needed to work with preprocessing layers. A few columns have been selected arbitrarily to train our model.
Key point: If your aim is to build an accurate model, try a larger dataset of your own, and think carefully about which features are the most meaningful to include, and how they should be represented.
Earlier, you used a small batch size to demonstrate the input pipeline. Let's now create a new input pipeline with a larger batch size.
End of explanation
all_features = tf.keras.layers.concatenate(encoded_features)
x = tf.keras.layers.Dense(32, activation="relu")(all_features)
x = tf.keras.layers.Dropout(0.5)(x)
output = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(all_inputs, output)
# TODO
# compile the model
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=["accuracy"])
Explanation: Create, compile, and train the model
Now you can create our end-to-end model.
End of explanation
# rankdir='LR' is used to make the graph horizontal.
tf.keras.utils.plot_model(model, show_shapes=True, rankdir="LR")
Explanation: Let's visualize our connectivity graph:
End of explanation
# TODO
# train the model
model.fit(train_ds, epochs=10, validation_data=val_ds)
loss, accuracy = model.evaluate(test_ds)
print("Accuracy", accuracy)
Explanation: Train the model
End of explanation
model.save('my_pet_classifier')
reloaded_model = tf.keras.models.load_model('my_pet_classifier')
Explanation: Inference on new data
Key point: The model you have developed can now classify a row from a CSV file directly, because the preprocessing code is included inside the model itself.
You can now save and reload the Keras model. Follow the tutorial here for more information on TensorFlow models.
End of explanation
sample = {
'Type': 'Cat',
'Age': 3,
'Breed1': 'Tabby',
'Gender': 'Male',
'Color1': 'Black',
'Color2': 'White',
'MaturitySize': 'Small',
'FurLength': 'Short',
'Vaccinated': 'No',
'Sterilized': 'No',
'Health': 'Healthy',
'Fee': 100,
'PhotoAmt': 2,
}
input_dict = {name: tf.convert_to_tensor([value]) for name, value in sample.items()}
predictions = reloaded_model.predict(input_dict)
prob = tf.nn.sigmoid(predictions[0])
print(
"This particular pet had a %.1f percent probability "
"of getting adopted." % (100 * prob)
)
Explanation: To get a prediction for a new sample, you can simply call model.predict(). There are just two things you need to do:
Wrap scalars into a list so as to have a batch dimension (models only process batches of data, not single samples)
Call convert_to_tensor on each feature
End of explanation |
5,598 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Outline
Glossary
1. Radio Science using Interferometric Arrays
Previous
Step1: Import section specific modules
Step2: 1.6.1 Synchrotron Emission
Step3: The frequency of gyration in the non-relativistic case is simply
$$\omega = \frac{qB}{mc} \qquad .$$
For synchrotron radiation, this gets modified to
$$\omega_{G}= \frac{qB}{\gamma mc} \qquad ,$$
since it is the relativistic mass, i.e. $\gamma m$, which we should consider in this case.
In the non-relativistic case (i.e. cyclotron radiation), the frequency of gyration is also the frequency of radiation. If this were also the case for synchrotron radiation, then for magnetic fields of galactic strength (a few microGauss or so) the resultant frequency would be less than a hertz! Luckily for us, relativistic beaming and Doppler effects come into play; the actual frequency of the observed radiation gets bumped up by a factor of ~ $\gamma^{3}$, which brings it into the radio regime. This frequency, also known as the 'critical frequency', is where most of the emission takes place. It is given by
$$\nu_{c} \propto \gamma^{3}\nu_{G} \propto E^{2}$$.
So far we have discussed a single particle emitting synchrotron radiation. However, what we really want to know is the behaviour of an ensemble of radiating particles. Since the synchrotron emission depends only on the magnetic field and the energy of the particle, given a (more or less) uniform magnetic field, what we need is the distribution of the particles in energy. Denoting this distribution by N(E) (the number of particles per unit volume per unit energy), the resultant spectrum from the ensemble of the particles is
Step4: The non-thermal nature of the emission can be seen easily by measuring the spectrum of the radio emission from the lobes. The plots below show the spectrum of the lobes of Cygnus A, from a frequency of 150 MHz to 14.65 GHz. It can be seen that these are explained better by a synchrotron spectrum than by a spectrum of thermal origin. Another observation that can be made from the plot is that the spectral index of the lobe emission seems steeper than that of the hotspot emission. This is consistent with the freshest plasma residing in the hotspots and the older plasma residing in the lobes. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
Explanation: Outline
Glossary
1. Radio Science using Interferometric Arrays
Previous: 1.5 Black body radiation
Next: 1.7 Line emission
Import standard modules:
End of explanation
from IPython.display import Image
HTML('../style/code_toggle.html')
Explanation: Import section specific modules:
End of explanation
Image(filename='figures/drawing.png', width=300)
Explanation: 1.6.1 Synchrotron Emission:
Synchrotron emission is one of the most commonly encountered forms of radiation found from astronomical radio sources. This type of radiation originates from relativistic particles that get accelerated in a magnetic field.
An understanding of the generation of synchrotron emission depends fundamentally on special relativistic effects. Here we will not go into the details, but try to give a (somewhat handwaving) idea of the underlying physics. An accelerating charge emits radiation (as we have seen in section 1.2.1) - here the acceleration is provided by the ambient magnetic fields. The non-relativistic Larmor formula for the radiated power is:
$$P= \frac{2}{3}\frac{q^{2}a^{2}}{c^{3}}$$.
Since the acceleration is a result of the magnetic fields, we get:
$$P=\frac{2}{3}\frac{q^{2}}{c^{3}}\frac{v_{\perp}^{2}B^{2}q^{2}}{m^{2}c^{2}} \qquad$$
where $v_{\perp}$ is the component of the velocity of the particle perpendicular to the magnetic field, $m$ is the mass of the charged particle, $q$ is its charge and $a$ is the acceleration it is undergoing. This is essentially cyclotron radiation. The relativistic effects modify this into:
$$P = \gamma^{2} \frac{2}{3}\frac{q^{2}}{c^{3}}\frac{v_{\perp}^{2}B^{2}q^{2}}{m^{2}c^{2}} = \gamma^{2} \frac{2}{3}\frac{q^{4}}{c^{3}}\frac{v_{\perp}^{2}B^{2}}{m^{2}c^{2}} \qquad ,$$
where $$\gamma = \frac{1}{\sqrt{1-v^{2}/c^{2}}} = \frac{E}{mc^{2}} \qquad ,$$
is a measure of the energy of the particle. For particles to be considered relativistic or ultrarelativistic, $\gamma \gg 1$, typically with a value of $100-1000$. Since $v_{\perp}=v\sin\alpha$, with $\alpha$ being the angle between the magnetic field and the velocity of the particle, the radiated power can be written as:
$$P=\gamma^{2} \frac{2}{3}\frac{q^{4}}{c^{3}}\frac{v^{2}B^{2}\sin^{2}\alpha}{m^{2}c^{2}} \qquad .$$
From this equation it can be seen that the total power radiated by the particle depends on the strength of the magnetic field, and that the higher the energy of the particle, the more power it radiates away.
In analogy with the non-relativistic case, there is a frequency of gyration. This refers to the path the charged particle follows while being accelerated in a magnetic field. The figure below illustrates the idea.
End of explanation
Image(filename='figures/cygnusA.png')
Explanation: The frequency of gyration in the non-relativistic case is simply
$$\omega = \frac{qB}{mc} \qquad .$$
For synchrotron radiation, this gets modified to
$$\omega_{G}= \frac{qB}{\gamma mc} \qquad ,$$
since it is the relativistic mass, i.e. $\gamma m$, which we should consider in this case.
In the non-relativistic case (i.e. cyclotron radiation), the frequency of gyration is also the frequency of radiation. If this were also the case for synchrotron radiation, then for magnetic fields of galactic strength (a few microGauss or so) the resultant frequency would be less than a hertz! Luckily for us, relativistic beaming and Doppler effects come into play; the actual frequency of the observed radiation gets bumped up by a factor of ~ $\gamma^{3}$, which brings it into the radio regime. This frequency, also known as the 'critical frequency', is where most of the emission takes place. It is given by
$$\nu_{c} \propto \gamma^{3}\nu_{G} \propto E^{2}$$.
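To get a feel for the numbers, here is a small illustrative calculation (the field strength of 1 $\mu$G and the Lorentz factor of $10^{4}$ are assumptions chosen purely for illustration):
import scipy.constants as const
B = 1e-10        # ~1 microGauss, expressed in Tesla
gamma = 1e4      # assumed Lorentz factor of the radiating electron
nu_nonrel = const.e * B / (2 * np.pi * const.m_e)  # non-relativistic gyrofrequency, a few Hz
nu_G = nu_nonrel / gamma                           # relativistic gyration frequency: well below 1 Hz
nu_c = gamma**3 * nu_G                             # critical frequency ~ a few hundred MHz, i.e. radio
print(nu_G, nu_c)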
So far we have discussed a single particle emitting synchrotron radiation. However, what we really want to know is the behaviour of an ensemble of radiating particles. Since the synchrotron emission depends only on the magnetic field and the energy of the particle, given a (more or less) uniform magnetic field, what we need is the distribution of the particles in energy. Denoting this distribution by N(E) (the number of particles per unit volume per unit energy), the resultant spectrum from the ensemble of the particles is: $$ \epsilon(E) dE = N(E) P(E) dE $$.
The usual assumption made about the distribution N(E) (based also on the observed cosmic ray distribution) is that of a power law, i.e.
$$N(E)dE=E^{-\alpha}dE \qquad .$$
Plugging this in and remembering that $P(E) \propto \gamma^{2} \propto E^{2}$, we get
$$ \epsilon(E) dE \propto E^{2-\alpha} dE \qquad .$$
Shifting to the frequency domain, $$\epsilon(\nu) \propto \nu^{(1-\alpha)/2} \qquad .$$
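For completeness, the step behind that exponent (using $\nu \propto E^{2}$ from above, so $E \propto \nu^{1/2}$ and $dE \propto \nu^{-1/2}\,d\nu$) is simply
$$\epsilon(\nu)\,d\nu \propto E^{2-\alpha}\,dE \propto \nu^{(2-\alpha)/2}\,\nu^{-1/2}\,d\nu = \nu^{(1-\alpha)/2}\,d\nu \qquad .$$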
The usual value for $\alpha$ is 5/2 and since flux $S_{\nu} \propto \epsilon_{\nu}$, $$S_{\nu} \propto \nu^{-0.75} \qquad .$$
This shows that the synchrotron flux is also a power law, if the underlying distribution of particles is a power law.
This is approximately valid for a 'fresh' collection of radiating particles. However, as mentioned above, the higher energy particles lose energy through radiation much faster than the lower energy particles. This means that over time the spectrum gets steeper at higher frequencies (which is where the contribution from the high energy particles comes in). This steepening of the spectral index is a typical feature of older plasma in astrophysical scenarios.
1.2.5.1 Sources of Synchrotron Emission:
So where do we actually see synchrotron emission? As mentioned above, the prerequisites are magnetic fields and relativistic particles. These conditions are satisfied in a variety of situations. A prime example is that of the lobes of radio galaxies. The lobes contain relativistic plasma in magnetic fields of strength ~ $\mu$G. The origin of the plasma and the magnetic field is thought to lie ultimately in the activity taking place in the central part of the radio galaxy, which hosts a supermassive black hole. The figure below shows a radio image of the nearest powerful radio galaxy to us, Cygnus A. The jets, which carry relativistic plasma/particles, originate from the centre of the host galaxy (emission from there is marked as 'core') and collide with the surrounding medium at the places which show bright spots of emission, labelled as "hotspots". The plasma forming the radio lobes streams back from the hotspots, which means that the 'freshest' plasma resides in the hotspots and the surrounding regions of the lobes; by contrast, the plasma in the lobes closest to the centre is the oldest.
End of explanation
# Data taken from Steenbrugge et al.,2010, MNRAS
freq=(151.0,327.5,1345.0,4525.0,8514.9,14650.0)
flux_L=(4746,2752.7,749.8,189.4,83.4,40.5)
flux_H=(115.7,176.4,69.3,45.2,20.8,13.4)
fig,ax = plt.subplots()
ax.loglog(freq,flux_L,'bo--',label='Lobe Flux')
ax.loglog(freq,flux_H,'g*-',label='Hotspot Flux')
ax.legend()
ax.set_xlabel("Frequency (MHz)")
ax.set_ylabel("Flux (Jy)")
Explanation: The non-thermal nature of the emission can be seen easily by measuring the spectrum of the radio emission from the lobes. The plots below show the spectrum of the lobes of Cygnus A, from a frequency of 150 MHz to 14.65 GHz. It can be seen that these are explained better by a synchrotron spectrum than by a spectrum of thermal origin. Another observation that can be made from the plot is that the spectral index of the lobe emission seems steeper than that of the hotspot emission. This is consistent with the freshest plasma residing in the hotspots and the older plasma residing in the lobes.
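As a rough, purely illustrative check (a sketch using the same data points as above; the fitted values depend on the frequency range chosen), the spectral indices can be estimated with straight-line fits in log-log space:
# Fit S_nu ~ nu^alpha in log-log space for the lobe and hotspot fluxes
alpha_lobe, _ = np.polyfit(np.log10(freq), np.log10(flux_L), 1)
alpha_hot, _ = np.polyfit(np.log10(freq), np.log10(flux_H), 1)
print('Lobe spectral index: %.2f' % alpha_lobe)      # steeper (more negative)
print('Hotspot spectral index: %.2f' % alpha_hot)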
End of explanation |
5,599 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
while loops
The while statement in Python is one of the most general ways to perform iteration. A while statement will repeatedly execute a single statement or group of statements as long as the condition is true. The reason it is called a 'loop' is because the code statements are looped through over and over again until the condition is no longer met.
The general format of a while loop is
Step1: Notice how many times the print statements occurred and how the while loop kept going until the condition x < 10 was no longer true, which occurred once x==10. It's important to note that once this occurred the code stopped. Let's see how we could add an else statement
Step2: break, continue, pass
We can use break, continue, and pass statements in our loops to add additional functionality for various cases. The three statements are defined by
Step3: Note how we have a printed statement when x==3, and 'continuing...' being printed out as we continue through the while loop. Let's put in a break once x == 3 and see if the result makes sense
Step4: Note how the other else statement wasn't reached and continuing was never printed!
After these brief but simple examples, you should feel comfortable using while statements in your code.
A word of caution, however! It is possible to create an infinitely running loop with while statements. For example | Python Code:
x = 0
while x < 10:
    print('x is currently: ', x)
    print(' x is still less than 10, adding 1 to x')
    x += 1
Explanation: while loops
The while statement in Python is one of the most general ways to perform iteration. A while statement will repeatedly execute a single statement or group of statements as long as the condition is true. The reason it is called a 'loop' is because the code statements are looped through over and over again until the condition is no longer met.
The general format of a while loop is:
while test:
code statement
else:
final code statements
Let’s look at a few simple while loops in action.
End of explanation
x = 0
while x < 10:
    print('x is currently: ', x)
    print(' x is still less than 10, adding 1 to x')
    x += 1
else:
    print('All Done!')
Explanation: Notice how many times the print statements occurred and how the while loop kept going until the condition x < 10 was no longer true, which occurred once x==10. It's important to note that once this occurred the code stopped. Let's see how we could add an else statement:
End of explanation
x = 0
while x < 10:
    print('x is currently: ', x)
    print(' x is still less than 10, adding 1 to x')
    x += 1
    if x == 3:
        print('x==3')
    else:
        print('continuing...')
        continue
Explanation: break, continue, pass
We can use break, continue, and pass statements in our loops to add additional functionality for various cases. The three statements are defined by:
break: Breaks out of the current closest enclosing loop.
continue: Goes to the top of the closest enclosing loop.
pass: Does nothing at all.
Thinking about break and continue statements, the general format of the while loop looks like this:
while test:
code statement
if test:
break
if test:
continue
else:
break and continue statements can appear anywhere inside the loop’s body, but we will usually put them further nested in conjunction with an if statement to perform an action based on some condition.
Let's go ahead and look at some examples! (A quick sketch of pass follows first, since it is not used in the examples below.)
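A minimal, illustrative sketch of pass (it simply does nothing, and is typically used as a placeholder):
x = 0
while x < 3:
    if x == 1:
        pass  # placeholder: does nothing at all
    print('x is currently:', x)
    x += 1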
End of explanation
x = 0
while x < 10:
    print('x is currently: ', x)
    print(' x is still less than 10, adding 1 to x')
    x += 1
    if x == 3:
        print('Breaking because x==3')
        break
    else:
        print('continuing...')
        continue
Explanation: Note how we have a printed statement when x==3, and 'continuing...' being printed out as we continue through the while loop. Let's put in a break once x == 3 and see if the result makes sense:
End of explanation
# DO NOT RUN THIS CODE!!!!
while True:
    print('Uh Oh infinite Loop!')
Explanation: Note how the other else statement wasn't reached and continuing was never printed!
After these brief but simple examples, you should feel comfortable using while statements in your code.
A word of caution, however! It is possible to create an infinitely running loop with while statements. For example:
End of explanation |