Unnamed: 0
int64 0
16k
| text_prompt
stringlengths 110
62.1k
| code_prompt
stringlengths 37
152k
|
---|---|---|
2,300 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Idea Behind Gradient Descent
In calculus, the gradient is the vector of partial derivatives and it identifies the direction of inputs that increase the functions outputs most quickly.
We can use gradient descent to maximize a function by starting at a random point and moving in the direction of the gradient--then repeat.
Estimating The Gradient
The derivative of $f(x)$ measures how $f(x)$ changes when making very small changes to $x$
Step1: It is the slope of the tangent line at $(x,\ f(x))$ that tells us which direction to traverse. As $h$ gets smaller, the bottom tail of the line gets close to the curve.
Step7: Using The Gradient
Here is an example of using gradient descent to find the minimum of a 3-dimensional vector
Step8: Choose The Right Step Size
Although using the right step size is important, there is not an exact way to know which size to use. Some general rules of thumb are
Step10: Some step sizes provide invalid inputs to our function so we will need a safe version
Step12: Putting It All Together
Given a target function that we want to minimize, and given a gradient function, we can determine the parameters that minimize the error
Step15: Another optimization involves maximizing a function by minimizing its negative
Step17: Stochastic Gradient Descent
The approach above is computationally expensive
Step18: In order that we don't get stuck in an infinite cycle, we need to decrease the step size and quit once minimizations to the error stop improving
Step19: Stochastic gradient descent is a lot faster than our first version. Here is a negated version | Python Code:
def difference_quotient(f, x, h):
return (f(x + h) - f(x)) / h
Explanation: The Idea Behind Gradient Descent
In calculus, the gradient is the vector of partial derivatives and it identifies the direction of inputs that increase the functions outputs most quickly.
We can use gradient descent to maximize a function by starting at a random point and moving in the direction of the gradient--then repeat.
Estimating The Gradient
The derivative of $f(x)$ measures how $f(x)$ changes when making very small changes to $x$:
End of explanation
def square(x):
return x * x
def derivative(x):
return 2 * x
derivative(2)
derivative_estimate = partial(difference_quotient, square, h=0.00001)
x = range(-10,10)
plt.title("Actual Derivatives vs. Estimates")
plt.plot(x, [derivative(x_i) for x_i in x], 'rx', label='Actual') # red x
plt.plot(x, [derivative_estimate(x_i) for x_i in x], 'b+', label='Estimate') # blue +
plt.legend(loc=9)
plt.show()
Explanation: It is the slope of the tangent line at $(x,\ f(x))$ that tells us which direction to traverse. As $h$ gets smaller, the bottom tail of the line gets close to the curve.
End of explanation
def dot(v, w):
v_1 * w_1 + ... + v_n * w_n
return sum(v_i * w_i for v_i, w_i in zip(v, w))
def vector_subtract(v, w):
subtracts corresponding elements
return [v_i - w_i for v_i, w_i in zip(v, w)]
def sum_of_squares(v):
v_1 * v_1 + ... + v_n * v_n
return dot(v, v)
def squared_distance(v, w):
(v_1 - w_1) ** 2 + ... + (v_n - w_n) ** 2
return sum_of_squares(vector_subtract(v, w))
def distance(v, w):
return math.sqrt(squared_distance(v, w))
def step(v, direction, step_size):
move step_size in the direction from v
return [v_i + step_size * direction_i for v_i, direction_i in zip(v, direction)]
def sum_of_squares_gradient(v):
return [2 * v_i for v_i in v]
# pick a random starting point
v = [random.randint(-10,10) for i in range(3)]
tolerance = 0.0001
while True:
gradient = sum_of_squares_gradient(v) # compute the gradient at v
next_v = step(v, gradient, -0.01) # take a negative gradient step
if distance(next_v, v) < tolerance: # stop if we're converging
break
v = next_v # continue if we're not
v
Explanation: Using The Gradient
Here is an example of using gradient descent to find the minimum of a 3-dimensional vector:
End of explanation
step_sizes = [100, 10, 1, 0.1, 0.01, 0.001, 0.0001, 0.00001]
Explanation: Choose The Right Step Size
Although using the right step size is important, there is not an exact way to know which size to use. Some general rules of thumb are:
- use a fixed step size.
- gradually shrink the step size over iterations.
- finding the step size that leads to the minimum at each step.
We can illustrate this last point with an example:
End of explanation
def safe(f):
return a new function that's the same as f,
except that it outputs infinity whenever f produces an error
def safe_f(*args, **kwargs):
try:
return f(*args, **kwargs)
except:
return float('inf') # this means "infinity" in Python
return safe_f
Explanation: Some step sizes provide invalid inputs to our function so we will need a safe version:
End of explanation
def minimize_batch(target_fn, gradient_fn, theta_0, tolerance=0.000001):
use gradient descent to find theta that minimizes target function
step_sizes = [100, 10, 1, 0.1, 0.01, 0.001, 0.0001, 0.00001]
theta = theta_0 # set theta to initial value
target_fn = safe(target_fn) # safe version of target_fn
value = target_fn(theta) # value we're minimizing
while True:
gradient = gradient_fn(theta)
next_thetas = [step(theta, gradient, -step_size) for step_size in step_sizes]
# choose the one that minimizes the error function
next_theta = min(next_thetas, key=target_fn)
next_value = target_fn(next_theta)
# stop if we're "converging"
if abs(value - next_value) < tolerance:
return theta
else:
theta, value = next_theta, next_value
Explanation: Putting It All Together
Given a target function that we want to minimize, and given a gradient function, we can determine the parameters that minimize the error:
End of explanation
def negate(f):
return a function that for any input x returns -f(x)
return lambda *args, **kwargs: -f(*args, **kwargs)
def negate_all(f):
the same when f returns a list of numbers
return lambda *args, **kwargs: [-y for y in f(*args, **kwargs)]
def maximize_batch(target_fn, gradient_fn, theta_0, tolerance=0.000001):
return minimize_batch(negate(target_fn),
negate_all(gradient_fn),
theta_0,
tolerance)
Explanation: Another optimization involves maximizing a function by minimizing its negative:
End of explanation
def in_random_order(data):
generator that returns the elements of data in random order
indexes = [i for i, _ in enumerate(data)] # create a list of indexes
random.shuffle(indexes) # shuffle them
for i in indexes: # return the data in that order
yield data[i]
Explanation: Stochastic Gradient Descent
The approach above is computationally expensive: the gradient is computed for the whole dataset. A more efficient approach would be to compute the gradient for one point at a time. This is what stochastic gradient descent does. It will run over our data--each point individually--until the stopping point which we define.
Best results can be achieved by selecting our data at random:
End of explanation
def minimize_stochastic(target_fn, gradient_fn, x, y, theta_0, alpha_0=0.01):
# print('Calculating gradient for', list(zip(x, y)))
theta = theta_0 # initial guess
alpha = alpha_0 # initial step size
min_theta, min_value = None, float('inf') # the minimum so far
iterations_with_no_improvement = 0
# if we ever go 100 iterations with no improvement, stop
while iterations_with_no_improvement < 100:
def to_sum(x_i, y_i):
# print('x_i={}, y_i={}, theta={}'.format(x_i, y_i, theta))
return target_fn(x_i, y_i, theta)
value = sum([to_sum(x_i, y_i) for x_i, y_i in zip(x, y)])
if value < min_value:
# print('value={}, min_value={}'.format(value, min_value))
# if we've found a new minimum, remember it and go back to the original step size
min_theta, min_value = theta, value
iterations_with_no_improvement = 0
alpha = alpha_0
else:
# print('no improvement, i={}, alpha={}'.format(iterations_with_no_improvement, alpha))
# otherwise we're not improving, so try shrinking the step size
iterations_with_no_improvement += 1
alpha *= 0.9
# and take a gradient step for each of the data points
for x_i, y_i in in_random_order(list(zip(x, y))):
gradient_i = gradient_fn(x_i, y_i, theta)
theta = vector_subtract(theta, scalar_multiply(alpha, gradient_i))
# print('Gradient step: gradient_i={}, theta={}'.format(gradient_i, theta))
return min_theta
Explanation: In order that we don't get stuck in an infinite cycle, we need to decrease the step size and quit once minimizations to the error stop improving:
End of explanation
def maximize_stochastic(target_fn, gradient_fn, x, y, theta_0, alpha_0=0.01):
return minimize_stochastic(negate(target_fn), negate_all(gradient_fn), x, y, theta_0, alpha_0)
Explanation: Stochastic gradient descent is a lot faster than our first version. Here is a negated version:
End of explanation |
2,301 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Adding exponential mass loss/growth
You can always modify the mass of particles between calls to sim.integrate. However, if you want to apply the mass/loss growth every timestep within calls to sim.integrate, you should use this.
We begin by setting up a system with 3 planets.
Step1: We now add mass loss through REBOUNDx
Step2: Now we set the e-folding mass loss/growth rate.
Positive timescales give growth, negative timescales loss.
Here we have the star lose mass with an e-folding timescale of $10^4$ yrs.
Step3: Now we integrate for one e-folding timescale, and plot the resulting system
Step4: We see that after the mass of the star has decayed by a factor of e, the scale of the system has expanded by the corresponding factor, as one would expect. If we plot the mass of the star vs time, compared to an exponential decay, the two overlap. | Python Code:
import rebound
import reboundx
import numpy as np
M0 = 1. # initial mass of star
def makesim():
sim = rebound.Simulation()
sim.G = 4*np.pi**2 # use units of AU, yrs and solar masses
sim.add(m=M0)
sim.add(a=1.)
sim.add(a=2.)
sim.add(a=3.)
sim.move_to_com()
return sim
%matplotlib inline
sim = makesim()
ps = sim.particles
fig = rebound.OrbitPlot(sim)
Explanation: Adding exponential mass loss/growth
You can always modify the mass of particles between calls to sim.integrate. However, if you want to apply the mass/loss growth every timestep within calls to sim.integrate, you should use this.
We begin by setting up a system with 3 planets.
End of explanation
rebx = reboundx.Extras(sim)
modifymass = rebx.load_operator("modify_mass")
rebx.add_operator(modifymass)
Explanation: We now add mass loss through REBOUNDx:
End of explanation
ps[0].params["tau_mass"] = -1.e4
Explanation: Now we set the e-folding mass loss/growth rate.
Positive timescales give growth, negative timescales loss.
Here we have the star lose mass with an e-folding timescale of $10^4$ yrs.
End of explanation
Nout = 1000
mass = np.zeros(Nout)
times = np.linspace(0., 1.e4, Nout)
for i, time in enumerate(times):
sim.integrate(time)
mass[i] = sim.particles[0].m
fig = rebound.OrbitPlot(sim)
Explanation: Now we integrate for one e-folding timescale, and plot the resulting system:
End of explanation
pred = M0*np.e**(times/ps[0].params["tau_mass"])
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(15,5))
ax = plt.subplot(111)
ax.plot(times,mass, label='simulation')
ax.plot(times,pred, label='predicted')
ax.set_xlabel("Time (yrs)", fontsize=24)
ax.set_ylabel("Star's Mass (MSun)", fontsize=24)
ax.legend(fontsize=24)
Explanation: We see that after the mass of the star has decayed by a factor of e, the scale of the system has expanded by the corresponding factor, as one would expect. If we plot the mass of the star vs time, compared to an exponential decay, the two overlap.
End of explanation |
2,302 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Facies classification using Machine Learning- Random Forest
Contest entry by Priyanka Raghavan and Steve Hall
This notebook demonstrates how to train a machine learning algorithm to predict facies from well log data. The dataset we will use comes from a class excercise from The University of Kansas on Neural Networks and Fuzzy Systems. This exercise is based on a consortium project to use machine learning techniques to create a reservoir model of the largest gas fields in North America, the Hugoton and Panoma Fields. For more info on the origin of the data, see Bohling and Dubois (2003) and Dubois et al. (2007).
The dataset we will use is log data from nine wells that have been labeled with a facies type based on oberservation of core. We will use this log data to train a Logistical regression classifier to classify facies types. We will use simple logistics regression to classify wells scikit-learn.
First we will explore the dataset. We will load the training data from 9 wells, and take a look at what we have to work with. We will plot the data from a couple wells, and create cross plots to look at the variation within the data.
Next we will condition the data set. We will remove the entries that have incomplete data. The data will be scaled to have zero mean and unit variance. We will also split the data into training and test sets.
We will then be ready to build the classifier.
Finally, once we have a built and tuned the classifier, we can apply the trained model to classify facies in wells which do not already have labels. We will apply the classifier to two wells, but in principle you could apply the classifier to any number of wells that had the same log data.
Exploring the dataset
First, we will examine the data set we will use to train the classifier. The training data is contained in the file facies_vectors.csv. The dataset consists of 5 wireline log measurements, two indicator variables and a facies label at half foot intervals. In machine learning terminology, each log measurement is a feature vector that maps a set of 'features' (the log measurements) to a class (the facies type). We will use the pandas library to load the data into a dataframe, which provides a convenient data structure to work with well log data.
Step1: This data is from the Council Grove gas reservoir in Southwest Kansas. The Panoma Council Grove Field is predominantly a carbonate gas reservoir encompassing 2700 square miles in Southwestern Kansas. This dataset is from nine wells (with 4149 examples), consisting of a set of seven predictor variables and a rock facies (class) for each example vector and validation (test) data (830 examples from two wells) having the same seven predictor variables in the feature vector. Facies are based on examination of cores from nine wells taken vertically at half-foot intervals. Predictor variables include five from wireline log measurements and two geologic constraining variables that are derived from geologic knowledge. These are essentially continuous variables sampled at a half-foot sample rate.
The seven predictor variables are
Step2: This is a quick view of the statistical distribution of the input variables. Looking at the count values, there are 3232 feature vectors in the training set.
Remove a single well to use as a blind test later.
These are the names of the 10 training wells in the Council Grove reservoir. Data has been recruited into pseudo-well 'Recruit F9' to better represent facies 9, the Phylloid-algal bafflestone.
Before we plot the well data, let's define a color map so the facies are represented by consistent color in all the plots in this tutorial. We also create the abbreviated facies labels, and add those to the facies_vectors dataframe.
Step3: Let's take a look at the data from individual wells in a more familiar log plot form. We will create plots for the five well log variables, as well as a log for facies labels. The plots are based on the those described in Alessandro Amato del Monte's excellent tutorial.
Step4: Placing the log plotting code in a function will make it easy to plot the logs from multiples wells, and can be reused later to view the results when we apply the facies classification model to other wells. The function was written to take a list of colors and facies labels as parameters.
We then show log plots for wells SHRIMPLIN.
Step5: In addition to individual wells, we can look at how the various facies are represented by the entire training set. Let's plot a histogram of the number of training examples for each facies class.
Step6: This shows the distribution of examples by facies for the examples in the training set. Dolomite (facies 7) has the fewest with 81 examples. Depending on the performance of the classifier we are going to train, we may consider getting more examples of these facies.
Crossplots are a familiar tool in the geosciences to visualize how two properties vary with rock type. This dataset contains 5 log variables, and scatter matrix can help to quickly visualize the variation between the all the variables in the dataset. We can employ the very useful Seaborn library to quickly create a nice looking scatter matrix. Each pane in the plot shows the relationship between two of the variables on the x and y axis, with each point is colored according to its facies. The same colormap is used to represent the 9 facies.
Conditioning the data set
Now we extract just the feature variables we need to perform the classification. The predictor variables are the five wireline values and two geologic constraining variables. We also get a vector of the facies labels that correspond to each feature vector.
Step7: Scikit includes a preprocessing module that can 'standardize' the data (giving each variable zero mean and unit variance, also called whitening). Many machine learning algorithms assume features will be standard normally distributed data (ie
Step8: Scikit also includes a handy function to randomly split the training data into training and test sets. The test set contains a small subset of feature vectors that are not used to train the network. Because we know the true facies labels for these examples, we can compare the results of the classifier to the actual facies and determine the accuracy of the model. Let's use 20% of the data for the test set.
Step9: Training the classifier using Random forest
Now we use the cleaned and conditioned training set to create a facies classifier. Lets try random forest
Step10: Now we can train the classifier using the training set we created above.
Step11: Now that the model has been trained on our data, we can use it to predict the facies of the feature vectors in the test set.
Step12: We need some metrics to evaluate how good our classifier is doing. A confusion matrix is a table that can be used to describe the performance of a classification model. Scikit-learn allows us to easily create a confusion matrix by supplying the actual and predicted facies labels.
The confusion matrix is simply a 2D array. The entries of confusion matrix C[i][j] are equal to the number of observations predicted to have facies j, but are known to have facies i.
To simplify reading the confusion matrix, a function has been written to display the matrix along with facies labels and various error metrics. See the file classification_utilities.py in this repo for the display_cm() function.
Step13: The rows of the confusion matrix correspond to the actual facies labels. The columns correspond to the labels assigned by the classifier. For example, consider the first row. For the feature vectors in the test set that actually have label SS, 18 were correctly indentified as SS, 5 were classified as CSiS and 1 was classified as FSiS.
The entries along the diagonal are the facies that have been correctly classified. Below we define two functions that will give an overall value for how the algorithm is performing. The accuracy is defined as the number of correct classifications divided by the total number of classifications.
Step14: As noted above, the boundaries between the facies classes are not all sharp, and some of them blend into one another. The error within these 'adjacent facies' can also be calculated. We define an array to represent the facies adjacent to each other. For facies label i, adjacent_facies[i] is an array of the adjacent facies labels.
Step15: Applying the classification model to the blind data
We held a well back from the training, and stored it in a dataframe called blind
Step16: The label vector is just the Facies column
Step17: We can form the feature matrix by dropping some of the columns and making a new dataframe
Step18: Now we can transform this with the scaler we made before
Step19: Now it's a simple matter of making a prediction and storing it back in the dataframe
Step20: Let's see how we did with the confusion matrix
Step21: The results are 0.43 accuracy on facies classification of blind data and 0.87 adjacent facies classification.
Step22: ...but does remarkably well on the adjacent facies predictions.
Step23: Applying the classification model to new data
Now that we have a trained facies classification model we can use it to identify facies in wells that do not have core data. In this case, we will apply the classifier to two wells, but we could use it on any number of wells for which we have the same set of well logs for input.
This dataset is similar to the training data except it does not have facies labels. It is loaded into a dataframe called test_data.
Step24: The data needs to be scaled using the same constants we used for the training data.
Step25: Finally we predict facies labels for the unknown data, and store the results in a Facies column of the test_data dataframe.
Step26: We can use the well log plot to view the classification results along with the well logs.
Step27: Finally we can write out a csv file with the well data along with the facies classification results. | Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from mpl_toolkits.axes_grid1 import make_axes_locatable
from sklearn.ensemble import RandomForestClassifier
from pandas import set_option
set_option("display.max_rows", 10)
pd.options.mode.chained_assignment = None
filename = 'facies_vectors.csv'
training_data = pd.read_csv(filename)
training_data
Explanation: Facies classification using Machine Learning- Random Forest
Contest entry by Priyanka Raghavan and Steve Hall
This notebook demonstrates how to train a machine learning algorithm to predict facies from well log data. The dataset we will use comes from a class excercise from The University of Kansas on Neural Networks and Fuzzy Systems. This exercise is based on a consortium project to use machine learning techniques to create a reservoir model of the largest gas fields in North America, the Hugoton and Panoma Fields. For more info on the origin of the data, see Bohling and Dubois (2003) and Dubois et al. (2007).
The dataset we will use is log data from nine wells that have been labeled with a facies type based on oberservation of core. We will use this log data to train a Logistical regression classifier to classify facies types. We will use simple logistics regression to classify wells scikit-learn.
First we will explore the dataset. We will load the training data from 9 wells, and take a look at what we have to work with. We will plot the data from a couple wells, and create cross plots to look at the variation within the data.
Next we will condition the data set. We will remove the entries that have incomplete data. The data will be scaled to have zero mean and unit variance. We will also split the data into training and test sets.
We will then be ready to build the classifier.
Finally, once we have a built and tuned the classifier, we can apply the trained model to classify facies in wells which do not already have labels. We will apply the classifier to two wells, but in principle you could apply the classifier to any number of wells that had the same log data.
Exploring the dataset
First, we will examine the data set we will use to train the classifier. The training data is contained in the file facies_vectors.csv. The dataset consists of 5 wireline log measurements, two indicator variables and a facies label at half foot intervals. In machine learning terminology, each log measurement is a feature vector that maps a set of 'features' (the log measurements) to a class (the facies type). We will use the pandas library to load the data into a dataframe, which provides a convenient data structure to work with well log data.
End of explanation
training_data['Well Name'] = training_data['Well Name'].astype('category')
training_data['Formation'] = training_data['Formation'].astype('category')
training_data['Well Name'].unique()
training_data.describe()
Explanation: This data is from the Council Grove gas reservoir in Southwest Kansas. The Panoma Council Grove Field is predominantly a carbonate gas reservoir encompassing 2700 square miles in Southwestern Kansas. This dataset is from nine wells (with 4149 examples), consisting of a set of seven predictor variables and a rock facies (class) for each example vector and validation (test) data (830 examples from two wells) having the same seven predictor variables in the feature vector. Facies are based on examination of cores from nine wells taken vertically at half-foot intervals. Predictor variables include five from wireline log measurements and two geologic constraining variables that are derived from geologic knowledge. These are essentially continuous variables sampled at a half-foot sample rate.
The seven predictor variables are:
* Five wire line log curves include gamma ray (GR), resistivity logging (ILD_log10),
photoelectric effect (PE), neutron-density porosity difference and average neutron-density porosity (DeltaPHI and PHIND). Note, some wells do not have PE.
* Two geologic constraining variables: nonmarine-marine indicator (NM_M) and relative position (RELPOS)
The nine discrete facies (classes of rocks) are:
1. Nonmarine sandstone
2. Nonmarine coarse siltstone
3. Nonmarine fine siltstone
4. Marine siltstone and shale
5. Mudstone (limestone)
6. Wackestone (limestone)
7. Dolomite
8. Packstone-grainstone (limestone)
9. Phylloid-algal bafflestone (limestone)
These facies aren't discrete, and gradually blend into one another. Some have neighboring facies that are rather close. Mislabeling within these neighboring facies can be expected to occur. The following table lists the facies, their abbreviated labels and their approximate neighbors.
Facies |Label| Adjacent Facies
:---: | :---: |:--:
1 |SS| 2
2 |CSiS| 1,3
3 |FSiS| 2
4 |SiSh| 5
5 |MS| 4,6
6 |WS| 5,7
7 |D| 6,8
8 |PS| 6,7,9
9 |BS| 7,8
Let's clean up this dataset. The 'Well Name' and 'Formation' columns can be turned into a categorical data type.
End of explanation
# 1=sandstone 2=c_siltstone 3=f_siltstone
# 4=marine_silt_shale 5=mudstone 6=wackestone 7=dolomite
# 8=packstone 9=bafflestone
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00',
'#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS',
'WS', 'D','PS', 'BS']
#facies_color_map is a dictionary that maps facies labels
#to their respective colors
facies_color_map = {}
for ind, label in enumerate(facies_labels):
facies_color_map[label] = facies_colors[ind]
def label_facies(row, labels):
return labels[ row['Facies'] -1]
#training_data.loc[:,'FaciesLabels'] = training_data.apply(lambda row: label_facies(row, facies_labels), axis=1)
faciesVals = training_data['Facies'].values
well = training_data['Well Name'].values
mpl.rcParams['figure.figsize'] = (20.0, 10.0)
for w_idx, w in enumerate(np.unique(well)):
ax = plt.subplot(3, 4, w_idx+1)
hist = np.histogram(faciesVals[well == w], bins=np.arange(len(facies_labels)+1)+.5)
plt.bar(np.arange(len(hist[0])), hist[0], color=facies_colors, align='center')
ax.set_xticks(np.arange(len(hist[0])))
ax.set_xticklabels(facies_labels)
ax.set_title(w)
blind = training_data[training_data['Well Name'] == 'NEWBY']
training_data = training_data[training_data['Well Name'] != 'NEWBY']
training_data.loc[:,'FaciesLabels'] = training_data.apply(lambda row: label_facies(row, facies_labels), axis=1)
PE_mask = training_data['PE'].notnull().values
training_data = training_data[PE_mask]
Explanation: This is a quick view of the statistical distribution of the input variables. Looking at the count values, there are 3232 feature vectors in the training set.
Remove a single well to use as a blind test later.
These are the names of the 10 training wells in the Council Grove reservoir. Data has been recruited into pseudo-well 'Recruit F9' to better represent facies 9, the Phylloid-algal bafflestone.
Before we plot the well data, let's define a color map so the facies are represented by consistent color in all the plots in this tutorial. We also create the abbreviated facies labels, and add those to the facies_vectors dataframe.
End of explanation
def make_facies_log_plot(logs, facies_colors):
#make sure logs are sorted by depth
logs = logs.sort_values(by='Depth')
cmap_facies = colors.ListedColormap(
facies_colors[0:len(facies_colors)], 'indexed')
ztop=logs.Depth.min(); zbot=logs.Depth.max()
cluster=np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1)
f, ax = plt.subplots(nrows=1, ncols=6, figsize=(8, 12))
ax[0].plot(logs.GR, logs.Depth, '-g')
ax[1].plot(logs.ILD_log10, logs.Depth, '-')
ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5')
ax[3].plot(logs.PHIND, logs.Depth, '-', color='r')
ax[4].plot(logs.PE, logs.Depth, '-', color='black')
im=ax[5].imshow(cluster, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
divider = make_axes_locatable(ax[5])
cax = divider.append_axes("right", size="20%", pad=0.05)
cbar=plt.colorbar(im, cax=cax)
cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS',
'SiSh', ' MS ', ' WS ', ' D ',
' PS ', ' BS ']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
for i in range(len(ax)-1):
ax[i].set_ylim(ztop,zbot)
ax[i].invert_yaxis()
ax[i].grid()
ax[i].locator_params(axis='x', nbins=3)
ax[0].set_xlabel("GR")
ax[0].set_xlim(logs.GR.min(),logs.GR.max())
ax[1].set_xlabel("ILD_log10")
ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())
ax[2].set_xlabel("DeltaPHI")
ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())
ax[3].set_xlabel("PHIND")
ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())
ax[4].set_xlabel("PE")
ax[4].set_xlim(logs.PE.min(),logs.PE.max())
ax[5].set_xlabel('Facies')
ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])
ax[4].set_yticklabels([]); ax[5].set_yticklabels([])
ax[5].set_xticklabels([])
f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)
Explanation: Let's take a look at the data from individual wells in a more familiar log plot form. We will create plots for the five well log variables, as well as a log for facies labels. The plots are based on the those described in Alessandro Amato del Monte's excellent tutorial.
End of explanation
make_facies_log_plot(
training_data[training_data['Well Name'] == 'SHRIMPLIN'],
facies_colors)
Explanation: Placing the log plotting code in a function will make it easy to plot the logs from multiples wells, and can be reused later to view the results when we apply the facies classification model to other wells. The function was written to take a list of colors and facies labels as parameters.
We then show log plots for wells SHRIMPLIN.
End of explanation
#count the number of unique entries for each facies, sort them by
#facies number (instead of by number of entries)
facies_counts = training_data['Facies'].value_counts().sort_index()
#use facies labels to index each count
facies_counts.index = facies_labels
facies_counts.plot(kind='bar',color=facies_colors,
title='Distribution of Training Data by Facies')
facies_counts
Explanation: In addition to individual wells, we can look at how the various facies are represented by the entire training set. Let's plot a histogram of the number of training examples for each facies class.
End of explanation
correct_facies_labels = training_data['Facies'].values
feature_vectors = training_data.drop(['Formation', 'Well Name', 'Depth','Facies','FaciesLabels'], axis=1)
feature_vectors.describe()
Explanation: This shows the distribution of examples by facies for the examples in the training set. Dolomite (facies 7) has the fewest with 81 examples. Depending on the performance of the classifier we are going to train, we may consider getting more examples of these facies.
Crossplots are a familiar tool in the geosciences to visualize how two properties vary with rock type. This dataset contains 5 log variables, and scatter matrix can help to quickly visualize the variation between the all the variables in the dataset. We can employ the very useful Seaborn library to quickly create a nice looking scatter matrix. Each pane in the plot shows the relationship between two of the variables on the x and y axis, with each point is colored according to its facies. The same colormap is used to represent the 9 facies.
Conditioning the data set
Now we extract just the feature variables we need to perform the classification. The predictor variables are the five wireline values and two geologic constraining variables. We also get a vector of the facies labels that correspond to each feature vector.
End of explanation
from sklearn import preprocessing
scaler = preprocessing.StandardScaler().fit(feature_vectors)
scaled_features = scaler.transform(feature_vectors)
feature_vectors
Explanation: Scikit includes a preprocessing module that can 'standardize' the data (giving each variable zero mean and unit variance, also called whitening). Many machine learning algorithms assume features will be standard normally distributed data (ie: Gaussian with zero mean and unit variance). The factors used to standardize the training set must be applied to any subsequent feature set that will be input to the classifier. The StandardScalar class can be fit to the training set, and later used to standardize any training data.
End of explanation
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
scaled_features, correct_facies_labels, test_size=0.1, random_state=42)
Explanation: Scikit also includes a handy function to randomly split the training data into training and test sets. The test set contains a small subset of feature vectors that are not used to train the network. Because we know the true facies labels for these examples, we can compare the results of the classifier to the actual facies and determine the accuracy of the model. Let's use 20% of the data for the test set.
End of explanation
clf = RandomForestClassifier(n_estimators=150,
min_samples_leaf= 50,class_weight="balanced",oob_score=True,random_state=50
)
Explanation: Training the classifier using Random forest
Now we use the cleaned and conditioned training set to create a facies classifier. Lets try random forest
End of explanation
clf.fit(X_train,y_train)
Explanation: Now we can train the classifier using the training set we created above.
End of explanation
predicted_labels = clf.predict(X_test)
Explanation: Now that the model has been trained on our data, we can use it to predict the facies of the feature vectors in the test set.
End of explanation
from sklearn.metrics import confusion_matrix
from classification_utilities import display_cm, display_adj_cm
conf = confusion_matrix(y_test, predicted_labels)
display_cm(conf, facies_labels, hide_zeros=True)
Explanation: We need some metrics to evaluate how good our classifier is doing. A confusion matrix is a table that can be used to describe the performance of a classification model. Scikit-learn allows us to easily create a confusion matrix by supplying the actual and predicted facies labels.
The confusion matrix is simply a 2D array. The entries of confusion matrix C[i][j] are equal to the number of observations predicted to have facies j, but are known to have facies i.
To simplify reading the confusion matrix, a function has been written to display the matrix along with facies labels and various error metrics. See the file classification_utilities.py in this repo for the display_cm() function.
End of explanation
def accuracy(conf):
total_correct = 0.
nb_classes = conf.shape[0]
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
acc = total_correct/sum(sum(conf))
return acc
Explanation: The rows of the confusion matrix correspond to the actual facies labels. The columns correspond to the labels assigned by the classifier. For example, consider the first row. For the feature vectors in the test set that actually have label SS, 18 were correctly indentified as SS, 5 were classified as CSiS and 1 was classified as FSiS.
The entries along the diagonal are the facies that have been correctly classified. Below we define two functions that will give an overall value for how the algorithm is performing. The accuracy is defined as the number of correct classifications divided by the total number of classifications.
End of explanation
adjacent_facies = np.array([[1], [0,2], [1], [4], [3,5], [4,6,7], [5,7], [5,6,8], [6,7]])
def accuracy_adjacent(conf, adjacent_facies):
nb_classes = conf.shape[0]
total_correct = 0.
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
for j in adjacent_facies[i]:
total_correct += conf[i][j]
return total_correct / sum(sum(conf))
print('Facies classification accuracy = %f' % accuracy(conf))
print('Adjacent facies classification accuracy = %f' % accuracy_adjacent(conf, adjacent_facies))
Explanation: As noted above, the boundaries between the facies classes are not all sharp, and some of them blend into one another. The error within these 'adjacent facies' can also be calculated. We define an array to represent the facies adjacent to each other. For facies label i, adjacent_facies[i] is an array of the adjacent facies labels.
End of explanation
blind
Explanation: Applying the classification model to the blind data
We held a well back from the training, and stored it in a dataframe called blind:
End of explanation
y_blind = blind['Facies'].values
Explanation: The label vector is just the Facies column:
End of explanation
well_features = blind.drop(['Facies', 'Formation', 'Well Name', 'Depth'], axis=1)
Explanation: We can form the feature matrix by dropping some of the columns and making a new dataframe:
End of explanation
X_blind = scaler.transform(well_features)
Explanation: Now we can transform this with the scaler we made before:
End of explanation
y_pred = clf.predict(X_blind)
blind['Prediction'] = y_pred
Explanation: Now it's a simple matter of making a prediction and storing it back in the dataframe:
End of explanation
cv_conf = confusion_matrix(y_blind, y_pred)
print('Optimized facies classification accuracy = %.2f' % accuracy(cv_conf))
print('Optimized adjacent facies classification accuracy = %.2f' % accuracy_adjacent(cv_conf, adjacent_facies))
Explanation: Let's see how we did with the confusion matrix:
End of explanation
display_cm(cv_conf, facies_labels,
display_metrics=True, hide_zeros=True)
Explanation: The results are 0.43 accuracy on facies classification of blind data and 0.87 adjacent facies classification.
End of explanation
display_adj_cm(cv_conf, facies_labels, adjacent_facies,
display_metrics=True, hide_zeros=True)
def compare_facies_plot(logs, compadre, facies_colors):
#make sure logs are sorted by depth
logs = logs.sort_values(by='Depth')
cmap_facies = colors.ListedColormap(
facies_colors[0:len(facies_colors)], 'indexed')
ztop=logs.Depth.min(); zbot=logs.Depth.max()
cluster1 = np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1)
cluster2 = np.repeat(np.expand_dims(logs[compadre].values,1), 100, 1)
f, ax = plt.subplots(nrows=1, ncols=7, figsize=(9, 12))
ax[0].plot(logs.GR, logs.Depth, '-g')
ax[1].plot(logs.ILD_log10, logs.Depth, '-')
ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5')
ax[3].plot(logs.PHIND, logs.Depth, '-', color='r')
ax[4].plot(logs.PE, logs.Depth, '-', color='black')
im1 = ax[5].imshow(cluster1, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
im2 = ax[6].imshow(cluster2, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
divider = make_axes_locatable(ax[6])
cax = divider.append_axes("right", size="20%", pad=0.05)
cbar=plt.colorbar(im2, cax=cax)
cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS',
'SiSh', ' MS ', ' WS ', ' D ',
' PS ', ' BS ']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
for i in range(len(ax)-2):
ax[i].set_ylim(ztop,zbot)
ax[i].invert_yaxis()
ax[i].grid()
ax[i].locator_params(axis='x', nbins=3)
ax[0].set_xlabel("GR")
ax[0].set_xlim(logs.GR.min(),logs.GR.max())
ax[1].set_xlabel("ILD_log10")
ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())
ax[2].set_xlabel("DeltaPHI")
ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())
ax[3].set_xlabel("PHIND")
ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())
ax[4].set_xlabel("PE")
ax[4].set_xlim(logs.PE.min(),logs.PE.max())
ax[5].set_xlabel('Facies')
ax[6].set_xlabel(compadre)
ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])
ax[4].set_yticklabels([]); ax[5].set_yticklabels([])
ax[5].set_xticklabels([])
ax[6].set_xticklabels([])
f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)
compare_facies_plot(blind, 'Prediction', facies_colors)
Explanation: ...but does remarkably well on the adjacent facies predictions.
End of explanation
well_data = pd.read_csv('validation_data_nofacies.csv')
well_data['Well Name'] = well_data['Well Name'].astype('category')
well_features = well_data.drop(['Formation', 'Well Name', 'Depth'], axis=1)
Explanation: Applying the classification model to new data
Now that we have a trained facies classification model we can use it to identify facies in wells that do not have core data. In this case, we will apply the classifier to two wells, but we could use it on any number of wells for which we have the same set of well logs for input.
This dataset is similar to the training data except it does not have facies labels. It is loaded into a dataframe called test_data.
End of explanation
X_unknown = scaler.transform(well_features)
Explanation: The data needs to be scaled using the same constants we used for the training data.
End of explanation
#predict facies of unclassified data
y_unknown = clf.predict(X_unknown)
well_data['Facies'] = y_unknown
well_data
well_data['Well Name'].unique()
Explanation: Finally we predict facies labels for the unknown data, and store the results in a Facies column of the test_data dataframe.
End of explanation
make_facies_log_plot(
well_data[well_data['Well Name'] == 'STUART'],
facies_colors=facies_colors)
make_facies_log_plot(
well_data[well_data['Well Name'] == 'CRAWFORD'],
facies_colors=facies_colors)
Explanation: We can use the well log plot to view the classification results along with the well logs.
End of explanation
well_data.to_csv('well_data_with_facies.csv')
Explanation: Finally we can write out a csv file with the well data along with the facies classification results.
End of explanation |
2,303 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cmcc', 'cmcc-cm2-vhr4', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: CMCC
Source ID: CMCC-CM2-VHR4
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:50
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmopshere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependancies on snow free albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river soil hydrology in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General describe how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated , describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treament of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dyanmics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen depencence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintainence respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the notrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If couple to atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation |
2,304 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Level 3
In diesem Level lernen wir neue Datentypen, wie list, tuple, dict, set und frozenset kennen und lernen über Objekte dieser Typen mittels einer for-Schleife zu iterieren. Wir werden die Schlüsselwörter del und for kennenlernen und auch den Schlüsselwörtern in, break, continue und else ein weiteres Mal begegnen.
Einstieg
Bisher können wir Werte in Variablen speichern, das funktioniert auch, solange wir wissen, wieviele Werte wir speichern müssen. Das muss aber nicht der Fall sein. Die Datentypen, die wir in diesem Level kennenlernen ermöglichen es meherere Werte in einem Objekt zu speichern. Jeder dieser Typen hat dabei seine Besonderheiten, die wir im Laufe des Levels lernen werden.
list
Der list Typ ist der erste, den wir uns in diesem Level anschauen möchten. Er bietet viele Möglichkeiten an, ist einfach zu bedienen, weshalb er auch häufig benutzt wird. Im Gegensatz zu allen anderen Typen, die wir bereits kennengelernt haben und auch einigen anderen, die wir noch kennenlernen, ist die Liste ein veränderlicher Typ. Das heißt wir können ein list Objekt ändern und müssen es nicht überschreiben. Die Elemente eines list Objektes können einen beliebigen Typen haben.
Erstellen einer Liste
Zuerst erstellen wir eine leere Liste, entweder benutzen wir dafür die list() Funktion oder eckige Klammern [], welche die Literale einer Liste darstellen.
Step1: Es ist allerdings möglich eine Liste zu erstellen, die bereits Einträge enthält. Diese Einträge nennen wir Elemente. Für das erstellen können wir wieder die Literale nehmen oder die list() Funktion. Beim Benutzen der list() Funktion müssen wir allerdings beachten, dass diese ein Objekt in eine Liste umwandelt und das nicht mit allen Objekten geht. In dem Beispiel unten erstellen wir aus einem string eine Liste.
Step2: Operatoren
Für Listen gibt es ebenfalls einen + und einen * Operator
Step3: Zugriff auf Elemente
Auf die Elemente einer Liste wird über deren Index zugegriffen. Dieser ist ein integer startet beim ersten Element mit 0.
Step4: Es ist allerdings auch möglich mit negativen Indices zu arbeiten. Diese starten beim letzten Element der Liste mit dem Index -1. Dadurch wird es möglich auf die letzten Elemente der Liste zuzugreifen ohne die len() Funktion benutzen zu müssen. <br>
So können wir die Liste von eben auch rückwärts durchlaufen
Step5: Slicing
Es ist nicht nur möglich auf die gesamte Liste oder auf einzelne Elemente zuzugreifen. Mit slicing können wir auch auf Teile einer Liste zugreifen.
python
liste[start
Step6: start, stop und step können allerdings auch weggelassen werden.
* Wird start weggelassen startet die Teilliste mit dem ersten Element.
* Wird stop weggelassen, endet die Teilliste mit dem letzten Element.
* wird step weggelassen, ist die Schrittweite 1
Step7: Elemente hinzufügen
Nachdem wir eine Liste erstellt haben, möchten gegebenenfalls auch Elemente hinzufügen. Dazu gibt es mehrere Möglichkeiten. Wir können mit Hilfe der list.append() Methode ein Element an unsere Liste hinten anhängen oder mit der list.insert() Methode ein Element an einem Index in unsere Liste einfügen, wobei die Elemente nach diesem Index nach hinten aufrücken.
Step8: Elemente finden
Eventuell wollen wir wissen ob ein Element in unserer Liste enthalten ist, und wenn es in unserer Liste enthalten ist, wollen wir eventuell wissen an welcher Stelle oder wie oft.
Zuerst wollen wir lernen, wie wir mit dem Schlüsselwort in feststellen, ob ein Element in unserer Liste enthalten ist
Step9: Aber vorsicht
Step10: Scheinbar ist der integer 1 in der Liste enthalten, obwohl keiner der Einträge auf den ersten Blick danach aussieht. Also wird ein Element unserer Liste als 1 interpretiert oder ist == 1. Um rauszufinden an welcher Stelle sich dieses Element befindet können wir die list.index() Methode benutzen. Dabei müssen wir allerdings vorsichtig sein, versuchen wir nämlich den Index eines Elementes zu finden, der nicht in der Liste enthalten ist, erhalten wir einen Fehler.
Step11: Der boolean True auf dem Index 1 wird hier also als 1 erkannt. Dieses Phänomen tritt allerdings nur mit 1 und True und 0 und False auf. Um dieses Problem zu umgehen nutzen wir im folgenden eine Liste mit anderen Elementen.
Step12: Wie wir sehen können zeigt und list.index() lediglich das erste Auftreten eines Elementes an, auch wenn dieses Element mehrfach in der Liste auftaucht.
Um rauszufinden wie häufig ein Element in unserer Liste auftaucht können wir die list.count() Methode benutzen.
Step13: Dabei sehen wir auch, dass uns die list.count() Methode keinen Fehler gibt, wenn ein Element (im obigen Fall 'e') nicht in der Liste enthalten ist, sondern 0 zurück gibt.
Elemente entfernen
Wenn wir Elemente entfernen wollen, haben wir auch wieder mehrere Möglichkeiten, die sich im Wesentlichen in unserer Herangehungsweise entscheiden. Kennen wir den Index des Elementes, welches wir aus der Liste entfernen möchten, können wir das Schlüsselwort del oder die list.pop() Methode verwenden, kennen wir jedoch das Element, das wir entfernen möchten, benutzen wir die list.remove() Methode.
Step14: Wie wir sehen, können wir nicht nur einzelne Elemente einer Liste anhand ihres Indexes, sondern auch die gesamte Liste entfernen. Das Schlüsselwort del entfernt die Referenz einer Variable und somit die Variable, weshalb wir auch einen NameError erhalten, wenn wir versuchen die Variable zu benutzen, nachdem wir sie gelöscht haben.
Step15: Oben haben wir statt des del Schlüsselwortes die list.pop() Methode benutzt. Das hat den Vorteil, dass uns die list.pop() Methode das Element, welches wir aus der Liste entfernt haben, zurück gibt. Wenn wir der list.pop() keinen Index mitgeben, entfernt sie standardmäßig das letzte Element. Wenn wir der list.pop() Methode einen Index geben, entfernt sie das Element an diesem Index aus der Liste.
Step16: Nun wollen wir ein Element, dessen Wert wir kennen aus der Liste entfernen. Dazu benutzen wir die list.remove() Methode. Diese entfernt das erste Auftreten des Wertes, den wir ihr geben, aus der Liste.
Step17: Verschachtelung
Da wir in einer Liste Elemente beliebigen Typs speichern können, können wir auch eine Liste als Element einer Liste speichern. Auf die innere Liste können wir dann genauso zugreifen, wie auf jedes andere Element.
Step18: Die äußere Liste enthält zwei Elemente, die in diesem Fall jeweils Listen sind.
Step19: Die vergessenen Methoden
Wenn wir uns mit der dir() Funktion die Methoden eines list Objektes ansehen und alle Einträge mit __ am Anfang und Ende des Namens ignorieren, stellen wir fest, dass wir noch einige Methoden nicht behandelt haben.
Step20: Da uns die dir() Funktion eine Liste zurückgibt, können wir uns diese Methoden ausgeben lassen, indem wir die letzten 11 Elemente anzeigen
Step21: list.clear()
Die list.clear() Methode können wir benutzen um alle Elemente einer Liste zu entfernen. Der Anwendungsfall ist relativ begrenzt, da wir auch einfach eine leere Liste erstellen können.
Step22: list.copy()
Die list.copy() Methode kann benutzt werden um eine Kopie der Liste zu erstellen. Auch hier gibt es eine alternative Möglichkeit über Slicing, dasselbe zu erreichen.
Step23: list.extend()
Die list.extend() Methode kann benutzt werden um an die bestehende Liste eine andere Liste anzuhängen.
Step24: list.reverse()
Mit der list.reverse() können wir eine Liste umdrehen. Wie auch die voherigen Methoden gibt es auch hier Alternativen.
Step25: list.sort()
Mit list.sort() lässt sich eine Liste sortieren, solange die Elemente miteinander vergleichbar sind.
Step26: tuple
Ein tuple ist grob gesagt eine unveränderliche Liste. Ein Tupel hat eine Länge, Elemente können nicht entfernt oder hinzugefügt werden. Lediglich die Elemente eines Tupels lassen sich ändern, wenn sie veränderlich sind.
Step27: Ein Tuple lässt sich über die tuple() Funktion oder über runde Klammern () definieren. Die runden Klammern können wir allerdings meistens weglassen. Der Zugriff auf die Elemente funktioniert, wie bei Listen sowohl über den Index, als auch über Slicing.
Step28: Wenn wir uns die Methoden von einem Tuple anschauen, stellen wir fest, dass es nur zwei "normale" Methoden gibt, die wir auch schon von der Liste kennen, nämlich tuple.count() und tuple.index(). Desweiteren können wir auf einen Tuple auch die len() Funktion und das Schlüsselwort in anwenden.
Step29: Wir können Tuple auch benutzen um den Wert zweier Variablen zu tauschen. Bisher würden wir dafür den Wert einer Variable in einer temporären Variable (tmp) speichern und die Werte so tauschen.
Step30: Durch die Verwendung von Tuplen können wir nun auf unsere temporäre Variable verzichten und den Code lesbarer gestalten
Step31: Strings
Jetzt da wir Listen und Tuple kennengelernt haben lohnt es sich, nochmal Strings anzuschauen, da wir auch hier auch auf einzelne Zeichen mit dem Index zugreifen können. So wie bei Listen und Tupeln, können wir auch bei Strings die str.count() und str.index() Methoden verwenden.
Step32: dict
Ein Dictionary speichert Werte nicht anhand ihres Indexes, sondern anhand eines Schlüssels.
Erstellen eines dict
Ein leeres Dictionary kann auf zwei Arten erstellt werden
Step33: Möchten wir ein Dictionary mit Einträgen erstellen, können wir dies entweder durch die dict() Funktion erreichen, in der wir Schlüssel und Werte als Liste von Tuplen mit zwei Elementen übergeben, oder indem wir Schlüssel und Werte durch Doppelpunkte
Step34: Als Schlüssel sind Werte aller unveränderlichen Typen erlaubt, in einem Dictionary müssen die Schlüssel auch nicht denselben Typen haben, es ergibt sich meistens aber, dass die Schlüssel denselben Typen haben.
Step35: Zugriff
Der Zugriff auf die Elemente erfolgt dann über den entsprechenden Schlüssel
Step36: Alternativ kann für den Zugriff die dict.get() Methode benutzt werden. Diese ermöglicht es auch einen Standardwert anzugeben, wenn der Schlüssel in dem Dictionary nicht vorhanden ist.
Step37: Den Wert zu einem Schlüssel können wir setzen indem wir einem Schlüssel einen Wert zuweisen, existiert dieser Schlüssel bereits, wird sein Wert überschrieben.
Step38: Natürlich kann man auch das in Schlüsselwort mit Dictionaries benutzen
Step39: Die len() Funktion liefert bei einem Dictionary die Anzahl an Schlüsseln wieder
Step40: items(), keys() und values()
Die drei Methoden dict.items(), dict.keys() und dict.values() sind sich relativ ähnlich, weshalb wir sie zusammen betrachten wollen.
Step41: pop()
Auch Dictionaries besitzen eine dict.pop() Methode.
Step42: Wie wir sehen funktioniert die dict.pop() Methode ähnlich, wie bei den Listen. Der Wert mit dem angegebenen Schlüssel wird zurückgegeben und aus dem Dictionary entfernt.
Die for-Schleife
Die for-Schleife kann benutzt werden um über verschiedene Objekte zu iterieren. Dabei ist die Syntax einer for-Schleife die folgende
Step43: range()
Die range() Funktion ist in vielerlei Hinsicht praktisch. Sie ermöglicht es uns auf einfache, gut lesbare Art und Weise Zahlenfolgen zu erzeugen. Warum das so praktisch ist werden wir gleich sehen.
python
range(stop)
range(start, stop[, step])
Wir können range() entweder nur mit einem stop Wert aufrufen (dieser muss ein integer sein), oder mit einem start, einem stop und einen optionalen step Wert. Bei beiden Varianten erhalten wir ein range Objekt.
Diese verhalten sich ähnlich wie die Werte beim Slicing. Geben wir keinen Wert für start an startet unser range Objekt bei 0, geben wir keinen Wert für step an, ist die Schrittweite 1.
Step44: Genauso, wie beim Slicing können die Werte auch negativ sein.
Step45: Ein range Objekt kann auch benutzt werden um sehr große Zahlenreihen zu erzeugen, da die Zahlen erst berechnet werden, wenn sie benötigt werden.
Step46: Dadurch, dass wir über ein range Objekt iterieren können, können wir range() gut in einer for-Schleife benutzen.
Step47: break, continue und else
Die Schlüsselwörter break, continue und else können wir innerhalb einer for-Schleife genauso benutzen, wie in einer while-Schleife.
Step48: for-Schleifen können verschachtelt werden, was wir benutzen können um über ein verschachteltes Objekt zu iterieren.
Step49: sets
Sets sind Mengen im mathematischen Sinn. Das bedeutet ein Element kann entweder in einer Menge enthalten sein, oder eben nicht.
Erstellen eines set
Ein Set kann entweder über die set() Funktion aus einem anderen Objekt erzeugt werden, oder über geschweifte Klammern {} welche die Literale eines Sets bilden.
Step50: in und len()
Step51: Elemente hinzufügen
Nach dem wir eine Menge erstellt haben, können wir mit der set.add() Methode Elemente hinzufügen.
Step52: Um nicht nur einzelne Elemente, sondern mehrere Elemente an eine Menge anzuhängen, können wir die set.update() Methode benutzen.
Step53: Elemente entfernen
Um Elemente zu entfernen gibt es zwei Möglichkeiten. Die set.pop() Methode entfernt ein Element aus der Menge und gibt es dabei zurück. Die set.remove() Methode kann dafür benutzt werden, um Elemente anhand ihres Wertes aus der Menge zu entfernen.
Step54: Mengenoperationen
Set-Objekte bieten Mengenoperationen an, die auch aus der Mathematik bekannt sind. Diese sind die Schnittmenge, die Vereinigungsmenge und die Differenzmenge.
Die Schnittmenge enthält alle Elemente, die in beiden Mengen enthalten sind.
Step55: Alternativ können wir auch die set.intersection() Methode benutzen.
Step56: Die Vereinigungsmenge enthält alle Elemente, die in einer der Mengen enthalten sind.
Step57: Alternativ können wir auch die set.union() Methode benutzen.
Step58: Die Differenzmenge einer Menge S1 mit einer Menge S2 enthält alle Elemente, die in der Menge S1 aber nicht in der Menge S2 sind.
Step59: Alternativ können wir auch die set.difference() Methode benutzen.
Step60: Es gibt auch noch die symmetrische Differenzmenge zweier Mengen. Diese enthält alle Elemente, die in einer der beiden Mengen, aber nicht in beiden Mengen enthalten sind.
Step61: Alternativ können wir für die symmetrische Differenz auch die set.symmetric_difference() Methode benutzen.
Step62: Mit der set.isdisjoint() Methode können wir testen ob zwei Mengen disjunkt sind. Zwei Mengen sind disjunkt, wenn ihre Schnittmenge leer ist.
Step63: Mit der set.issubset() und der set.issuperset() Methode können wir feststellen, ob eine Menge eine Teilmenge, beziehungsweise eine Obermenge einer anderen Menge ist. Eine Menge s1 ist Teilmenge einer Menge s2, wenn alle Elemente von s1 in der Menge s2 enthalten sind. s2 ist dann die Obermenge von s1.
Step64: Statt der set.issubset() Methode können wir auch den <= Operator benutzen, statt der set.issuperset() Methode können wir auch den >= Operator benutzen.
Den > Operator können wir benutzen um zu ermitteln, ob eine Menge s1 eine echte Obermenge von einer Menge s2 ist. Dies ist der Fall, wenn s1 eine Obermenge von s2 und nicht die gleiche Menge ist.
Den < Operator können wir benutzen um zu ermitteln, ob eine Menge s1 eine echte Teilmenge einer Menge s2 ist. Dies ist der Fall, wenn s1 eine Teilmenge von s2 ist und nicht die gleiche Menge ist. | Python Code:
leer = list()
leer2 = []
Explanation: Level 3
In diesem Level lernen wir neue Datentypen, wie list, tuple, dict, set und frozenset kennen und lernen über Objekte dieser Typen mittels einer for-Schleife zu iterieren. Wir werden die Schlüsselwörter del und for kennenlernen und auch den Schlüsselwörtern in, break, continue und else ein weiteres Mal begegnen.
Einstieg
Bisher können wir Werte in Variablen speichern, das funktioniert auch, solange wir wissen, wieviele Werte wir speichern müssen. Das muss aber nicht der Fall sein. Die Datentypen, die wir in diesem Level kennenlernen ermöglichen es meherere Werte in einem Objekt zu speichern. Jeder dieser Typen hat dabei seine Besonderheiten, die wir im Laufe des Levels lernen werden.
list
Der list Typ ist der erste, den wir uns in diesem Level anschauen möchten. Er bietet viele Möglichkeiten an, ist einfach zu bedienen, weshalb er auch häufig benutzt wird. Im Gegensatz zu allen anderen Typen, die wir bereits kennengelernt haben und auch einigen anderen, die wir noch kennenlernen, ist die Liste ein veränderlicher Typ. Das heißt wir können ein list Objekt ändern und müssen es nicht überschreiben. Die Elemente eines list Objektes können einen beliebigen Typen haben.
Erstellen einer Liste
Zuerst erstellen wir eine leere Liste, entweder benutzen wir dafür die list() Funktion oder eckige Klammern [], welche die Literale einer Liste darstellen.
End of explanation
liste = [0, True, 4.2, "foo"]
liste2 = list("abracadabra")
print(liste, "Länge:", len(liste))
print(liste2, "Länge:", len(liste2))
Explanation: Es ist allerdings möglich eine Liste zu erstellen, die bereits Einträge enthält. Diese Einträge nennen wir Elemente. Für das erstellen können wir wieder die Literale nehmen oder die list() Funktion. Beim Benutzen der list() Funktion müssen wir allerdings beachten, dass diese ein Objekt in eine Liste umwandelt und das nicht mit allen Objekten geht. In dem Beispiel unten erstellen wir aus einem string eine Liste.
End of explanation
liste = [0, True, 4.2, "foo"]
liste2 = list("abracadabra")
print(liste + liste2)
print(liste * 2)
Explanation: Operatoren
Für Listen gibt es ebenfalls einen + und einen * Operator: <br>
Beim + Operator wird eine Liste erstellt, die erst alle Elemente der ersten Liste und dann alle Elemente der zweiten Liste enthält. <br>
Beim * Operator muss der andere Wert neben der Liste ein integer sein. Es wird eine neue Liste erstellt, welche die Elemente der Liste entsprechend häufig wiederholt.
End of explanation
liste = [0, True, 4.2, "foo"]
index = 0
length = len(liste)
while index < length:
element = liste[index]
print(index, element)
index = index + 1
Explanation: Zugriff auf Elemente
Auf die Elemente einer Liste wird über deren Index zugegriffen. Dieser ist ein integer startet beim ersten Element mit 0.
End of explanation
liste = [0, True, 4.2, "foo"]
index = -1
length = -1 * len(liste)
while index >= length:
element = liste[index]
print(index, element)
index = index - 1
Explanation: Es ist allerdings auch möglich mit negativen Indices zu arbeiten. Diese starten beim letzten Element der Liste mit dem Index -1. Dadurch wird es möglich auf die letzten Elemente der Liste zuzugreifen ohne die len() Funktion benutzen zu müssen. <br>
So können wir die Liste von eben auch rückwärts durchlaufen:
End of explanation
liste = [0, True, 4.2, "foo", False, "spam", "egg", 42]
print(liste[1:-1]) # von Index 1 bis Index -1
print(liste[0:6:2]) # von Index 0 bis Index 6 jedes zweite Element
print(liste[-1:0:-1]) # von Index -1 bis Index 0 jedes Element
Explanation: Slicing
Es ist nicht nur möglich auf die gesamte Liste oder auf einzelne Elemente zuzugreifen. Mit slicing können wir auch auf Teile einer Liste zugreifen.
python
liste[start:stop:step]
* start gibt den Index an, an dem unser Teil der Liste startet
* stop gibt den Index an, bis zu dem unser Teil der Liste geht. Das Element mit diesem Index ist jedoch nicht enthalten.
* step gibt die Schrittweite zwischen zwei Indices unserer Teilliste an
End of explanation
liste = [0, True, 4.2, "foo", False, "spam", "egg", 42]
print(liste[2::2]) # von Index 2 bis zum Ende jedes zweite Element
print(liste[:-1:3]) # vom Start bis Index -1 jedes dritte Element
print(liste[::4]) # vom Start bis zum Ende jedes vierte Element
print(liste[1:]) # von Index 1 bis zum Ende
Explanation: start, stop und step können allerdings auch weggelassen werden.
* Wird start weggelassen startet die Teilliste mit dem ersten Element.
* Wird stop weggelassen, endet die Teilliste mit dem letzten Element.
* wird step weggelassen, ist die Schrittweite 1
End of explanation
liste = [0, True, 4.2, "foo", False, "spam", "egg", 42]
print(liste)
liste.append(3.14)
print(liste)
liste = [0, True, 4.2, "foo", False, "spam", "egg", 42]
print(liste)
liste.insert(4, "test")
print(liste)
Explanation: Elemente hinzufügen
Nachdem wir eine Liste erstellt haben, möchten gegebenenfalls auch Elemente hinzufügen. Dazu gibt es mehrere Möglichkeiten. Wir können mit Hilfe der list.append() Methode ein Element an unsere Liste hinten anhängen oder mit der list.insert() Methode ein Element an einem Index in unsere Liste einfügen, wobei die Elemente nach diesem Index nach hinten aufrücken.
End of explanation
liste = [0, True, 4.2, False, "spam", "egg", 42]
print("egg:", "egg" in liste)
print("ham:", "ham" in liste)
Explanation: Elemente finden
Eventuell wollen wir wissen ob ein Element in unserer Liste enthalten ist, und wenn es in unserer Liste enthalten ist, wollen wir eventuell wissen an welcher Stelle oder wie oft.
Zuerst wollen wir lernen, wie wir mit dem Schlüsselwort in feststellen, ob ein Element in unserer Liste enthalten ist:
End of explanation
liste = [0, True, 4.2, False, "spam", "egg", 42]
print("1:", 1 in liste)
Explanation: Aber vorsicht:
End of explanation
liste = [0, True, 4.2, False, "spam", "egg", 42]
print("1:", 1 in liste)
ind = liste.index(1)
print(liste[ind])
Explanation: Scheinbar ist der integer 1 in der Liste enthalten, obwohl keiner der Einträge auf den ersten Blick danach aussieht. Also wird ein Element unserer Liste als 1 interpretiert oder ist == 1. Um rauszufinden an welcher Stelle sich dieses Element befindet können wir die list.index() Methode benutzen. Dabei müssen wir allerdings vorsichtig sein, versuchen wir nämlich den Index eines Elementes zu finden, der nicht in der Liste enthalten ist, erhalten wir einen Fehler.
End of explanation
liste = list("abracadabra")
print(liste)
print("Erstes Auftreten von 'a':", liste.index("a"))
print("Erstes Auftreten von 'b':", liste.index("b"))
print("Erstes Auftreten von 'c':", liste.index("c"))
print("Erstes Auftreten von 'd':", liste.index("d"))
Explanation: Der boolean True auf dem Index 1 wird hier also als 1 erkannt. Dieses Phänomen tritt allerdings nur mit 1 und True und 0 und False auf. Um dieses Problem zu umgehen nutzen wir im folgenden eine Liste mit anderen Elementen.
End of explanation
liste = ['a', 'b', 'r', 'a', 'c', 'a', 'd', 'a', 'b', 'r', 'a']
print("Anzahl von 'a':", liste.count("a"))
print("Anzahl von 'b':", liste.count("b"))
print("Anzahl von 'c':", liste.count("c"))
print("Anzahl von 'd':", liste.count("d"))
print("Anzahl von 'e':", liste.count("e"))
Explanation: Wie wir sehen können zeigt und list.index() lediglich das erste Auftreten eines Elementes an, auch wenn dieses Element mehrfach in der Liste auftaucht.
Um rauszufinden wie häufig ein Element in unserer Liste auftaucht können wir die list.count() Methode benutzen.
End of explanation
liste = ["foo", "test", 23, "spam", "egg", 42]
print(liste)
del liste[0]
print(liste)
# del liste
# print(liste)
Explanation: Dabei sehen wir auch, dass uns die list.count() Methode keinen Fehler gibt, wenn ein Element (im obigen Fall 'e') nicht in der Liste enthalten ist, sondern 0 zurück gibt.
Elemente entfernen
Wenn wir Elemente entfernen wollen, haben wir auch wieder mehrere Möglichkeiten, die sich im Wesentlichen in unserer Herangehungsweise entscheiden. Kennen wir den Index des Elementes, welches wir aus der Liste entfernen möchten, können wir das Schlüsselwort del oder die list.pop() Methode verwenden, kennen wir jedoch das Element, das wir entfernen möchten, benutzen wir die list.remove() Methode.
End of explanation
liste = ["foo", "test", 23, "spam", "egg", 42]
print(liste)
element = liste.pop()
print(liste)
print("Entfernt:", element)
Explanation: Wie wir sehen, können wir nicht nur einzelne Elemente einer Liste anhand ihres Indexes, sondern auch die gesamte Liste entfernen. Das Schlüsselwort del entfernt die Referenz einer Variable und somit die Variable, weshalb wir auch einen NameError erhalten, wenn wir versuchen die Variable zu benutzen, nachdem wir sie gelöscht haben.
End of explanation
liste = ["foo", "test", 23, "spam", "egg", 42]
print(liste)
element = liste.pop(3)
print(liste)
print("Entfernt:", element)
Explanation: Oben haben wir statt des del Schlüsselwortes die list.pop() Methode benutzt. Das hat den Vorteil, dass uns die list.pop() Methode das Element, welches wir aus der Liste entfernt haben, zurück gibt. Wenn wir der list.pop() keinen Index mitgeben, entfernt sie standardmäßig das letzte Element. Wenn wir der list.pop() Methode einen Index geben, entfernt sie das Element an diesem Index aus der Liste.
End of explanation
liste = list("abracadabra")
print(liste)
liste.remove("a")
print(liste)
liste.remove("a")
print(liste)
Explanation: Nun wollen wir ein Element, dessen Wert wir kennen aus der Liste entfernen. Dazu benutzen wir die list.remove() Methode. Diese entfernt das erste Auftreten des Wertes, den wir ihr geben, aus der Liste.
End of explanation
inner_list1 = [0, 1, 2]
inner_list2 = [3, 4, 5]
outer_list = [inner_list1, inner_list2]
print(outer_list)
print("Länge outer_list:", len(outer_list))
print("outer_list[0]:", outer_list[0])
print("outer_list[1]:", outer_list[1])
Explanation: Verschachtelung
Da wir in einer Liste Elemente beliebigen Typs speichern können, können wir auch eine Liste als Element einer Liste speichern. Auf die innere Liste können wir dann genauso zugreifen, wie auf jedes andere Element.
End of explanation
inner_list1 = [0, 1, 2]
inner_list2 = [3, 4, 5]
outer_list = [inner_list1, inner_list2]
# die jeweils ersten Elemente der inneren Listen:
print("Erstes Element von:")
print("outer_list[0]:", outer_list[0][0])
print("outer_list[1]:", outer_list[1][0])
Explanation: Die äußere Liste enthält zwei Elemente, die in diesem Fall jeweils Listen sind.
End of explanation
dir(list)
Explanation: Die vergessenen Methoden
Wenn wir uns mit der dir() Funktion die Methoden eines list Objektes ansehen und alle Einträge mit __ am Anfang und Ende des Namens ignorieren, stellen wir fest, dass wir noch einige Methoden nicht behandelt haben.
End of explanation
dir(list)[-11:]
Explanation: Da uns die dir() Funktion eine Liste zurückgibt, können wir uns diese Methoden ausgeben lassen, indem wir die letzten 11 Elemente anzeigen:
End of explanation
liste = list("abracadabra")
liste.clear()
print(liste)
# Alternativ:
liste = list("abracadabra")
liste = list()
print(liste)
Explanation: list.clear()
Die list.clear() Methode können wir benutzen um alle Elemente einer Liste zu entfernen. Der Anwendungsfall ist relativ begrenzt, da wir auch einfach eine leere Liste erstellen können.
End of explanation
liste = list("abracadabra")
liste_copy = liste.copy()
print(liste)
print(liste_copy)
# Alternativ:
liste = list("abracadabra")
liste_copy = liste[:]
print(liste)
print(liste_copy)
Explanation: list.copy()
Die list.copy() Methode kann benutzt werden um eine Kopie der Liste zu erstellen. Auch hier gibt es eine alternative Möglichkeit über Slicing, dasselbe zu erreichen.
End of explanation
liste = list("abracadabra")
liste.extend("simsala")
print(liste)
# Alternativ:
liste = list("abracadabra")
liste = liste + list("simsala")
print(liste)
Explanation: list.extend()
Die list.extend() Methode kann benutzt werden um an die bestehende Liste eine andere Liste anzuhängen.
End of explanation
liste = list("abracadabra")
liste.reverse()
print(liste)
# Alternativ:
liste = list("abracadabra")
liste = liste[::-1]
print(liste)
Explanation: list.reverse()
Mit der list.reverse() können wir eine Liste umdrehen. Wie auch die voherigen Methoden gibt es auch hier Alternativen.
End of explanation
liste = list("abracadabra")
liste.sort()
print(liste)
Explanation: list.sort()
Mit list.sort() lässt sich eine Liste sortieren, solange die Elemente miteinander vergleichbar sind.
End of explanation
t1 = tuple("abracadabra")
t2 = (2,3,23,42)
t3 = 3,2,3
print(t1)
print(t2)
print(t3)
Explanation: tuple
Ein tuple ist grob gesagt eine unveränderliche Liste. Ein Tupel hat eine Länge, Elemente können nicht entfernt oder hinzugefügt werden. Lediglich die Elemente eines Tupels lassen sich ändern, wenn sie veränderlich sind.
End of explanation
Tuple = tuple("abracadabra")
print(Tuple[4])
print(Tuple[0:5])
Explanation: Ein Tuple lässt sich über die tuple() Funktion oder über runde Klammern () definieren. Die runden Klammern können wir allerdings meistens weglassen. Der Zugriff auf die Elemente funktioniert, wie bei Listen sowohl über den Index, als auch über Slicing.
End of explanation
dir(tuple)
Tuple = tuple("abracadabra")
print(Tuple)
print("Länge:", len(Tuple))
print("Anzahl 'a':", Tuple.count("a"))
print("Erstes 'b':", Tuple.index("b"))
print("e?", "e" in Tuple)
Explanation: Wenn wir uns die Methoden von einem Tuple anschauen, stellen wir fest, dass es nur zwei "normale" Methoden gibt, die wir auch schon von der Liste kennen, nämlich tuple.count() und tuple.index(). Desweiteren können wir auf einen Tuple auch die len() Funktion und das Schlüsselwort in anwenden.
End of explanation
a = 5
b = 10
tmp = a
a = b
b = tmp
print("a:", a)
print("b:", b)
Explanation: Wir können Tuple auch benutzen um den Wert zweier Variablen zu tauschen. Bisher würden wir dafür den Wert einer Variable in einer temporären Variable (tmp) speichern und die Werte so tauschen.
End of explanation
a = 5
b = 10
a, b = b, a
# Alternativ: (a, b) = (b, a)
print("a:", a)
print("b:", b)
Explanation: Durch die Verwendung von Tuplen können wir nun auf unsere temporäre Variable verzichten und den Code lesbarer gestalten:
End of explanation
zauberwort = "abracadabra"
print(zauberwort[0])
print(zauberwort[-1])
teilwort = zauberwort[:4]
print(teilwort)
zauberwort = "abracadabra"
zauberwort.count("a")
zauberwort = "abracadabra"
print(zauberwort.index("b"))
zauberwort = "abracadabra"
print("e?", "e" in zauberwort)
Explanation: Strings
Jetzt da wir Listen und Tuple kennengelernt haben lohnt es sich, nochmal Strings anzuschauen, da wir auch hier auch auf einzelne Zeichen mit dem Index zugreifen können. So wie bei Listen und Tupeln, können wir auch bei Strings die str.count() und str.index() Methoden verwenden.
End of explanation
leer1 = dict()
leer2 = {}
Explanation: dict
Ein Dictionary speichert Werte nicht anhand ihres Indexes, sondern anhand eines Schlüssels.
Erstellen eines dict
Ein leeres Dictionary kann auf zwei Arten erstellt werden:
End of explanation
dict1 = {"name": "Max", "nachname": "Mustermann", "alter": 42}
dict2 = dict([("name", "Martha"), ("nachname", "Musterfrau"), ("alter", 23)])
print(dict1)
print(dict2)
Explanation: Möchten wir ein Dictionary mit Einträgen erstellen, können wir dies entweder durch die dict() Funktion erreichen, in der wir Schlüssel und Werte als Liste von Tuplen mit zwei Elementen übergeben, oder indem wir Schlüssel und Werte durch Doppelpunkte : getrennt in geschweiften Klammern {} als Literale definieren.
End of explanation
beispiel_dict = {0:"integer", True:"boolean", (0,0):"tuple", "s":"strings"}
print(beispiel_dict)
Explanation: Als Schlüssel sind Werte aller unveränderlichen Typen erlaubt, in einem Dictionary müssen die Schlüssel auch nicht denselben Typen haben, es ergibt sich meistens aber, dass die Schlüssel denselben Typen haben.
End of explanation
Max = {"name": "Max", "nachname": "Mustermann", "alter": 42}
Martha = {"name": "Martha", "nachname": "Musterfrau", "alter": 23}
print("Alter:")
print("Max:", Max["alter"])
print("Martha:", Martha["alter"])
Explanation: Zugriff
Der Zugriff auf die Elemente erfolgt dann über den entsprechenden Schlüssel:
End of explanation
Max = {"name": "Max", "nachname": "Mustermann", "alter": 42}
Martha = {"name": "Martha", "nachname": "Musterfrau"}
print("Alter:")
print("Max:", Max.get("alter", 0))
print("Martha:", Martha.get("alter", 0))
Explanation: Alternativ kann für den Zugriff die dict.get() Methode benutzt werden. Diese ermöglicht es auch einen Standardwert anzugeben, wenn der Schlüssel in dem Dictionary nicht vorhanden ist.
End of explanation
Max = {"name": "Max", "nachname": "Mustermann", "alter": 42}
Martha = {"name": "Martha", "nachname": "Musterfrau"}
Martha["alter"] = 23
print(Martha)
Explanation: Den Wert zu einem Schlüssel können wir setzen indem wir einem Schlüssel einen Wert zuweisen, existiert dieser Schlüssel bereits, wird sein Wert überschrieben.
End of explanation
Martha = {"name": "Martha", "nachname": "Musterfrau"}
print("name?", "name" in Martha)
print("alter?", "alter" in Martha)
Explanation: Natürlich kann man auch das in Schlüsselwort mit Dictionaries benutzen:
End of explanation
Martha = {"name": "Martha", "nachname": "Musterfrau"}
print("Anzahl Schlüssel:", len(Martha))
Explanation: Die len() Funktion liefert bei einem Dictionary die Anzahl an Schlüsseln wieder:
End of explanation
dictionary = {0:"integer", True:"boolean", (0,0):"tuple", "s":"strings"}
print(dictionary.keys()) # liefert die Schlüssel als Liste
print(dictionary.values()) # liefert die Werte als Liste
print(dictionary.items()) # liefert Schlüssel und Werte als Tuple in einer Liste
Explanation: items(), keys() und values()
Die drei Methoden dict.items(), dict.keys() und dict.values() sind sich relativ ähnlich, weshalb wir sie zusammen betrachten wollen.
End of explanation
dictionary = {0:"integer", True:"boolean", (0,0):"tuple", "s":"strings"}
value = dictionary.pop(0)
print(dictionary)
Explanation: pop()
Auch Dictionaries besitzen eine dict.pop() Methode.
End of explanation
# Iteration über einen string:
zauberwort = "abracadabra"
for zeichen in zauberwort:
print(zeichen)
# Iteration über eine Liste:
liste = [0, True, "foo", 42]
for element in liste:
print(element)
# Iteration über einen Tuple:
Tuple = (1,2,3)
for zahl in Tuple:
print(zahl)
# Iteration über ein Dictionary:
Max = {"name": "Max", "nachname": "Mustermann", "alter": 42}
for attribut in Max:
print(attribut, ":", Max[attribut])
# Alternativ:
Max = {"name": "Max", "nachname": "Mustermann", "alter": 42}
for attribut, wert in Max.items():
print(attribut, ":", wert)
Explanation: Wie wir sehen funktioniert die dict.pop() Methode ähnlich, wie bei den Listen. Der Wert mit dem angegebenen Schlüssel wird zurückgegeben und aus dem Dictionary entfernt.
Die for-Schleife
Die for-Schleife kann benutzt werden um über verschiedene Objekte zu iterieren. Dabei ist die Syntax einer for-Schleife die folgende:
python
for variable in objekt:
Befehle
* objekt ist dabei das Objekt, über das wir iterieren.
* variable enthält jeweils ein Element aus dem Objekt.
Wir kennen bereits Listen, Tuple, Dictionaries und Strings, über jeden dieser Typen können wir iterieren.
End of explanation
# alle Zahlen von 0 bis 10:
r = range(10)
print(list(r))
# jede zweite Zahl von 2 bis 42:
r = range(2, 42, 2)
print(list(r))
Explanation: range()
Die range() Funktion ist in vielerlei Hinsicht praktisch. Sie ermöglicht es uns auf einfache, gut lesbare Art und Weise Zahlenfolgen zu erzeugen. Warum das so praktisch ist werden wir gleich sehen.
python
range(stop)
range(start, stop[, step])
Wir können range() entweder nur mit einem stop Wert aufrufen (dieser muss ein integer sein), oder mit einem start, einem stop und einen optionalen step Wert. Bei beiden Varianten erhalten wir ein range Objekt.
Diese verhalten sich ähnlich wie die Werte beim Slicing. Geben wir keinen Wert für start an startet unser range Objekt bei 0, geben wir keinen Wert für step an, ist die Schrittweite 1.
End of explanation
# von 0 bis -100 jede zweite Zahl:
r = range(0, -100, -2)
print(list(r))
Explanation: Genauso, wie beim Slicing können die Werte auch negativ sein.
End of explanation
r = range(10000000000000000)
print(r)
Explanation: Ein range Objekt kann auch benutzt werden um sehr große Zahlenreihen zu erzeugen, da die Zahlen erst berechnet werden, wenn sie benötigt werden.
End of explanation
for i in range(10):
print(i)
Explanation: Dadurch, dass wir über ein range Objekt iterieren können, können wir range() gut in einer for-Schleife benutzen.
End of explanation
zauberwort = "abracadabra#test"
neuer_zauber = ""
for zeichen in zauberwort:
if zeichen == "#":
break
elif zeichen == "a":
continue
else:
neuer_zauber += zeichen
print(neuer_zauber)
string = "Das ist ein Teststring."
for zeichen in string:
if zeichen == "Y":
break
else:
print("Kein 'Y' gefunden.")
Explanation: break, continue und else
Die Schlüsselwörter break, continue und else können wir innerhalb einer for-Schleife genauso benutzen, wie in einer while-Schleife.
End of explanation
table = [
[
"test",
"foo",
"bar"
],
[
"ham",
"spam",
"egg"
]
]
for row in table:
for entry in row:
print(entry)
Explanation: for-Schleifen können verschachtelt werden, was wir benutzen können um über ein verschachteltes Objekt zu iterieren.
End of explanation
Set = set()
print(Set)
Set = {2,3,5,7}
print(Set)
Explanation: sets
Sets sind Mengen im mathematischen Sinn. Das bedeutet ein Element kann entweder in einer Menge enthalten sein, oder eben nicht.
Erstellen eines set
Ein Set kann entweder über die set() Funktion aus einem anderen Objekt erzeugt werden, oder über geschweifte Klammern {} welche die Literale eines Sets bilden.
End of explanation
some_primes = {2, 3, 5, 7, 11, 13}
print(12 in some_primes)
some_primes = {2, 3, 5, 7, 11, 13}
print(len(some_primes))
Explanation: in und len()
End of explanation
some_primes = {2, 3, 5, 7, 11, 13}
some_primes.add(17)
print(some_primes)
Explanation: Elemente hinzufügen
Nach dem wir eine Menge erstellt haben, können wir mit der set.add() Methode Elemente hinzufügen.
End of explanation
some_primes = {2, 3, 5, 7, 11, 13}
more_primes = [17, 19, 23]
some_primes.update(more_primes)
print(some_primes)
Explanation: Um nicht nur einzelne Elemente, sondern mehrere Elemente an eine Menge anzuhängen, können wir die set.update() Methode benutzen.
End of explanation
Set = {23, "foo", 42, "test", (1,2)}
element = Set.pop()
print(element)
print(Set)
Set = {23, "foo", 42, "test", (1,2)}
Set.remove("foo")
print(Set)
Explanation: Elemente entfernen
Um Elemente zu entfernen gibt es zwei Möglichkeiten. Die set.pop() Methode entfernt ein Element aus der Menge und gibt es dabei zurück. Die set.remove() Methode kann dafür benutzt werden, um Elemente anhand ihres Wertes aus der Menge zu entfernen.
End of explanation
# Schnittmenge zweier Mengen:
set1 = {1, 3, 4, 5, 2}
set2 = {5, 8, 1, 3, 7, 9}
schnittmenge = set1 & set2
print(schnittmenge)
Explanation: Mengenoperationen
Set-Objekte bieten Mengenoperationen an, die auch aus der Mathematik bekannt sind. Diese sind die Schnittmenge, die Vereinigungsmenge und die Differenzmenge.
Die Schnittmenge enthält alle Elemente, die in beiden Mengen enthalten sind.
End of explanation
# Schnittmenge zweier Mengen:
set1 = {1, 3, 4, 5, 2}
set2 = {5, 8, 1, 3, 7, 9}
schnittmenge = set1.intersection(set2)
print(schnittmenge)
Explanation: Alternativ können wir auch die set.intersection() Methode benutzen.
End of explanation
# Vereinigungsmenge zweier Mengen:
set1 = {1, 3, 4, 5, 2}
set2 = {5, 8, 1, 3, 7, 9}
vereinigung = set1 | set2
print(vereinigung)
Explanation: Die Vereinigungsmenge enthält alle Elemente, die in einer der Mengen enthalten sind.
End of explanation
# Vereinigungsmenge zweier Mengen:
set1 = {1, 3, 4, 5, 2}
set2 = {5, 8, 1, 3, 7, 9}
vereinigung = set1.union(set2)
print(vereinigung)
Explanation: Alternativ können wir auch die set.union() Methode benutzen.
End of explanation
s1 = {1, 3, 4, 5, 2}
s2 = {4, 5}
differenz = s1 - s2
print(differenz)
Explanation: Die Differenzmenge einer Menge S1 mit einer Menge S2 enthält alle Elemente, die in der Menge S1 aber nicht in der Menge S2 sind.
End of explanation
s1 = {1, 3, 4, 5, 2}
s2 = {4, 5}
differenz = s1.difference(s2)
print(differenz)
Explanation: Alternativ können wir auch die set.difference() Methode benutzen.
End of explanation
s1 = {1, 3, 4, 5, 2}
s2 = {4, 5, 6}
sym_differenz = s1 ^ s2
print(sym_differenz)
Explanation: Es gibt auch noch die symmetrische Differenzmenge zweier Mengen. Diese enthält alle Elemente, die in einer der beiden Mengen, aber nicht in beiden Mengen enthalten sind.
End of explanation
s1 = {1, 3, 4, 5, 2}
s2 = {4, 5, 6}
sym_differenz = s1.symmetric_difference(s2)
print(sym_differenz)
Explanation: Alternativ können wir für die symmetrische Differenz auch die set.symmetric_difference() Methode benutzen.
End of explanation
s1 = {1, 3, 4, 5, 2}
s2 = {4, 5, 6}
s3 = {7, 8, 2}
print(s1.isdisjoint(s2))
print(s2.isdisjoint(s3))
Explanation: Mit der set.isdisjoint() Methode können wir testen ob zwei Mengen disjunkt sind. Zwei Mengen sind disjunkt, wenn ihre Schnittmenge leer ist.
End of explanation
s1 = {1, 2, 3, 4, 5}
s2 = {1, 2}
print("Ist s2 eine Teilmenge von s1:")
print(s2.issubset(s1))
print("Ist s1 eine Obermenge von s2:")
print(s1.issuperset(s2))
Explanation: Mit der set.issubset() und der set.issuperset() Methode können wir feststellen, ob eine Menge eine Teilmenge, beziehungsweise eine Obermenge einer anderen Menge ist. Eine Menge s1 ist Teilmenge einer Menge s2, wenn alle Elemente von s1 in der Menge s2 enthalten sind. s2 ist dann die Obermenge von s1.
End of explanation
s1 = {1, 2, 3, 4}
s2 = {1, 2}
print(s2 < s1)
print(s1 > s2)
Explanation: Statt der set.issubset() Methode können wir auch den <= Operator benutzen, statt der set.issuperset() Methode können wir auch den >= Operator benutzen.
Den > Operator können wir benutzen um zu ermitteln, ob eine Menge s1 eine echte Obermenge von einer Menge s2 ist. Dies ist der Fall, wenn s1 eine Obermenge von s2 und nicht die gleiche Menge ist.
Den < Operator können wir benutzen um zu ermitteln, ob eine Menge s1 eine echte Teilmenge einer Menge s2 ist. Dies ist der Fall, wenn s1 eine Teilmenge von s2 ist und nicht die gleiche Menge ist.
End of explanation |
2,305 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<p><font size="6"><b>Reshaping data</b></font></p>
© 2016, Joris Van den Bossche and Stijn Van Hoey (jorisvandenbossche@gmail.com, stijnvanhoey@gmail.com). Licensed under CC BY 4.0 Creative Commons
Step1: Pivoting data
Cfr. excel
People who know Excel, probably know the Pivot functionality
Step2: Interested in Grand totals?
Step3: Pivot is just reordering your data
Small subsample of the titanic dataset
Step4: So far, so good...
Let's now use the full titanic dataset
Step5: And try the same pivot (no worries about the try-except, this is here just used to catch a loooong error)
Step6: This does not work, because we would end up with multiple values for one cell of the resulting frame, as the error says
Step7: Since pivot is just restructuring data, where would both values of Fare for the same combination of Sex and Pclass need to go?
Well, they need to be combined, according to an aggregation functionality, which is supported by the functionpivot_table
<div class="alert alert-danger">
<b>NOTE</b>
Step8: <div class="alert alert-info">
<b>REMEMBER</b>
Step9: <div class="alert alert-info">
<b>REMEMBER</b>
Step10: <div class="alert alert-success">
<b>EXERCISE</b>
Step11: <div class="alert alert-success">
<b>EXERCISE</b>
Step12: Melt
The melt function performs the inverse operation of a pivot. This can be used to make your frame longer, i.e. to make a tidy version of your data.
Step13: Assume we have a DataFrame like the above. The observations (the average Fare people payed) are spread over different columns. In a tidy dataset, each observation is stored in one row. To obtain this, we can use the melt function
Step14: As you can see above, the melt function puts all column labels in one column, and all values in a second column.
In this case, this is not fully what we want. We would like to keep the 'Sex' column separately
Step15: Reshaping with stack and unstack
The docs say
Step16: To use stack/unstack, we need the values we want to shift from rows to columns or the other way around as the index
Step17: <div class="alert alert-info">
<b>REMEMBER</b>
Step18: <div class="alert alert-success">
<b>EXERCISE</b>
Step19: Mimick melt
Like the pivot table above, we can now also obtain the result of melt with stack/unstack.
Let's use the same pivoted frame as above, and look at the final melt result
Step20: <div class="alert alert-success">
<b>EXERCISE</b>
Step21: Exercises
Step22: <div class="alert alert-success">
<b>EXERCISE</b>
Step23: <div class="alert alert-success">
<b>EXERCISE</b>
Step24: <div class="alert alert-success">
<b>EXERCISE</b>
Step25: <div class="alert alert-success">
<b>EXERCISE</b> | Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
Explanation: <p><font size="6"><b>Reshaping data</b></font></p>
© 2016, Joris Van den Bossche and Stijn Van Hoey (jorisvandenbossche@gmail.com, stijnvanhoey@gmail.com). Licensed under CC BY 4.0 Creative Commons
End of explanation
excelample = pd.DataFrame({'Month': ["January", "January", "January", "January",
"February", "February", "February", "February",
"March", "March", "March", "March"],
'Category': ["Transportation", "Grocery", "Household", "Entertainment",
"Transportation", "Grocery", "Household", "Entertainment",
"Transportation", "Grocery", "Household", "Entertainment"],
'Amount': [74., 235., 175., 100., 115., 240., 225., 125., 90., 260., 200., 120.]})
excelample
excelample_pivot = excelample.pivot(index="Category", columns="Month", values="Amount")
excelample_pivot
Explanation: Pivoting data
Cfr. excel
People who know Excel, probably know the Pivot functionality:
The data of the table:
End of explanation
# sum columns
excelample_pivot.sum(axis=1)
# sum rows
excelample_pivot.sum(axis=0)
Explanation: Interested in Grand totals?
End of explanation
df = pd.DataFrame({'Fare': [7.25, 71.2833, 51.8625, 30.0708, 7.8542, 13.0],
'Pclass': [3, 1, 1, 2, 3, 2],
'Sex': ['male', 'female', 'male', 'female', 'female', 'male'],
'Survived': [0, 1, 0, 1, 0, 1]})
df
df.pivot(index='Pclass', columns='Sex', values='Fare')
df.pivot(index='Pclass', columns='Sex', values='Survived')
Explanation: Pivot is just reordering your data
Small subsample of the titanic dataset:
End of explanation
df = pd.read_csv("data/titanic.csv")
df.head()
Explanation: So far, so good...
Let's now use the full titanic dataset:
End of explanation
try:
df.pivot(index='Sex', columns='Pclass', values='Fare')
except Exception as e:
print("Exception!", e)
Explanation: And try the same pivot (no worries about the try-except, this is here just used to catch a loooong error):
End of explanation
df.loc[[1, 3], ["Sex", 'Pclass', 'Fare']]
Explanation: This does not work, because we would end up with multiple values for one cell of the resulting frame, as the error says: duplicated values for the columns in the selection. As an example, consider the following rows of our three columns of interest:
End of explanation
df = pd.read_csv("data/titanic.csv")
df.pivot_table(index='Sex', columns='Pclass', values='Fare')
Explanation: Since pivot is just restructuring data, where would both values of Fare for the same combination of Sex and Pclass need to go?
Well, they need to be combined, according to an aggregation functionality, which is supported by the functionpivot_table
<div class="alert alert-danger">
<b>NOTE</b>:
<ul>
<li>**Pivot** is purely restructuring: a single value for each index/column combination is required.</li>
</ul>
</div>
Pivot tables - aggregating while pivoting
End of explanation
df.pivot_table(index='Sex', columns='Pclass',
values='Fare', aggfunc='max')
df.pivot_table(index='Sex', columns='Pclass',
values='Fare', aggfunc='count')
Explanation: <div class="alert alert-info">
<b>REMEMBER</b>:
<ul>
<li>By default, `pivot_table` takes the **mean** of all values that would end up into one cell. However, you can also specify other aggregation functions using the `aggfunc` keyword.</li>
</ul>
</div>
End of explanation
pd.crosstab(index=df['Sex'], columns=df['Pclass'])
Explanation: <div class="alert alert-info">
<b>REMEMBER</b>:
<ul>
<li>There is a shortcut function for a `pivot_table` with a `aggfunc=count` as aggregation: `crosstab`</li>
</ul>
</div>
End of explanation
df.pivot_table(index='Pclass', columns='Sex',
values='Survived', aggfunc='mean')
fig, ax1 = plt.subplots()
df.pivot_table(index='Pclass', columns='Sex',
values='Survived', aggfunc='mean').plot(kind='bar',
rot=0,
ax=ax1)
ax1.set_ylabel('Survival ratio')
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Make a pivot table with the survival rates (= number of persons survived / total number of persons) for Pclass vs Sex.</li>
<li>Plot the result as a bar plot.</li>
</ul>
</div>
End of explanation
df['Underaged'] = df['Age'] <= 18
df.pivot_table(index='Underaged', columns='Sex',
values='Fare', aggfunc='mean')
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Make a table of the median Fare payed by aged/underaged vs Sex.</li>
</ul>
</div>
End of explanation
pivoted = df.pivot_table(index='Sex', columns='Pclass', values='Fare').reset_index()
pivoted.columns.name = None
pivoted
Explanation: Melt
The melt function performs the inverse operation of a pivot. This can be used to make your frame longer, i.e. to make a tidy version of your data.
End of explanation
pd.melt(pivoted)
Explanation: Assume we have a DataFrame like the above. The observations (the average Fare people payed) are spread over different columns. In a tidy dataset, each observation is stored in one row. To obtain this, we can use the melt function:
End of explanation
pd.melt(pivoted, id_vars=['Sex']) #, var_name='Pclass', value_name='Fare')
Explanation: As you can see above, the melt function puts all column labels in one column, and all values in a second column.
In this case, this is not fully what we want. We would like to keep the 'Sex' column separately:
End of explanation
df = pd.DataFrame({'A':['one', 'one', 'two', 'two'],
'B':['a', 'b', 'a', 'b'],
'C':range(4)})
df
Explanation: Reshaping with stack and unstack
The docs say:
Pivot a level of the (possibly hierarchical) column labels, returning a
DataFrame (or Series in the case of an object with a single level of
column labels) having a hierarchical index with a new inner-most level
of row labels.
Indeed...
<img src="img/schema-stack.svg" width=50%>
Before we speak about hierarchical index, first check it in practice on the following dummy example:
End of explanation
df = df.set_index(['A', 'B']) # Indeed, you can combine two indices
df
result = df['C'].unstack()
result
df = result.stack().reset_index(name='C')
df
Explanation: To use stack/unstack, we need the values we want to shift from rows to columns or the other way around as the index:
End of explanation
df = pd.read_csv("data/titanic.csv")
df.head()
df.pivot_table(index='Pclass', columns='Sex',
values='Survived', aggfunc='mean')
Explanation: <div class="alert alert-info">
<b>REMEMBER</b>:
<ul>
<li>**stack**: make your data *longer* and *smaller* </li>
<li>**unstack**: make your data *shorter* and *wider* </li>
</ul>
</div>
Mimick pivot table
To better understand and reason about pivot tables, we can express this method as a combination of more basic steps. In short, the pivot is a convenient way of expressing the combination of a groupby and stack/unstack.
End of explanation
df.groupby(['Pclass', 'Sex'])['Survived'].mean().unstack()
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Get the same result as above based on a combination of `groupby` and `unstack`</li>
<li>First use `groupby` to calculate the survival ratio for all groups</li>
<li>Then, use `unstack` to reshape the output of the groupby operation</li>
</ul>
</div>
End of explanation
pivoted = df.pivot_table(index='Sex', columns='Pclass', values='Fare').reset_index()
pivoted.columns.name = None
pivoted
pd.melt(pivoted, id_vars=['Sex'], var_name='Pclass', value_name='Fare')
Explanation: Mimick melt
Like the pivot table above, we can now also obtain the result of melt with stack/unstack.
Let's use the same pivoted frame as above, and look at the final melt result:
End of explanation
temp = pivoted.set_index('Sex')
temp
temp.columns.name = 'Pclass'
temp = temp.stack()
temp
temp.reset_index(name='Fare')
# alternative: rename columns at the end
temp = pivoted.set_index('Sex').stack().reset_index()
temp.rename(columns={'level_1': 'Pclass', 0: 'Fare'})
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Get the same result as above using `stack`/`unstack` (combined with `set_index` / `reset_index`)</li>
<li>Tip: set those columns as the index that you do not want to stack</li>
</ul>
</div>
End of explanation
cast = pd.read_csv('data/cast.csv')
cast.head()
titles = pd.read_csv('data/titles.csv')
titles.head()
Explanation: Exercises: use the reshaping methods with the movie data
These exercises are based on the PyCon tutorial of Brandon Rhodes (so credit to him!) and the datasets he prepared for that. You can download these data from here: titles.csv and cast.csv and put them in the /data folder.
End of explanation
grouped = cast.groupby(['year', 'type']).size()
table = grouped.unstack('type')
table.plot()
cast.pivot_table(index='year', columns='type', values="character", aggfunc='count').plot()
# for values in using the , take a column with no Nan values in order to count effectively all values -> at this stage: aha-erlebnis about crosstab function(!)
pd.crosstab(index=cast['year'], columns=cast['type']).plot()
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Plot the number of actor roles each year and the number of actress roles each year over the whole period of available movie data.</li>
</ul>
</div>
End of explanation
pd.crosstab(index=cast['year'], columns=cast['type']).plot(kind='area')
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Plot the number of actor roles each year and the number of actress roles each year. Use kind='area' as plot type</li>
</ul>
</div>
End of explanation
grouped = cast.groupby(['year', 'type']).size()
table = grouped.unstack('type')
(table['actor'] / (table['actor'] + table['actress'])).plot(ylim=[0,1])
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Plot the fraction of roles that have been 'actor' roles each year over the whole period of available movie data.</li>
</ul>
</div>
End of explanation
sel = cast[(cast.character == 'Superman') | (cast.character == 'Batman')]
sel = sel.groupby(['year', 'character']).size()
sel = sel.unstack()
sel = sel.fillna(0)
sel.head()
d = sel['Superman'] - sel['Batman']
print('Superman years:')
print(len(d[d > 0.0]))
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Define a year as a "Superman year" when films of that year feature more Superman characters than Batman characters. How many years in film history have been Superman years?</li>
</ul>
</div>
End of explanation |
2,306 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Training Logistic Regression via Stochastic Gradient Ascent
The goal of this notebook is to implement a logistic regression classifier using stochastic gradient ascent. You will
Step1: Load and process review dataset
For this assignment, we will use the same subset of the Amazon product review dataset that we used in Module 3 assignment. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted of mostly positive reviews.
Step2: Just like we did previously, we will work with a hand-curated list of important words extracted from the review data. We will also perform 2 simple data transformations
Step3: The SFrame products now contains one column for each of the 193 important_words.
Step4: Split data into training and validation sets
We will now split the data into a 90-10 split where 90% is in the training set and 10% is in the validation set. We use seed=1 so that everyone gets the same result.
Step5: Convert SFrame to NumPy array
Just like in the earlier assignments, we provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned
Step6: Note that we convert both the training and validation sets into NumPy arrays.
Warning
Step7: Are you running this notebook on an Amazon EC2 t2.micro instance? (If you are using your own machine, please skip this section)
It has been reported that t2.micro instances do not provide sufficient power to complete the conversion in acceptable amount of time. For interest of time, please refrain from running get_numpy_data function. Instead, download the binary file containing the four NumPy arrays you'll need for the assignment. To load the arrays, run the following commands
Step8: Derivative of log likelihood with respect to a single coefficient
Let us now work on making minor changes to how the derivative computation is performed for logistic regression.
Recall from the lectures and Module 3 assignment that for logistic regression, the derivative of log likelihood with respect to a single coefficient is as follows
Step9: Note. We are not using regularization in this assignment, but, as discussed in the optional video, stochastic gradient can also be used for regularized logistic regression.
To verify the correctness of the gradient computation, we provide a function for computing average log likelihood (which we recall from the last assignment was a topic detailed in an advanced optional video, and used here for its numerical stability).
To track the performance of stochastic gradient ascent, we provide a function for computing average log likelihood.
$$\ell\ell_A(\mathbf{w}) = \color{red}{\frac{1}{N}} \sum_{i=1}^N \Big( (\mathbf{1}[y_i = +1] - 1)\mathbf{w}^T h(\mathbf{x}_i) - \ln\left(1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))\right) \Big) $$
Note that we made one tiny modification to the log likelihood function (called compute_log_likelihood) in our earlier assignments. We added a $\color{red}{1/N}$ term which averages the log likelihood accross all data points. The $\color{red}{1/N}$ term makes it easier for us to compare stochastic gradient ascent with batch gradient ascent. We will use this function to generate plots that are similar to those you saw in the lecture.
Step10: Quiz Question
Step11: Quiz Question
Step12: Quiz Question
Step13: Note. In practice, the final set of coefficients is rarely used; it is better to use the average of the last K sets of coefficients instead, where K should be adjusted depending on how fast the log likelihood oscillates around the optimum.
Checkpoint
The following cell tests your stochastic gradient ascent function using a toy dataset consisting of two data points. If the test does not pass, make sure you are normalizing the gradient update rule correctly.
Step14: Compare convergence behavior of stochastic gradient ascent
For the remainder of the assignment, we will compare stochastic gradient ascent against batch gradient ascent. For this, we need a reference implementation of batch gradient ascent. But do we need to implement this from scratch?
Quiz Question
Step15: Quiz Question. When you set batch_size = 1, as each iteration passes, how does the average log likelihood in the batch change?
* Increases
* Decreases
* Fluctuates
Now run batch gradient ascent over the feature_matrix_train for 200 iterations using
Step16: Quiz Question. When you set batch_size = len(train_data), as each iteration passes, how does the average log likelihood in the batch change?
* Increases
* Decreases
* Fluctuates
Make "passes" over the dataset
To make a fair comparison betweeen stochastic gradient ascent and batch gradient ascent, we measure the average log likelihood as a function of the number of passes (defined as follows)
Step17: Log likelihood plots for stochastic gradient ascent
With the terminology in mind, let us run stochastic gradient ascent for 10 passes. We will use
* step_size=1e-1
* batch_size=100
* initial_coefficients to all zeros.
Step18: We provide you with a utility function to plot the average log likelihood as a function of the number of passes.
Step19: Smoothing the stochastic gradient ascent curve
The plotted line oscillates so much that it is hard to see whether the log likelihood is improving. In our plot, we apply a simple smoothing operation using the parameter smoothing_window. The smoothing is simply a moving average of log likelihood over the last smoothing_window "iterations" of stochastic gradient ascent.
Step20: Checkpoint
Step21: We compare the convergence of stochastic gradient ascent and batch gradient ascent in the following cell. Note that we apply smoothing with smoothing_window=30.
Step22: Quiz Question
Step23: Plotting the log likelihood as a function of passes for each step size
Now, we will plot the change in log likelihood using the make_plot for each of the following values of step_size
Step24: Now, let us remove the step size step_size = 1e2 and plot the rest of the curves. | Python Code:
from __future__ import division
import graphlab
Explanation: Training Logistic Regression via Stochastic Gradient Ascent
The goal of this notebook is to implement a logistic regression classifier using stochastic gradient ascent. You will:
Extract features from Amazon product reviews.
Convert an SFrame into a NumPy array.
Write a function to compute the derivative of log likelihood function with respect to a single coefficient.
Implement stochastic gradient ascent.
Compare convergence of stochastic gradient ascent with that of batch gradient ascent.
Fire up GraphLab Create
Make sure you have the latest version of GraphLab Create. Upgrade by
pip install graphlab-create --upgrade
See this page for detailed instructions on upgrading.
End of explanation
products = graphlab.SFrame('amazon_baby_subset.gl/')
Explanation: Load and process review dataset
For this assignment, we will use the same subset of the Amazon product review dataset that we used in Module 3 assignment. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted of mostly positive reviews.
End of explanation
import json
with open('important_words.json', 'r') as f:
important_words = json.load(f)
important_words = [str(s) for s in important_words]
# Remote punctuation
def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
products['review_clean'] = products['review'].apply(remove_punctuation)
# Split out the words into individual columns
for word in important_words:
products[word] = products['review_clean'].apply(lambda s : s.split().count(word))
Explanation: Just like we did previously, we will work with a hand-curated list of important words extracted from the review data. We will also perform 2 simple data transformations:
Remove punctuation using Python's built-in string manipulation functionality.
Compute word counts (only for the important_words)
Refer to Module 3 assignment for more details.
End of explanation
products
Explanation: The SFrame products now contains one column for each of the 193 important_words.
End of explanation
train_data, validation_data = products.random_split(.9, seed=1)
print 'Training set : %d data points' % len(train_data)
print 'Validation set: %d data points' % len(validation_data)
Explanation: Split data into training and validation sets
We will now split the data into a 90-10 split where 90% is in the training set and 10% is in the validation set. We use seed=1 so that everyone gets the same result.
End of explanation
import numpy as np
def get_numpy_data(data_sframe, features, label):
data_sframe['intercept'] = 1
features = ['intercept'] + features
features_sframe = data_sframe[features]
feature_matrix = features_sframe.to_numpy()
label_sarray = data_sframe[label]
label_array = label_sarray.to_numpy()
return(feature_matrix, label_array)
Explanation: Convert SFrame to NumPy array
Just like in the earlier assignments, we provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned: one representing features and another representing class labels.
Note: The feature matrix includes an additional column 'intercept' filled with 1's to take account of the intercept term.
End of explanation
feature_matrix_train, sentiment_train = get_numpy_data(train_data, important_words, 'sentiment')
feature_matrix_valid, sentiment_valid = get_numpy_data(validation_data, important_words, 'sentiment')
Explanation: Note that we convert both the training and validation sets into NumPy arrays.
Warning: This may take a few minutes.
End of explanation
'''
produces probablistic estimate for P(y_i = +1 | x_i, w).
estimate ranges between 0 and 1.
'''
def predict_probability(feature_matrix, coefficients):
# Take dot product of feature_matrix and coefficients
score = np.dot(feature_matrix, coefficients)
# Compute P(y_i = +1 | x_i, w) using the link function
predictions = 1. / (1.+np.exp(-score))
return predictions
Explanation: Are you running this notebook on an Amazon EC2 t2.micro instance? (If you are using your own machine, please skip this section)
It has been reported that t2.micro instances do not provide sufficient power to complete the conversion in acceptable amount of time. For interest of time, please refrain from running get_numpy_data function. Instead, download the binary file containing the four NumPy arrays you'll need for the assignment. To load the arrays, run the following commands:
arrays = np.load('module-10-assignment-numpy-arrays.npz')
feature_matrix_train, sentiment_train = arrays['feature_matrix_train'], arrays['sentiment_train']
feature_matrix_valid, sentiment_valid = arrays['feature_matrix_valid'], arrays['sentiment_valid']
Quiz question: In Module 3 assignment, there were 194 features (an intercept + one feature for each of the 193 important words). In this assignment, we will use stochastic gradient ascent to train the classifier using logistic regression. How does the changing the solver to stochastic gradient ascent affect the number of features?
Building on logistic regression
Let us now build on Module 3 assignment. Recall from lecture that the link function for logistic regression can be defined as:
$$
P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))},
$$
where the feature vector $h(\mathbf{x}_i)$ is given by the word counts of important_words in the review $\mathbf{x}_i$.
We will use the same code as in Module 3 assignment to make probability predictions, since this part is not affected by using stochastic gradient ascent as a solver. Only the way in which the coefficients are learned is affected by using stochastic gradient ascent as a solver.
End of explanation
def feature_derivative(errors, feature):
# Compute the dot product of errors and feature
## YOUR CODE HERE
derivative = np.dot(errors, feature)
return derivative
Explanation: Derivative of log likelihood with respect to a single coefficient
Let us now work on making minor changes to how the derivative computation is performed for logistic regression.
Recall from the lectures and Module 3 assignment that for logistic regression, the derivative of log likelihood with respect to a single coefficient is as follows:
$$
\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right)
$$
In Module 3 assignment, we wrote a function to compute the derivative of log likelihood with respect to a single coefficient $w_j$. The function accepts the following two parameters:
* errors vector containing $(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w}))$ for all $i$
* feature vector containing $h_j(\mathbf{x}_i)$ for all $i$
Complete the following code block:
End of explanation
def compute_avg_log_likelihood(feature_matrix, sentiment, coefficients):
indicator = (sentiment==+1)
scores = np.dot(feature_matrix, coefficients)
logexp = np.log(1. + np.exp(-scores))
# Simple check to prevent overflow
mask = np.isinf(logexp)
logexp[mask] = -scores[mask]
lp = np.sum((indicator-1)*scores - logexp)/len(feature_matrix)
return lp
Explanation: Note. We are not using regularization in this assignment, but, as discussed in the optional video, stochastic gradient can also be used for regularized logistic regression.
To verify the correctness of the gradient computation, we provide a function for computing average log likelihood (which we recall from the last assignment was a topic detailed in an advanced optional video, and used here for its numerical stability).
To track the performance of stochastic gradient ascent, we provide a function for computing average log likelihood.
$$\ell\ell_A(\mathbf{w}) = \color{red}{\frac{1}{N}} \sum_{i=1}^N \Big( (\mathbf{1}[y_i = +1] - 1)\mathbf{w}^T h(\mathbf{x}_i) - \ln\left(1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))\right) \Big) $$
Note that we made one tiny modification to the log likelihood function (called compute_log_likelihood) in our earlier assignments. We added a $\color{red}{1/N}$ term which averages the log likelihood accross all data points. The $\color{red}{1/N}$ term makes it easier for us to compare stochastic gradient ascent with batch gradient ascent. We will use this function to generate plots that are similar to those you saw in the lecture.
End of explanation
j = 1 # Feature number
i = 10 # Data point number
coefficients = np.zeros(194) # A point w at which we are computing the gradient.
predictions = predict_probability(feature_matrix_train[i:i+1,:], coefficients)
indicator = (sentiment_train[i:i+1]==+1)
errors = indicator - predictions
gradient_single_data_point = feature_derivative(errors, feature_matrix_train[i:i+1,j])
print "Gradient single data point: %s" % gradient_single_data_point
print " --> Should print 0.0"
Explanation: Quiz Question: Recall from the lecture and the earlier assignment, the log likelihood (without the averaging term) is given by
$$\ell\ell(\mathbf{w}) = \sum_{i=1}^N \Big( (\mathbf{1}[y_i = +1] - 1)\mathbf{w}^T h(\mathbf{x}_i) - \ln\left(1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))\right) \Big) $$
How are the functions $\ell\ell(\mathbf{w})$ and $\ell\ell_A(\mathbf{w})$ related?
Modifying the derivative for stochastic gradient ascent
Recall from the lecture that the gradient for a single data point $\color{red}{\mathbf{x}_i}$ can be computed using the following formula:
$$
\frac{\partial\ell_{\color{red}{i}}(\mathbf{w})}{\partial w_j} = h_j(\color{red}{\mathbf{x}i})\left(\mathbf{1}[y\color{red}{i} = +1] - P(y_\color{red}{i} = +1 | \color{red}{\mathbf{x}_i}, \mathbf{w})\right)
$$
Computing the gradient for a single data point
Do we really need to re-write all our code to modify $\partial\ell(\mathbf{w})/\partial w_j$ to $\partial\ell_{\color{red}{i}}(\mathbf{w})/{\partial w_j}$?
Thankfully No!. Using NumPy, we access $\mathbf{x}i$ in the training data using feature_matrix_train[i:i+1,:]
and $y_i$ in the training data using sentiment_train[i:i+1]. We can compute $\partial\ell{\color{red}{i}}(\mathbf{w})/\partial w_j$ by re-using all the code written in feature_derivative and predict_probability.
We compute $\partial\ell_{\color{red}{i}}(\mathbf{w})/\partial w_j$ using the following steps:
* First, compute $P(y_i = +1 | \mathbf{x}_i, \mathbf{w})$ using the predict_probability function with feature_matrix_train[i:i+1,:] as the first parameter.
* Next, compute $\mathbf{1}[y_i = +1]$ using sentiment_train[i:i+1].
* Finally, call the feature_derivative function with feature_matrix_train[i:i+1, j] as one of the parameters.
Let us follow these steps for j = 1 and i = 10:
End of explanation
j = 1 # Feature number
i = 10 # Data point start
B = 10 # Mini-batch size
coefficients = np.zeros(194) # A point w at which we are computing the gradient.
predictions = predict_probability(feature_matrix_train[i:i+B,:], coefficients)
indicator = (sentiment_train[i:i+B]==+1)
errors = indicator - predictions
gradient_mini_batch = feature_derivative(errors, feature_matrix_train[i:i+B,j])
print "Gradient mini-batch data points: %s" % gradient_mini_batch
print " --> Should print 1.0"
Explanation: Quiz Question: The code block above computed $\partial\ell_{\color{red}{i}}(\mathbf{w})/{\partial w_j}$ for j = 1 and i = 10. Is $\partial\ell_{\color{red}{i}}(\mathbf{w})/{\partial w_j}$ a scalar or a 194-dimensional vector?
Modifying the derivative for using a batch of data points
Stochastic gradient estimates the ascent direction using 1 data point, while gradient uses $N$ data points to decide how to update the the parameters. In an optional video, we discussed the details of a simple change that allows us to use a mini-batch of $B \leq N$ data points to estimate the ascent direction. This simple approach is faster than regular gradient but less noisy than stochastic gradient that uses only 1 data point. Although we encorage you to watch the optional video on the topic to better understand why mini-batches help stochastic gradient, in this assignment, we will simply use this technique, since the approach is very simple and will improve your results.
Given a mini-batch (or a set of data points) $\mathbf{x}{i}, \mathbf{x}{i+1} \ldots \mathbf{x}{i+B}$, the gradient function for this mini-batch of data points is given by:
$$
\color{red}{\sum{s = i}^{i+B}} \frac{\partial\ell_{s}}{\partial w_j} = \color{red}{\sum_{s = i}^{i + B}} h_j(\mathbf{x}_s)\left(\mathbf{1}[y_s = +1] - P(y_s = +1 | \mathbf{x}_s, \mathbf{w})\right)
$$
Computing the gradient for a "mini-batch" of data points
Using NumPy, we access the points $\mathbf{x}i, \mathbf{x}{i+1} \ldots \mathbf{x}_{i+B}$ in the training data using feature_matrix_train[i:i+B,:]
and $y_i$ in the training data using sentiment_train[i:i+B].
We can compute $\color{red}{\sum_{s = i}^{i+B}} \partial\ell_{s}/\partial w_j$ easily as follows:
End of explanation
from math import sqrt
def logistic_regression_SG(feature_matrix, sentiment, initial_coefficients, step_size, batch_size, max_iter):
log_likelihood_all = []
# make sure it's a numpy array
coefficients = np.array(initial_coefficients)
# set seed=1 to produce consistent results
np.random.seed(seed=1)
# Shuffle the data before starting
permutation = np.random.permutation(len(feature_matrix))
feature_matrix = feature_matrix[permutation,:]
sentiment = sentiment[permutation]
i = 0 # index of current batch
# Do a linear scan over data
for itr in xrange(max_iter):
# Predict P(y_i = +1|x_i,w) using your predict_probability() function
# Make sure to slice the i-th row of feature_matrix with [i:i+batch_size,:]
### YOUR CODE HERE
predictions = predict_probability(feature_matrix[i:i+batch_size, :], coefficients)
if len(predictions) <= 0:
break;
# Compute indicator value for (y_i = +1)
# Make sure to slice the i-th entry with [i:i+batch_size]
### YOUR CODE HERE
indicator = (sentiment[i:i+batch_size] == +1)
# Compute the errors as indicator - predictions
errors = indicator - predictions
for j in xrange(len(coefficients)): # loop over each coefficient
# Recall that feature_matrix[:,j] is the feature column associated with coefficients[j]
# Compute the derivative for coefficients[j] and save it to derivative.
# Make sure to slice the i-th row of feature_matrix with [i:i+batch_size,j]
### YOUR CODE HERE
derivative = feature_derivative(errors, feature_matrix[i:i+batch_size, j])
# compute the product of the step size, the derivative, and the **normalization constant** (1./batch_size)
### YOUR CODE HERE
coefficients[j] += step_size*derivative * 1.0 / batch_size
# Checking whether log likelihood is increasing
# Print the log likelihood over the *current batch*
lp = compute_avg_log_likelihood(feature_matrix[i:i+batch_size,:], sentiment[i:i+batch_size],
coefficients)
log_likelihood_all.append(lp)
if itr <= 15 or (itr <= 1000 and itr % 100 == 0) or (itr <= 10000 and itr % 1000 == 0) \
or itr % 10000 == 0 or itr == max_iter-1:
data_size = len(feature_matrix)
print 'Iteration %*d: Average log likelihood (of data points in batch [%0*d:%0*d]) = %.8f' % \
(int(np.ceil(np.log10(max_iter))), itr, \
int(np.ceil(np.log10(data_size))), i, \
int(np.ceil(np.log10(data_size))), i+batch_size, lp)
# if we made a complete pass over data, shuffle and restart
i += batch_size
if i+batch_size > len(feature_matrix):
permutation = np.random.permutation(len(feature_matrix))
feature_matrix = feature_matrix[permutation,:]
sentiment = sentiment[permutation]
i = 0
# We return the list of log likelihoods for plotting purposes.
return coefficients, log_likelihood_all
Explanation: Quiz Question: The code block above computed
$\color{red}{\sum_{s = i}^{i+B}}\partial\ell_{s}(\mathbf{w})/{\partial w_j}$
for j = 10, i = 10, and B = 10. Is this a scalar or a 194-dimensional vector?
Quiz Question: For what value of B is the term
$\color{red}{\sum_{s = 1}^{B}}\partial\ell_{s}(\mathbf{w})/\partial w_j$
the same as the full gradient
$\partial\ell(\mathbf{w})/{\partial w_j}$?
Averaging the gradient across a batch
It is a common practice to normalize the gradient update rule by the batch size B:
$$
\frac{\partial\ell_{\color{red}{A}}(\mathbf{w})}{\partial w_j} \approx \color{red}{\frac{1}{B}} {\sum_{s = i}^{i + B}} h_j(\mathbf{x}_s)\left(\mathbf{1}[y_s = +1] - P(y_s = +1 | \mathbf{x}_s, \mathbf{w})\right)
$$
In other words, we update the coefficients using the average gradient over data points (instead of using a summation). By using the average gradient, we ensure that the magnitude of the gradient is approximately the same for all batch sizes. This way, we can more easily compare various batch sizes of stochastic gradient ascent (including a batch size of all the data points), and study the effect of batch size on the algorithm as well as the choice of step size.
Implementing stochastic gradient ascent
Now we are ready to implement our own logistic regression with stochastic gradient ascent. Complete the following function to fit a logistic regression model using gradient ascent:
End of explanation
sample_feature_matrix = np.array([[1.,2.,-1.], [1.,0.,1.]])
sample_sentiment = np.array([+1, -1])
coefficients, log_likelihood = logistic_regression_SG(sample_feature_matrix, sample_sentiment, np.zeros(3),
step_size=1., batch_size=2, max_iter=2)
print '-------------------------------------------------------------------------------------'
print 'Coefficients learned :', coefficients
print 'Average log likelihood per-iteration :', log_likelihood
if np.allclose(coefficients, np.array([-0.09755757, 0.68242552, -0.7799831]), atol=1e-3)\
and np.allclose(log_likelihood, np.array([-0.33774513108142956, -0.2345530939410341])):
# pass if elements match within 1e-3
print '-------------------------------------------------------------------------------------'
print 'Test passed!'
else:
print '-------------------------------------------------------------------------------------'
print 'Test failed'
Explanation: Note. In practice, the final set of coefficients is rarely used; it is better to use the average of the last K sets of coefficients instead, where K should be adjusted depending on how fast the log likelihood oscillates around the optimum.
Checkpoint
The following cell tests your stochastic gradient ascent function using a toy dataset consisting of two data points. If the test does not pass, make sure you are normalizing the gradient update rule correctly.
End of explanation
coefficients, log_likelihood = logistic_regression_SG(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-1, batch_size=1, max_iter=10)
Explanation: Compare convergence behavior of stochastic gradient ascent
For the remainder of the assignment, we will compare stochastic gradient ascent against batch gradient ascent. For this, we need a reference implementation of batch gradient ascent. But do we need to implement this from scratch?
Quiz Question: For what value of batch size B above is the stochastic gradient ascent function logistic_regression_SG act as a standard gradient ascent algorithm?
Running gradient ascent using the stochastic gradient ascent implementation
Instead of implementing batch gradient ascent separately, we save time by re-using the stochastic gradient ascent function we just wrote — to perform gradient ascent, it suffices to set batch_size to the number of data points in the training data. Yes, we did answer above the quiz question for you, but that is an important point to remember in the future :)
Small Caveat. The batch gradient ascent implementation here is slightly different than the one in the earlier assignments, as we now normalize the gradient update rule.
We now run stochastic gradient ascent over the feature_matrix_train for 10 iterations using:
* initial_coefficients = np.zeros(194)
* step_size = 5e-1
* batch_size = 1
* max_iter = 10
End of explanation
# YOUR CODE HERE
coefficients_batch, log_likelihood_batch = logistic_regression_SG(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-1, batch_size=len(feature_matrix_train), max_iter=200)
Explanation: Quiz Question. When you set batch_size = 1, as each iteration passes, how does the average log likelihood in the batch change?
* Increases
* Decreases
* Fluctuates
Now run batch gradient ascent over the feature_matrix_train for 200 iterations using:
* initial_coefficients = np.zeros(194)
* step_size = 5e-1
* batch_size = len(feature_matrix_train)
* max_iter = 200
End of explanation
2*50000/100
Explanation: Quiz Question. When you set batch_size = len(train_data), as each iteration passes, how does the average log likelihood in the batch change?
* Increases
* Decreases
* Fluctuates
Make "passes" over the dataset
To make a fair comparison betweeen stochastic gradient ascent and batch gradient ascent, we measure the average log likelihood as a function of the number of passes (defined as follows):
$$
[\text{# of passes}] = \frac{[\text{# of data points touched so far}]}{[\text{size of dataset}]}
$$
Quiz Question Suppose that we run stochastic gradient ascent with a batch size of 100. How many gradient updates are performed at the end of two passes over a dataset consisting of 50000 data points?
End of explanation
step_size = 1e-1
batch_size = 100
num_passes = 10
num_iterations = num_passes * int(len(feature_matrix_train)/batch_size)
coefficients_sgd, log_likelihood_sgd = logistic_regression_SG(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=1e-1, batch_size=100, max_iter=num_iterations)
Explanation: Log likelihood plots for stochastic gradient ascent
With the terminology in mind, let us run stochastic gradient ascent for 10 passes. We will use
* step_size=1e-1
* batch_size=100
* initial_coefficients to all zeros.
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
def make_plot(log_likelihood_all, len_data, batch_size, smoothing_window=1, label=''):
plt.rcParams.update({'figure.figsize': (9,5)})
log_likelihood_all_ma = np.convolve(np.array(log_likelihood_all), \
np.ones((smoothing_window,))/smoothing_window, mode='valid')
plt.plot(np.array(range(smoothing_window-1, len(log_likelihood_all)))*float(batch_size)/len_data,
log_likelihood_all_ma, linewidth=4.0, label=label)
plt.rcParams.update({'font.size': 16})
plt.tight_layout()
plt.xlabel('# of passes over data')
plt.ylabel('Average log likelihood per data point')
plt.legend(loc='lower right', prop={'size':14})
make_plot(log_likelihood_sgd, len_data=len(feature_matrix_train), batch_size=100,
label='stochastic gradient, step_size=1e-1')
Explanation: We provide you with a utility function to plot the average log likelihood as a function of the number of passes.
End of explanation
make_plot(log_likelihood_sgd, len_data=len(feature_matrix_train), batch_size=100,
smoothing_window=30, label='stochastic gradient, step_size=1e-1')
Explanation: Smoothing the stochastic gradient ascent curve
The plotted line oscillates so much that it is hard to see whether the log likelihood is improving. In our plot, we apply a simple smoothing operation using the parameter smoothing_window. The smoothing is simply a moving average of log likelihood over the last smoothing_window "iterations" of stochastic gradient ascent.
End of explanation
step_size = 1e-1
batch_size = 100
num_passes = 200
num_iterations = num_passes * int(len(feature_matrix_train)/batch_size)
## YOUR CODE HERE
coefficients_sgd, log_likelihood_sgd = logistic_regression_SG(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=step_size, batch_size=batch_size, max_iter=num_iterations)
Explanation: Checkpoint: The above plot should look smoother than the previous plot. Play around with smoothing_window. As you increase it, you should see a smoother plot.
Stochastic gradient ascent vs batch gradient ascent
To compare convergence rates for stochastic gradient ascent with batch gradient ascent, we call make_plot() multiple times in the same cell.
We are comparing:
* stochastic gradient ascent: step_size = 0.1, batch_size=100
* batch gradient ascent: step_size = 0.5, batch_size=len(feature_matrix_train)
Write code to run stochastic gradient ascent for 200 passes using:
* step_size=1e-1
* batch_size=100
* initial_coefficients to all zeros.
End of explanation
make_plot(log_likelihood_sgd, len_data=len(feature_matrix_train), batch_size=100,
smoothing_window=30, label='stochastic, step_size=1e-1')
make_plot(log_likelihood_batch, len_data=len(feature_matrix_train), batch_size=len(feature_matrix_train),
smoothing_window=1, label='batch, step_size=5e-1')
Explanation: We compare the convergence of stochastic gradient ascent and batch gradient ascent in the following cell. Note that we apply smoothing with smoothing_window=30.
End of explanation
batch_size = 100
num_passes = 10
num_iterations = num_passes * int(len(feature_matrix_train)/batch_size)
coefficients_sgd = {}
log_likelihood_sgd = {}
for step_size in np.logspace(-4, 2, num=7):
coefficients_sgd[step_size], log_likelihood_sgd[step_size] = logistic_regression_SG(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=step_size, batch_size=batch_size, max_iter=num_iterations)
Explanation: Quiz Question: In the figure above, how many passes does batch gradient ascent need to achieve a similar log likelihood as stochastic gradient ascent?
It's always better
10 passes
20 passes
150 passes or more
Explore the effects of step sizes on stochastic gradient ascent
In previous sections, we chose step sizes for you. In practice, it helps to know how to choose good step sizes yourself.
To start, we explore a wide range of step sizes that are equally spaced in the log space. Run stochastic gradient ascent with step_size set to 1e-4, 1e-3, 1e-2, 1e-1, 1e0, 1e1, and 1e2. Use the following set of parameters:
* initial_coefficients=np.zeros(194)
* batch_size=100
* max_iter initialized so as to run 10 passes over the data.
End of explanation
for step_size in np.logspace(-4, 2, num=7):
make_plot(log_likelihood_sgd[step_size], len_data=len(train_data), batch_size=100,
smoothing_window=30, label='step_size=%.1e'%step_size)
Explanation: Plotting the log likelihood as a function of passes for each step size
Now, we will plot the change in log likelihood using the make_plot for each of the following values of step_size:
step_size = 1e-4
step_size = 1e-3
step_size = 1e-2
step_size = 1e-1
step_size = 1e0
step_size = 1e1
step_size = 1e2
For consistency, we again apply smoothing_window=30.
End of explanation
for step_size in np.logspace(-4, 2, num=7)[0:6]:
make_plot(log_likelihood_sgd[step_size], len_data=len(train_data), batch_size=100,
smoothing_window=30, label='step_size=%.1e'%step_size)
Explanation: Now, let us remove the step size step_size = 1e2 and plot the rest of the curves.
End of explanation |
2,307 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Standar usage of TensoFlow with model class
Tipically use 3 files
Step2: model_mnist_cnn.py
Step3: train.py | Python Code:
#! /usr/bin/env python
import tensorflow as tf
# Access to the data
def get_data(data_dir='/tmp/MNIST_data'):
from tensorflow.examples.tutorials.mnist import input_data
return input_data.read_data_sets(data_dir, one_hot=True)
#Batch generator
def batch_generator(mnist, batch_size=256, type='train'):
if type=='train':
return mnist.train.next_batch(batch_size)
else:
return mnist.test.next_batch(batch_size)
Explanation: Standar usage of TensoFlow with model class
Tipically use 3 files:
- data_utils.py: With the data access and batch generator functions
- model.py: With the class model. A constructor with the graph definition and method to manage model needs
- train.py: With parameters. Access to the data, instance the model and train it. Optionaly add a parameter to train or inference.
data_utils.py
End of explanation
#! /usr/bin/env python
import tensorflow as tf
class mnistCNN(object):
A NN for mnist classification.
def __init__(self, dense=500):
# Placeholders for input, output and dropout
self.input_x = tf.placeholder(tf.float32, [None, 784], name="input_x")
self.input_y = tf.placeholder(tf.float32, [None, 10], name="input_y")
# First layer
self.dense_1 = self.dense_layer(self.input_x, input_dim=784, output_dim=dense)
# Final layer
self.dense_2 = self.dense_layer(self.dense_1, input_dim=dense, output_dim=10)
self.predictions = tf.argmax(self.dense_2, 1, name="predictions")
# Loss function
self.loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(self.dense_2, self.input_y))
# Accuracy
correct_predictions = tf.equal(self.predictions, tf.argmax(self.input_y, 1))
self.accuracy = tf.reduce_mean(tf.cast(correct_predictions, "float"), name="accuracy")
def dense_layer(self, x, input_dim=10, output_dim=10, name='dense'):
'''
Dense layer function
Inputs:
x: Input tensor
input_dim: Dimmension of the input tensor.
output_dim: dimmension of the output tensor
name: Layer name
'''
W = tf.Variable(tf.truncated_normal([input_dim, output_dim], stddev=0.1), name='W_'+name)
b = tf.Variable(tf.constant(0.1, shape=[output_dim]), name='b_'+name)
dense_output = tf.nn.relu(tf.matmul(x, W) + b)
return dense_output
Explanation: model_mnist_cnn.py
End of explanation
#! /usr/bin/env python
from __future__ import print_function
import tensorflow as tf
#from data_utils import get_data, batch_generator
#from model_mnist_cnn import mnistCNN
# Parameters
# ==================================================
# Data loading params
tf.flags.DEFINE_string("data_directory", '/tmp/MNIST_data', "Data dir (default /tmp/MNIST_data)")
# Model Hyperparameters
tf.flags.DEFINE_integer("dense_size", 500, "dense_size (default 500)")
# Training parameters
tf.flags.DEFINE_float("learning_rate", 0.001, "learning rate (default: 0.001)")
tf.flags.DEFINE_integer("batch_size", 256, "Batch Size (default: 256)")
tf.flags.DEFINE_integer("num_epochs", 20, "Number of training epochs (default: 20)")
# Misc Parameters
tf.flags.DEFINE_boolean("log_device_placement", False, "Log placement of ops on devices")
FLAGS = tf.flags.FLAGS
FLAGS._parse_flags()
print("\nParameters:")
for attr, value in sorted(FLAGS.__flags.items()):
print("{}={}".format(attr.upper(), value))
print("")
# Data Preparation
# ==================================================
#Access to the data
mnist_data = get_data(data_dir= FLAGS.data_directory)
# Training
# ==================================================
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333, allow_growth = True)
with tf.Graph().as_default():
session_conf = tf.ConfigProto(
gpu_options=gpu_options,
log_device_placement=FLAGS.log_device_placement)
sess = tf.Session(config=session_conf)
with sess.as_default():
# Create model
cnn = mnistCNN(dense=FLAGS.dense_size)
# Trainer
train_op = tf.train.AdamOptimizer(FLAGS.learning_rate).minimize(cnn.loss)
# Saver
saver = tf.train.Saver(max_to_keep=1)
# Initialize all variables
sess.run(tf.global_variables_initializer())
# Train proccess
for epoch in range(FLAGS.num_epochs):
for n_batch in range(int(55000/FLAGS.batch_size)):
batch = batch_generator(mnist_data, batch_size=FLAGS.batch_size, type='train')
_, ce = sess.run([train_op, cnn.loss], feed_dict={cnn.input_x: batch[0], cnn.input_y: batch[1]})
print(epoch, ce)
model_file = saver.save(sess, '/tmp/mnist_model')
print('Model saved in', model_file)
Explanation: train.py
End of explanation |
2,308 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classification
COMP4670/8600 - Introduction to Statistical Machine Learning - Tutorial 3
$\newcommand{\trace}[1]{\operatorname{tr}\left{#1\right}}$
$\newcommand{\Norm}[1]{\lVert#1\rVert}$
$\newcommand{\RR}{\mathbb{R}}$
$\newcommand{\inner}[2]{\langle #1, #2 \rangle}$
$\newcommand{\DD}{\mathscr{D}}$
$\newcommand{\grad}[1]{\operatorname{grad}#1}$
$\DeclareMathOperator*{\argmin}{arg\,min}$
Setting up the environment
We use the SciPy implementation of the logistic sigmoid function, rather than (naively) implementing it ourselves, to avoid issues relating to numerical computation.
Step1: The data set
We will predict the incidence of diabetes based on various measurements (see description). Instead of directly using the raw data, we use a normalised version, where the label to be predicted (the incidence of diabetes) is in the first column. Download the data from mldata.org.
Read in the data using pandas.
Step2: Classification via Logistic Regression
Implement binary classification using logistic regression for a data set with two classes. Make sure you use appropriate python style and docstrings.
Use scipy.optimize.fmin_bfgs to optimise your cost function. fmin_bfgs requires the cost function to be optimised, and the gradient of this cost function. Implement these two functions as cost and grad by following the equations in the lectures.
Implement the function train that takes a matrix of examples, and a vector of labels, and returns the maximum likelihood weight vector for logistic regresssion. Also implement a function test that takes this maximum likelihood weight vector and the a matrix of examples, and returns the predictions. See the section Putting everything together below for expected usage.
We add an extra column of ones to represent the constant basis.
Step7: The Set-up
We have 9 input variables $x_0, \dots, x_8$ where $x_0$ is the dummy input variable fixed at 1. (The fixed dummy input variable could easily be $x_5$ or $x_8$, it's index is unimportant.) We set the basis functions to the simplest choice $\phi_0(\mathbf{x}) = x_0, \dots, \phi_8(\mathbf{x}) = x_8$. Our model then has the form
$$
y(\mathbf{x}) = \sigma(\sum_{j=0}^{8} w_j x_j) = \sigma(\mathbf{w}^T \mathbf{x}.)
$$
Here we have a dataset, ${(\mathbf{x}n, t_n)}{n=1}^{N}$ where $t_n \in {0, 1}$, with $N=768$ examples. We train our model by finding the parameter vector $\mathbf{w}$ which minimizes the (data-dependent) cross-entropy error function
$$
E_D(\mathbf{w}) = - \sum_{n=1}^{N} {t_n \ln \sigma(\mathbf{w}^T \mathbf{x}n) + (1 - t_n)\ln(1 - \sigma(\mathbf{w}^T \mathbf{x}_n))}.
$$
The gradient of this function is given by
$$
\nabla E(\mathbf{w}) = \sum{i=1}^{N} (\sigma(\mathbf{w}^T \mathbf{x}_n) - t_n)\mathbf{x}_n.
$$
Step13: Performance measure
There are many ways to compute the performance of a binary classifier. The key concept is the idea of a confusion matrix or contingency table
Step14: Putting everything together
Consider the following code, which trains on all the examples, and predicts on the training set. Discuss the results.
Step15: To aid our discussion we give the positive predictive value (PPV) and negative predictive value (NPV) also.
Step18: Discussion
Overall, the accuracy of our model is reasonable, given our naive choice of basis functions, as is its balanced accuracy. The discrepancy between these values can be accounted for by the PPV being higher than the NPV.
(optional) Effect of regularization parameter
By splitting the data into two halves, train on one half and report performance on the second half. By repeating this experiment for different values of the regularization parameter $\lambda$ we can get a feeling about the variability in the performance of the classifier due to regularization. Plot the values of accuracy and balanced accuracy for at least 3 different choices of $\lambda$. Note that you may have to update your implementation of logistic regression to include the regularisation parameter. | Python Code:
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import scipy.optimize as opt
from scipy.special import expit # The logistic sigmoid function
%matplotlib inline
Explanation: Classification
COMP4670/8600 - Introduction to Statistical Machine Learning - Tutorial 3
$\newcommand{\trace}[1]{\operatorname{tr}\left{#1\right}}$
$\newcommand{\Norm}[1]{\lVert#1\rVert}$
$\newcommand{\RR}{\mathbb{R}}$
$\newcommand{\inner}[2]{\langle #1, #2 \rangle}$
$\newcommand{\DD}{\mathscr{D}}$
$\newcommand{\grad}[1]{\operatorname{grad}#1}$
$\DeclareMathOperator*{\argmin}{arg\,min}$
Setting up the environment
We use the SciPy implementation of the logistic sigmoid function, rather than (naively) implementing it ourselves, to avoid issues relating to numerical computation.
End of explanation
names = ['diabetes', 'num preg', 'plasma', 'bp', 'skin fold', 'insulin', 'bmi', 'pedigree', 'age']
data = pd.read_csv('diabetes_scale.csv', header=None, names=names)
data['diabetes'].replace(-1, 0, inplace=True) # The target variable need be 1 or 0, not 1 or -1
data.head()
Explanation: The data set
We will predict the incidence of diabetes based on various measurements (see description). Instead of directly using the raw data, we use a normalised version, where the label to be predicted (the incidence of diabetes) is in the first column. Download the data from mldata.org.
Read in the data using pandas.
End of explanation
data['ones'] = np.ones((data.shape[0], 1)) # Add a column of ones
data.head()
data.shape
Explanation: Classification via Logistic Regression
Implement binary classification using logistic regression for a data set with two classes. Make sure you use appropriate python style and docstrings.
Use scipy.optimize.fmin_bfgs to optimise your cost function. fmin_bfgs requires the cost function to be optimised, and the gradient of this cost function. Implement these two functions as cost and grad by following the equations in the lectures.
Implement the function train that takes a matrix of examples, and a vector of labels, and returns the maximum likelihood weight vector for logistic regresssion. Also implement a function test that takes this maximum likelihood weight vector and the a matrix of examples, and returns the predictions. See the section Putting everything together below for expected usage.
We add an extra column of ones to represent the constant basis.
End of explanation
def cost(w, X, y, c=0):
Returns the cross-entropy error function with (optional) sum-of-squares regularization term.
w -- parameters
X -- dataset of features where each row corresponds to a single sample
y -- dataset of labels where each row corresponds to a single sample
c -- regularization coefficient (default = 0)
outputs = expit(X.dot(w)) # Vector of outputs (or predictions)
return -( y.transpose().dot(np.log(outputs)) + (1-y).transpose().dot(np.log(1-outputs)) ) + c*0.5*w.dot(w)
def grad(w, X, y, c=0):
Returns the gradient of the cross-entropy error function with (optional) sum-of-squares regularization term.
outputs = expit(X.dot(w))
return X.transpose().dot(outputs-y) + c*w
def train(X, y,c=0):
Returns the vector of parameters which minimizes the error function via the BFGS algorithm.
initial_values = np.zeros(X.shape[1]) # Error occurs if inital_values is set too high
return opt.fmin_bfgs(cost, initial_values, fprime=grad, args=(X,y,c))
def predict(w, X):
Returns a vector of predictions.
return expit(X.dot(w))
Explanation: The Set-up
We have 9 input variables $x_0, \dots, x_8$ where $x_0$ is the dummy input variable fixed at 1. (The fixed dummy input variable could easily be $x_5$ or $x_8$, it's index is unimportant.) We set the basis functions to the simplest choice $\phi_0(\mathbf{x}) = x_0, \dots, \phi_8(\mathbf{x}) = x_8$. Our model then has the form
$$
y(\mathbf{x}) = \sigma(\sum_{j=0}^{8} w_j x_j) = \sigma(\mathbf{w}^T \mathbf{x}.)
$$
Here we have a dataset, ${(\mathbf{x}n, t_n)}{n=1}^{N}$ where $t_n \in {0, 1}$, with $N=768$ examples. We train our model by finding the parameter vector $\mathbf{w}$ which minimizes the (data-dependent) cross-entropy error function
$$
E_D(\mathbf{w}) = - \sum_{n=1}^{N} {t_n \ln \sigma(\mathbf{w}^T \mathbf{x}n) + (1 - t_n)\ln(1 - \sigma(\mathbf{w}^T \mathbf{x}_n))}.
$$
The gradient of this function is given by
$$
\nabla E(\mathbf{w}) = \sum{i=1}^{N} (\sigma(\mathbf{w}^T \mathbf{x}_n) - t_n)\mathbf{x}_n.
$$
End of explanation
def confusion_matrix(predictions, y):
Returns the confusion matrix [[tp, fp], [fn, tn]].
predictions -- dataset of predictions (or outputs) from a model
y -- dataset of labels where each row corresponds to a single sample
tp, fp, fn, tn = 0, 0, 0, 0
predictions = predictions.round().values # Converts to numpy.ndarray
y = y.values
for prediction, label in zip(predictions, y):
if prediction == label:
if prediction == 1:
tp += 1
else:
tn += 1
else:
if prediction == 1:
fp += 1
else:
fn += 1
return np.array([[tp, fp], [fn, tn]])
def accuracy(cm):
Returns the accuracy, (tp + tn)/(tp + fp + fn + tn).
return cm.trace()/cm.sum()
def positive_pred_value(cm):
Returns the postive predictive value, tp/p.
return cm[0,0]/(cm[0,0] + cm[0,1])
def negative_pred_value(cm):
Returns the negative predictive value, tn/n.
return cm[1,1]/(cm[1,0] + cm[1,1])
def balanced_accuracy(cm):
Returns the balanced accuracy, (tp/p + tn/n)/2.
return (cm[0,0]/(cm[0,0] + cm[0,1]) + cm[1,1]/(cm[1,0] + cm[1,1]))/2
Explanation: Performance measure
There are many ways to compute the performance of a binary classifier. The key concept is the idea of a confusion matrix or contingency table:
| | | Label | |
|:-------------|:--:|:-----:|:--:|
| | | +1 | -1 |
|Prediction| +1 | TP | FP |
| | -1 | FN | TN |
where
* TP - true positive
* FP - false positive
* FN - false negative
* TN - true negative
Implement three functions, the first one which returns the confusion matrix for comparing two lists (one set of predictions, and one set of labels). Then implement two functions that take the confusion matrix as input and returns the accuracy and balanced accuracy respectively. The balanced accuracy is the average accuracy of each class.
End of explanation
y = data['diabetes']
X = data[['num preg', 'plasma', 'bp', 'skin fold', 'insulin', 'bmi', 'pedigree', 'age', 'ones']]
theta_best = train(X, y)
print(theta_best)
pred = predict(theta_best, X)
cmatrix = confusion_matrix(pred, y)
[accuracy(cmatrix), balanced_accuracy(cmatrix)]
Explanation: Putting everything together
Consider the following code, which trains on all the examples, and predicts on the training set. Discuss the results.
End of explanation
[positive_pred_value(cmatrix), negative_pred_value(cmatrix)]
Explanation: To aid our discussion we give the positive predictive value (PPV) and negative predictive value (NPV) also.
End of explanation
def split_data(data):
Randomly split data into two equal groups.
np.random.seed(1)
N = len(data)
idx = np.arange(N)
np.random.shuffle(idx)
train_idx = idx[:int(N/2)]
test_idx = idx[int(N/2):]
X_train = data.loc[train_idx].drop('diabetes', axis=1)
t_train = data.loc[train_idx]['diabetes']
X_test = data.loc[test_idx].drop('diabetes', axis=1)
t_test = data.loc[test_idx]['diabetes']
return X_train, t_train, X_test, t_test
def reg_coefficient_comparison(reg_coefficients, X_train, t_train, X_test, t_test):
Returns the accuracy and balanced accuracy for the given regularization coefficient values.
reg_coefficients -- list of regularization coefficient values
X_train -- the input dataset used for training
t_train -- the dataset of labels used for training
X_test -- the input dataset used to make predictions from the trained model
t_test -- dataset of labels for performance assessment
summary = []
for c in reg_coefficients:
w_best = train(X_train, t_train, c)
predictions = predict(w_best, X_test)
cm = confusion_matrix(predictions, t_test)
summary.append([c, accuracy(cm), balanced_accuracy(cm)])
return pd.DataFrame(summary, columns=["regularization coefficient", "accuracy", "balanced accuracy"])
X_train, t_train, X_test, t_test = split_data(data)
reg_coefficients = [0, 0.01, 0.1, 0.25, 0.5, 1, 1.5, 1.75, 2, 5, 9, 10, 11, 20, 100, 150]
reg_coefficient_comparison(reg_coefficients, X_train, t_train, X_test, t_test)
Explanation: Discussion
Overall, the accuracy of our model is reasonable, given our naive choice of basis functions, as is its balanced accuracy. The discrepancy between these values can be accounted for by the PPV being higher than the NPV.
(optional) Effect of regularization parameter
By splitting the data into two halves, train on one half and report performance on the second half. By repeating this experiment for different values of the regularization parameter $\lambda$ we can get a feeling about the variability in the performance of the classifier due to regularization. Plot the values of accuracy and balanced accuracy for at least 3 different choices of $\lambda$. Note that you may have to update your implementation of logistic regression to include the regularisation parameter.
End of explanation |
2,309 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
XGBoost Cross Validation
The Python wrap around XGBoots implements a scikit-learn interface and this interface, more or less, support the scikit-learn cross validation system. More, XGBoost have is own cross validation system and the Python wrap support it. In other words, we have two cross validation systems. They are partialy supported and the functionalities supported for XGBoost are not the same for LightGBM. Currently, it's a puzzle.
The example presented covers both cases. The first, step_GradientBoostingCV, call the XGBoost cross validation. The second, step_GridSearchCV, call the scikit-learn cross validation.
The data preparation is the same as for the nbex_xgb_model.ipynb example. We take only two images to speed up the process.
The 'Tune' class manages everything.
The step_GradientBoostingCV method call the XGBoost cv() function.
The step_GridSearchCV method call the scikit-learn GridSearchCV() function.
Take note that this is in development and that changes can be significant.
Step1: X_train and y_train sets are built
The class Tune is created with the HyperXGBClassifier estimator. It's ready for cross validation, we can call Tune methods repeatedly with differents cv hypothesis.
Step2: We set an hypothesis and call the Gradient Boosting cross validation
Step3: Same but this time we call the scikit-learn cross validation
Step4: Finally, the result | Python Code:
%matplotlib inline
from __future__ import print_function
import os
import os.path as osp
import numpy as np
import pysptools.ml as ml
import pysptools.skl as skl
from sklearn.model_selection import train_test_split
home_path = os.environ['HOME']
source_path = osp.join(home_path, 'dev-data/CZ_hsdb')
result_path = None
def print_step_header(step_id, title):
print('================================================================')
print('{}: {}'.format(step_id, title))
print('================================================================')
print()
# img1
img1_scaled, img1_cmap = ml.get_scaled_img_and_class_map(source_path, result_path, 'img1',
[['Snow',{'rec':(41,79,49,100)}]],
skl.HyperGaussianNB, None,
display=False)
# img2
img2_scaled, img2_cmap = ml.get_scaled_img_and_class_map(source_path, result_path, 'img2',
[['Snow',{'rec':(83,50,100,79)},{'rec':(107,151,111,164)}]],
skl.HyperLogisticRegression, {'class_weight':{0:1.0,1:5}},
display=False)
def step_GradientBoostingCV(tune, update, cv_params, verbose):
print_step_header('Step', 'GradientBoosting cross validation')
tune.print_params('input')
tune.step_GradientBoostingCV(update, cv_params, verbose)
def step_GridSearchCV(tune, params, title, verbose):
print_step_header('Step', 'scikit-learn cross-validation')
tune.print_params('input')
tune.step_GridSearchCV(params, title, verbose)
tune.print_params('output')
Explanation: XGBoost Cross Validation
The Python wrap around XGBoots implements a scikit-learn interface and this interface, more or less, support the scikit-learn cross validation system. More, XGBoost have is own cross validation system and the Python wrap support it. In other words, we have two cross validation systems. They are partialy supported and the functionalities supported for XGBoost are not the same for LightGBM. Currently, it's a puzzle.
The example presented covers both cases. The first, step_GradientBoostingCV, call the XGBoost cross validation. The second, step_GridSearchCV, call the scikit-learn cross validation.
The data preparation is the same as for the nbex_xgb_model.ipynb example. We take only two images to speed up the process.
The 'Tune' class manages everything.
The step_GradientBoostingCV method call the XGBoost cv() function.
The step_GridSearchCV method call the scikit-learn GridSearchCV() function.
Take note that this is in development and that changes can be significant.
End of explanation
verbose = False
n_shrink = 3
snow_fname = ['img1','img2']
nosnow_fname = ['imga1','imgb1','imgb6','imga7']
all_fname = snow_fname + nosnow_fname
snow_img = [img1_scaled,img2_scaled]
nosnow_img = ml.batch_load(source_path, nosnow_fname, n_shrink)
snow_cmap = [img1_cmap,img2_cmap]
M = snow_img[0]
bkg_cmap = np.zeros((M.shape[0],M.shape[1]))
X,y = skl.shape_to_XY(snow_img+nosnow_img,
snow_cmap+[bkg_cmap,bkg_cmap,bkg_cmap,bkg_cmap])
seed = 5
train_size = 0.25
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=train_size,
random_state=seed)
start_param = {'max_depth':10,
'min_child_weight':1,
'gamma':0,
'subsample':0.8,
'colsample_bytree':0.5,
'scale_pos_weight':1.5}
# Tune can be call with HyperXGBClassifier or HyperLGBMClassifier,
# but hyperparameters and cv parameters are differents
t = ml.Tune(ml.HyperXGBClassifier, start_param, X_train, y_train)
Explanation: X_train and y_train sets are built
The class Tune is created with the HyperXGBClassifier estimator. It's ready for cross validation, we can call Tune methods repeatedly with differents cv hypothesis.
End of explanation
# Step 1: Fix learning rate and number of estimators for tuning tree-based parameters
step_GradientBoostingCV(t, {'learning_rate':0.2,'n_estimators':500,'silent':1},
{'verbose_eval':False},
True)
# After reading the cross validation results we manually set n_estimator
t.p_update({'n_estimators':9})
t.print_params('output')
Explanation: We set an hypothesis and call the Gradient Boosting cross validation
End of explanation
# Step 2: Tune max_depth and min_child_weight
step_GridSearchCV(t, {'max_depth':[24,25, 26], 'min_child_weight':[1]}, 'Step 2', True)
Explanation: Same but this time we call the scikit-learn cross validation
End of explanation
print(t.get_p_current())
Explanation: Finally, the result
End of explanation |
2,310 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plan
Some data
Step1: $\Rightarrow$ various different price series
Step2: $\Longrightarrow$ There was a stock split 7
Step3: Define new financial instruments
What we have now prices of financial instruments
Step4: Now the roughness of the chart looks more even $\Rightarrow$ We should model increments proportional to the stock price!
This leads us to some assumptions for the stock price process
Step5: Optionprices
Step6: This curve can also be calculated theoretically. Using stochastic calculus, one can deduce the famous Black-Scholes equation, to calculate this curve. We will not go into detail ...
Step7: ... but will just state the final result!
Black Scholes formula
Step8: For small prices we do not need to own shares, to hedge the option. For high prices we need exactly one share. The interesting area is around the strike price.
Simulate a portfolio consisting of 1 call option and $-\Delta$ Shares
Step9: Challenges
1) the price depends on the calibration of $\sigma$! Parameters may not be constant over time!
2) the price depends on the validity of the model
The main problem is the second one
Step10: This is not a normal distribution!
2) normally distributed increments are not realistic. Real distributions are
- Heavy tails
Step11: Proposed solution
Find a way to price an option without the assumption of a market model, without the need to calibrate and recalibrate the model. | Python Code:
aapl = data.DataReader('AAPL', 'yahoo', '2000-01-01')
print(aapl.head())
Explanation: Plan
Some data: look at some stock price series
devise a model for stock price series: Geometric Brownian Motion (GBM)
Example for a contingent claim: call option
Pricing of a call option under the assumtpion of GBM
Challenges
Some data: look at some stock price series
We import data from Yahoo finance: two examples are IBM and Apple
End of explanation
plt.plot(aapl.Close)
Explanation: $\Rightarrow$ various different price series
End of explanation
ibm = data.DataReader('AAPl', 'yahoo', '2000-1-1')
print(ibm['Adj Close'].head())
%matplotlib inline
ibm['Adj Close'].plot(figsize=(10,6))
plt.ylabel('price')
plt.xlabel('year')
plt.title('Price history of IBM stock')
ibm = data.DataReader('IBM', 'yahoo', '2000-1-1')
print(ibm['Adj Close'].head())
%matplotlib inline
ibm['Adj Close'].plot(figsize=(10,6))
plt.ylabel('price')
plt.xlabel('year')
plt.title('Price history of IBM stock')
Explanation: $\Longrightarrow$ There was a stock split 7:1 on 06/09/2014.
As we do not want to take care of things like that, we use the Adjusted close price!
End of explanation
Log_Data = plt.figure()
%matplotlib inline
plt.plot(np.log(aapl['Adj Close']))
plt.ylabel('logarithmic price')
plt.xlabel('year')
plt.title('Logarithmic price history of Apple stock')
Explanation: Define new financial instruments
What we have now prices of financial instruments:
- bonds (assume: fixed price)
- stocks
- exchange rates
- oil
- dots
$\Longrightarrow$ Tradeables with variable prices
We can form a portfolio by
- holding some cash (possibly less than 0, that's called debts)
- buying some stock/currency etc. (perhaps less than 0, that's called 'short)
Why do we need more ?
you want to play
you are producing something and you want to assure, that the prices you achieve in one year are suffiientlyone a stock, and arewant to protect yourself against lower prices
you want to protect yourself against higher prices
you want to protect yourself against an increase in volatility
you want to protect yourself against extreme price movements
you want to ...
$\Longrightarrow$ Essentially you want to be able to control the final value of your portfolio!
You go the bank, the bank offers you some product, you buy and are happy ....°
Obligations for the bank
construct a product
price the product
hedge the product
For this talk, we take one of the easiest such products:, a call option.
Call option
Definition
Call option on a stock $S$ with strike price $K$ and expiry $T$:
The buyer of the call option has the right, but not the obligation, to buy $1$ stock $S$ (the underlying) from the seller of the option at a certain time (the expiration date $T$) for a certain price (the strike price $K$).
Payoff: $$C_T =\max(0, S-K)\,.$$
What can you do with a Call-option?
Example: you want to buy a stock next year, 01.01.2018, for 100:
Buy now a call for 100 (strike price).
Next year you can distinguish two distinct cases:
stock trades at 80 < 100 $\Longrightarrow$ buy stock for 80 Euro, forget call option - it call is worthless
stock trades at 120 > 100 $\Longrightarrow$ use call to buy stock for 100
How to price the call option?
match expectations
utility pricing
arbitrage free pricing $\Longrightarrow$ this is the price, enforced by the market
...
What is a fair price for an option with strike price $K$ and expiry $T$?
If the stock trades at a price $S$ at time $T$, then the payoff is:
Payoff: $C_T =\max(0, S-K)\,.$
If the interest rate is $r$, we discount future cashflows with $e^{- r T}$. Thus if the stock traded at price $S$ at expire, the resulting cash-flow would be worth(time $t = 0$)
$$C_0 = e^{- r T} \max(0, S-K)\,.$$
Problem: we do not know $S_T$ at time $0$.
Solution: we take the expectation of $S$. This yields
$$C_{0, S} = e^{- r T} \mathbb{E}\left[ \max(0, S-K)\right]\,.$$
Caveat: We have hidden a lot!!
The formal deduction is complicated via arbitrage free pricing and the Feynmann-Kac-theorem
How to constructthe expectation? Need a model for the stock price!
Construct a theoretical model for stock price movements: Geometric Brownian motion
For the apple chart one can see, that price increments seem to correlate with the price: thus we plot logarithmic prices:
Let us plot the data logarithmically:
End of explanation
S0 = 1
sigma = 0.2/np.sqrt(252)
mu = 0.08/252
%matplotlib inline
for i in range(0, 5):
r = np.random.randn((1000))
plt.plot(S0 * np.cumprod(np.exp(sigma *r +mu)))
S0 = 1.5 # start price
K = 1.0 # strike price
mu = 0 # average growth
sigma = 0.2/np.sqrt(252) # volatility
N = 10000 # runs
M = 252*4 # length of each run (252 business days per year times 4 years)
def call_price(S, K):
return max(0.0, S-K)
def MC_call_price(S0, K, mu, sigma, N, M):
CSum = 0
SSum = 0
for n in range(N):
r = np.random.randn((M))
S = S0 * np.cumprod(np.exp(sigma *r))
SSum += S
CSum += call_price(S[M-1], K)
return CSum/N
Explanation: Now the roughness of the chart looks more even $\Rightarrow$ We should model increments proportional to the stock price!
This leads us to some assumptions for the stock price process:
- the distribution of relative changes is constant over time
- Small changes appear often, large changes rarly: changes are normally distributed
$\Rightarrow$ use an exponential Gaussian distribution for increments:
$$S_{n+1} = S_n e^{\sigma X+ \mu} $$
where $X \sim N(0,1)$, $\sigma$ denotes the variance and $\mu$ the mean growth rate.
Let us simulate this:
typical values for $\mu$ and $\sigma$ per year are:
- $\mu_y= 0,08$
- $\sigma_y = 0.2$
$\Rightarrow$ assuming 252 business days a year we get
$$\mu = \mu_d = \frac{\mu_y}{252}\sim 0.0003$$
$$\sigma = \sigma_d = \frac{\sigma_y}{\sqrt{252}}\sim 0,012$$
End of explanation
S0 = np.linspace(0.0, 2.0,21)
C = []
for k in range(21):
C.append(MC_call_price(k*2/20, K, mu, sigma, N, M))
C
plt.plot(S0, C)
plt.ylabel('Call price')
plt.xlabel('Start price')
plt.title('Call price')
plt.show()
Explanation: Optionprices:
End of explanation
from IPython.display import Image
Image("Picture_Then_Miracle_Occurs.PNG")
Explanation: This curve can also be calculated theoretically. Using stochastic calculus, one can deduce the famous Black-Scholes equation, to calculate this curve. We will not go into detail ...
End of explanation
d_1 = lambda σ, T, t, S, K: 1. / σ / np.sqrt(T - t) * (np.log(S / K) + 0.5 * (σ ** 2) * (T-t))
d_2 = lambda σ, T, t, S, K: 1. / σ / np.sqrt(T - t) * (np.log(S / K) - 0.5 * (σ ** 2) * (T-t))
call = lambda σ, T, t, S, K: S * sp.stats.norm.cdf( d_1(σ, T, t, S, K) ) - K * sp.stats.norm.cdf( d_2(σ, T, t, S, K) )
Delta = lambda σ, T, t, S, K: sp.stats.norm.cdf( d_1(σ, T, t, S, K) )
plt.plot(np.linspace(sigma, 4., 100), call(1., 1., .9, np.linspace(0.1, 4., 100), 1.))
plt.plot(d_1(1., 1., 0., np.linspace(0.1, 2.9, 10), 1))
#plt.plot(np.linspace(sigma, 4., 100), Delta(1., 1., .9, np.linspace(0.1, 4., 100), 1.))
plt.plot(np.linspace(sigma, 1.9, 100), Delta(1., 1., 0.2, np.linspace(0.01, 1.9, 100), 1.))
plt.plot(np.linspace(sigma, 1.9, 100), Delta(1., 1., 0.6, np.linspace(0.01, 1.9, 100), 1.))
plt.plot(np.linspace(sigma, 1.9, 100), Delta(1., 1., 0.9, np.linspace(0.01, 1.9, 100), 1.))
plt.plot(np.linspace(sigma, 1.9, 100), Delta(1., 1., 0.99, np.linspace(0.01, 1.9, 100), 1.))
plt.plot(np.linspace(sigma, 1.9, 100), Delta(1., 1., 0.9999, np.linspace(0.01, 1.9, 100), 1.))
plt.xlabel("Price/strike price")
plt.ylabel("$\Delta$")
plt.legend(['t = 0.2','t = 0.6', 't = 0.9', 't = 0.99', 't = 0.9999'], loc = 2)
Explanation: ... but will just state the final result!
Black Scholes formula:
$${\displaystyle d_{1}={\frac {1}{\sigma {\sqrt {T-t}}}}\left[\ln \left({\frac {S_{t}}{K}}\right)+(r-q+{\frac {1}{2}}\sigma ^{2})(T-t)\right]}$$
$${\displaystyle d_{2}=d_{1}-\sigma {\sqrt {T-t}}={\frac {1}{\sigma {\sqrt {T-t}}}}\left[\ln \left({\frac {S_{t}}{K}}\right)+(r-q-{\frac {1}{2}}\sigma ^{2})(T-t)\right]}$$
Black-Scholes Formula for the call price:
$${\displaystyle C(S_{t},t)=e^{-r(T-t)}[S_tN(d_{1})-KN(d_{2})]\,}$$
$\Delta$ describes the change in the price of the option if the stock price changes by $1$.
Black Scholes formula for the Delta:
$$ \Delta(C, t) = e^{-r(T-t)} N(d_1)$$
End of explanation
N = 10 #runs
def Simulate_Price_Series(S0, sigma, N, M):
for n in (1,N):
r = np.random.randn((M))
S = S0 * np.cumprod(np.exp(sigma *r))
for m in (1,M):
P.append = Delta(sigma, M, m, S, K)*
return S
plt.plot(1+np.cumsum(np.diff(S) * Delta(sigma, 4, 0, S, K)[1, M-1]))
plt.plot(S)
S
len(Delta(sigma, 4, 0, S, K)[[1:999]])
def Calculate_Portfolio(S0, K, mu, sigma, N, M):
S = Simulate_Price_Series(S0, sigma, N, M)
StockDelta = Delta(sigma, 4, 0, S, K) )
vol = vol0 * np.cumprod(np.exp(sigma*r2)
S = S0 * np.cumprod(np.exp(vol * r))
SSum += S
CSum += call_price(S[M-1], K)
Explanation: For small prices we do not need to own shares, to hedge the option. For high prices we need exactly one share. The interesting area is around the strike price.
Simulate a portfolio consisting of 1 call option and $-\Delta$ Shares:
$$P = C - \Delta S$$
In approximation, the portfolio value should be constant!
End of explanation
np.histogram(np.diff(aapl['Adj Close']))
plt.hist(np.diff(aapl['Adj Close']), bins='auto') # plt.hist passes it's arguments to np.histogram
plt.title("Histogram of daily returns for Apple")
plt.show()
Explanation: Challenges
1) the price depends on the calibration of $\sigma$! Parameters may not be constant over time!
2) the price depends on the validity of the model
The main problem is the second one:
A)
$\sigma$ and $\mu$ may change over time. Hence changes of volatility should adapted in the price
$\Longrightarrow$ new more complex models describing stochastic volatility are introduced, for example:
- Heston model,
- Ball-Roma model,
- SABR-model and many more
B)
let us look at the log-returns:
End of explanation
def MC_call_price_Loc_Vol(S0, K, mu, sigma, N, M):
CSum = 0
SSum = 0
for n in range(N):
r = np.random.randn((M))
r2 = np.random.randn((M))
vol = vol0 * np.cumprod(np.exp(sigma*r2)
S = S0 * np.cumprod(np.exp(vol * r))
SSum += S
CSum += call_price(S[M-1], K)
return CSum/N
S0 = np.linspace(0.0, 2.0,21)
CLoc = []
for k in range(21):
CLoc.append(MC_call_price_Loc_Vol(k*2/20, K, mu, 0.1*sigma, N, M))
CLoc
plt.plot(S0, C)
plt.plot(S0, CLoc)
plt.ylabel('Call price')
plt.xlabel('Start price')
plt.title('Call price')
plt.show()
Explanation: This is not a normal distribution!
2) normally distributed increments are not realistic. Real distributions are
- Heavy tails:
- Gain/Loss asymmetry
- Aggregational Gaussianity
- Intermittency (parameter changes over time)
- Volatility clustering
- Leverage effect
- Volume/volatility correlation:
- Slow decay of autocorrelation in absolute returns:
- Asymmetry in time scales
(see for example: Rama Cont: Empirical properties of asset returns: stylized facts and statistical issues, Journal of quantitative finance, Volume 1 (2001) 223–236)
The option price depends on the model, on the calibration.
Alternative model: Local volatility model
The closest alternative to the Black-Scholes model are local volatility models.
End of explanation
def iterate_series(n=1000, S0 = 1):
while True:
r = np.random.randn((n))
S = np.cumsum(r) + S0
yield S, r
for (s, r) in iterate_series():
t, t_0 = 0, 0
for t in np.linspace(0, len(s)-1, 100):
r = s[int(t)] / s[int(t_0)]
t_0 = t
break
state = (stock_val, besitz)
Explanation: Proposed solution
Find a way to price an option without the assumption of a market model, without the need to calibrate and recalibrate the model.
End of explanation |
2,311 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
原文地址:itchat+pillow实现微信好友头像爬取和拼接。原文github地址
Step1: =======注意=======
这里我用Python2.7,实际上代码在Python3.6,3.5运行一切正常。itchat更新后就没有再测试啦。
注意微信更新了反广告机制,注意不要一次性发太多东西,避免被微信封号。
=======注意=======
核心
itchat读取微信好友列表和头像
用pillow拼接头像画图
登录微信网页版,itchat.auto_login()会生成QR码,使用手机扫一扫即可登录。
提示Please scan the QR code to log in.时,用手机扫描弹出的二维码
命令行二维码技巧:
1. 通过以下命令可以在登陆的时候使用命令行显示二维码:itchat.auto_login(enableCmdQR=True)
2. 若背景色为浅色(白色),可以将enableCmdQR赋值为负值:itchat.auto_login(enableCmdQR=-1)
Step2: 使用friends储存好友列表,update=True可以确保好友列表是最新的。注意好友列表第0个是自己
Step3: friends好友列表第0个是自己,我们可以看一下。顺带说一下,好友列表的顺序 (貌似) 是按照好友添加顺序
Step4: 创建一个目录,用来保存所有好友头像。注意使用os.chdir(user)切换个到工作目录,方便后续保存图片。
Step5: 批量下载好友头像,储存到friends[i]['img']中。然后我们print(friends[0]看看有没有变化(正常情况下应该可以看到增加了img,以二进制方式储存头像)。因为我家网络经常链接失败,所以用try...except...来写这一段。
"UserName"这个字段开头总有个@符号,直接暴力去除。如果不喜欢的话,可以把"UserName"换成"PYQuanPin",不过不保证重名。
同时把头像保存在user目录地下,方便下一步使用。
python使用open经常报错:TypeError
Step6: 看看我有多少个好友(friends里面有多少条记录),看看下载了多少头像(os.listdir(os.getcwd())看目录底下有多少文件)
Step7: 单个图像边长eachsize=64像素,一行eachline=int(sqrt(numImages))+1个头像,最终图像边长eachSize*eachline
Step8: import图像处理Python图像处理库:PIL中Image
1. 新建一块画布
2. 在坐标(0,0)位置放上第一个人头像
3. 向右平移坐标
Step9: 看一下拼接好的图像是什么样的(注意文件过大是常有的现象,请先去掉注释)
回到上一级目录(没人想在一堆文件里面找拼图吧?)
然后保存文件,顺带发送给文件传输助手
Step10: 至此,大功告成,别忘记退出网页版微信 | Python Code:
#我的Python版本是:
import sys
print(sys.version)
print(sys.version_info)
Explanation: 原文地址:itchat+pillow实现微信好友头像爬取和拼接。原文github地址
End of explanation
import itchat
itchat.auto_login()
Explanation: =======注意=======
这里我用Python2.7,实际上代码在Python3.6,3.5运行一切正常。itchat更新后就没有再测试啦。
注意微信更新了反广告机制,注意不要一次性发太多东西,避免被微信封号。
=======注意=======
核心
itchat读取微信好友列表和头像
用pillow拼接头像画图
登录微信网页版,itchat.auto_login()会生成QR码,使用手机扫一扫即可登录。
提示Please scan the QR code to log in.时,用手机扫描弹出的二维码
命令行二维码技巧:
1. 通过以下命令可以在登陆的时候使用命令行显示二维码:itchat.auto_login(enableCmdQR=True)
2. 若背景色为浅色(白色),可以将enableCmdQR赋值为负值:itchat.auto_login(enableCmdQR=-1)
End of explanation
friends = itchat.get_friends(update=True)[0:]
Explanation: 使用friends储存好友列表,update=True可以确保好友列表是最新的。注意好友列表第0个是自己
End of explanation
friends[0]
Explanation: friends好友列表第0个是自己,我们可以看一下。顺带说一下,好友列表的顺序 (貌似) 是按照好友添加顺序
End of explanation
import os
user = friends[0]["PYQuanPin"][0:]
print(user)
os.mkdir(user)
os.chdir(user)
os.getcwd()
Explanation: 创建一个目录,用来保存所有好友头像。注意使用os.chdir(user)切换个到工作目录,方便后续保存图片。
End of explanation
for i in friends:
try:
i['img'] = itchat.get_head_img(userName=i["UserName"])
i['ImgName']=i["UserName"][1:] + ".jpg"
except ConnectionError:
print('get '+i["UserName"][1:]+' fail')
fileImage=open(i['ImgName'],'wb')
fileImage.write(i['img'])
fileImage.close()
#这里不建议看friends[0],太长了
Explanation: 批量下载好友头像,储存到friends[i]['img']中。然后我们print(friends[0]看看有没有变化(正常情况下应该可以看到增加了img,以二进制方式储存头像)。因为我家网络经常链接失败,所以用try...except...来写这一段。
"UserName"这个字段开头总有个@符号,直接暴力去除。如果不喜欢的话,可以把"UserName"换成"PYQuanPin",不过不保证重名。
同时把头像保存在user目录地下,方便下一步使用。
python使用open经常报错:TypeError: an integer is required的解决方案
错误是由于从os模块引入了所有的函数导致的,os模块下有一个open函数,接受整型的文件描述符和打开模式,from os import *引入os模块的open函数,覆盖了python内建的open函数,导致错误。删除from os import *这行,然后再根据需要,指定引入os模块下的函数
End of explanation
friendsSum=len(friends)
imgList=os.listdir(os.getcwd())
numImages=len(imgList)
print('I have ',friendsSum,'friend(s), and I got ',numImages,'image(s)')
Explanation: 看看我有多少个好友(friends里面有多少条记录),看看下载了多少头像(os.listdir(os.getcwd())看目录底下有多少文件)
End of explanation
import math
eachSize=64
eachLine=int(math.sqrt(numImages))+1
print("单个图像边长",eachSize,"像素,一行",eachLine,"个头像,最终图像边长",eachSize*eachLine)
Explanation: 单个图像边长eachsize=64像素,一行eachline=int(sqrt(numImages))+1个头像,最终图像边长eachSize*eachline
End of explanation
import PIL.Image as Image
toImage = Image.new('RGBA', (eachSize*eachLine,eachSize*eachLine))#新建一块画布
x = 0
y = 0
for i in imgList:
try:
img = Image.open(i)#打开图片
except IOError:
print("Error: 没有找到文件或读取文件失败",i)
else:
img = img.resize((eachSize, eachSize), Image.ANTIALIAS)#缩小图片
toImage.paste(img, (x * eachSize, y * eachSize))#拼接图片
x += 1
if x == eachLine:
x = 0
y += 1
print("图像拼接完成")
Explanation: import图像处理Python图像处理库:PIL中Image
1. 新建一块画布
2. 在坐标(0,0)位置放上第一个人头像
3. 向右平移坐标
End of explanation
toImage.show()
os.chdir(os.path.pardir)
os.getcwd()
toImage.save(friends[0]["PYQuanPin"][0:]+".jpg")
itchat.send_image(friends[0]["PYQuanPin"][0:]+".jpg", 'filehelper')
Explanation: 看一下拼接好的图像是什么样的(注意文件过大是常有的现象,请先去掉注释)
回到上一级目录(没人想在一堆文件里面找拼图吧?)
然后保存文件,顺带发送给文件传输助手
End of explanation
itchat.logout()
Explanation: 至此,大功告成,别忘记退出网页版微信
End of explanation |
2,312 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Worapol B. and hamuel.me reserved some right maybe hahaha
for muic math club and muic student that want to use this as references
Import as DF
From the data seen below we will use "master" section subject only and we will use the number of student in "registered" not actual registered because registered include both master and joint section this will eliminate duplicate section we also remove subject that does not specify the date and time
Step1: Here we want to generate a histogram that is in the following format
[t1 , t2, ..., tn]
Here t1 could is the time from 8 - 9
the following is the logic in putting the subject in the correct time freq
we use the example of 8-10 we round up the 50 to 60
We aim to plot a histogram from Monday to Friday
Step2: Histogram for Monday
monday frequency of people in classes
Step3: Histogram for Tuesday | Python Code:
df = pd.read_csv('t2_2016.csv')
df = df[df['Type'] == 'master']
df.head()
#format [Day, start_time, end_time]
def time_extract(s):
s = str(s).strip().split(" "*16)
def helper(s):
try:
temp = s.strip().split(" ")[1:]
comb = temp[:2] + temp[3:]
comb[0] = comb[0][1:]
comb[2] = comb[2][:-1]
return comb
except:
temp = s.strip().split(" ")
comb = temp[:2] + temp[3:]
comb[0] = comb[0][1:]
comb[2] = comb[2][:-1]
return comb
top = helper(s[0])
if len(s) > 1:
bottom = helper(s[1])
return top, bottom
return top
# df.iloc[791]
# time_extract(df['Room/Time'][791])
tdf = df[df['Room/Time'].notnull()]['Room/Time']
tdf.apply(time_extract)[:10]
Explanation: Worapol B. and hamuel.me reserved some right maybe hahaha
for muic math club and muic student that want to use this as references
Import as DF
From the data seen below we will use "master" section subject only and we will use the number of student in "registered" not actual registered because registered include both master and joint section this will eliminate duplicate section we also remove subject that does not specify the date and time
End of explanation
def normalize_time(t):
temp = t.split(":")
h = int(temp[0]) * 60
m = 60 if int(temp[1]) == 50 else 0
return int(h + m)
Explanation: Here we want to generate a histogram that is in the following format
[t1 , t2, ..., tn]
Here t1 could is the time from 8 - 9
the following is the logic in putting the subject in the correct time freq
we use the example of 8-10 we round up the 50 to 60
We aim to plot a histogram from Monday to Friday
End of explanation
def gen_hist(day):
filtered = []
for i,d in zip(tdf.index, tdf.apply(time_extract)):
if len(d) == 2:
for dd in d:
if dd[0] == day:
filtered.append((i, dd))
else:
if d[0] == day:
filtered.append((i, d))
hist = []
for i, d in filtered:
start = normalize_time(d[1])
end = normalize_time(d[2])
cc = start
while cc <= end:
for f in range(df['Registered'][i]):
hist.append(cc/60)
cc += 60
plt.title("Student studying on " + day)
plt.ylabel("Frequency")
plt.xlabel("Time in hours")
plt.hist(hist, bins=11);
# return hist
gen_hist('Mon')
Explanation: Histogram for Monday
monday frequency of people in classes
End of explanation
gen_hist('Tue')
gen_hist('Wed')
gen_hist('Thu')
gen_hist('Fri')
gen_hist('Sat')
Explanation: Histogram for Tuesday
End of explanation |
2,313 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 align="center">Introduction to SimpleITKv4 Registration - Continued</h1>
ITK v4 Registration Components
<img src="ITKv4RegistrationComponentsDiagram.svg" style="width
Step3: Utility functions
A number of utility functions, saving a transform and corresponding resampled image, callback for selecting a
DICOM series from several series found in the same directory.
Step4: Loading Data
In this notebook we will work with CT and MR scans of the CIRS 057A multi-modality abdominal phantom. The scans are multi-slice DICOM images. The data is stored in a zip archive which is automatically retrieved and extracted when we request a file which is part of the archive.
Step5: Initial Alignment
A reasonable guesstimate for the initial translational alignment can be obtained by using
the CenteredTransformInitializer (functional interface to the CenteredTransformInitializerFilter).
The resulting transformation is centered with respect to the fixed image and the
translation aligns the centers of the two images. There are two options for
defining the centers of the images, either the physical centers
of the two data sets (GEOMETRY), or the centers defined by the intensity
moments (MOMENTS).
Two things to note about this filter, it requires the fixed and moving image
have the same type even though it is not algorithmically required, and its
return type is the generic SimpleITK.Transform.
Step6: Look at the transformation, what type is it?
Step7: Final registration
Version 1
<ul>
<li> Single scale (not using image pyramid).</li>
<li> Initial transformation is not modified in place.</li>
</ul>
<ol>
<li>
Illustrate the need for scaling the step size differently for each parameter
Step8: Look at the final transformation, what type is it?
Step9: Version 1.1
The previous example illustrated the use of the ITK v4 registration framework in an ITK v3 manner. We only referred to a single transformation which was what we optimized.
In ITK v4 the registration method accepts three transformations (if you look at the diagram above you will only see two transformations, Moving transform represents $T_{opt} \circ T_m$)
Step10: Look at the final transformation, what type is it? Why is it differnt from the previous example?
Step11: Version 2
<ul>
<li> Multi scale - specify both scale, and how much to smooth with respect to original image.</li>
<li> Initial transformation modified in place, so in the end we have the same type of transformation in hand.</li>
</ul>
Step12: Look at the final transformation, what type is it? | Python Code:
import SimpleITK as sitk
# Utility method that either downloads data from the network or
# if already downloaded returns the file name for reading from disk (cached data).
from downloaddata import fetch_data as fdata
# Always write output to a separate directory, we don't want to pollute the source directory.
import os
OUTPUT_DIR = 'Output'
# GUI components (sliders, dropdown...).
from ipywidgets import interact, fixed
# Enable display of html.
from IPython.display import display, HTML
# Plots will be inlined.
%matplotlib inline
# Callbacks for plotting registration progress.
import registration_callbacks
Explanation: <h1 align="center">Introduction to SimpleITKv4 Registration - Continued</h1>
ITK v4 Registration Components
<img src="ITKv4RegistrationComponentsDiagram.svg" style="width:700px"/><br><br>
Before starting with this notebook, please go over the first introductory notebook found here.
In this notebook we will visually assess registration by viewing the overlap between images using external viewers.
The two viewers we recommend for this task are ITK-SNAP and 3D Slicer. ITK-SNAP supports concurrent linked viewing between multiple instances of the program. 3D Slicer supports concurrent viweing of multiple volumes via alpha blending.
End of explanation
def save_transform_and_image(transform, fixed_image, moving_image, outputfile_prefix):
Write the given transformation to file, resample the moving_image onto the fixed_images grid and save the
result to file.
Args:
transform (SimpleITK Transform): transform that maps points from the fixed image coordinate system to the moving.
fixed_image (SimpleITK Image): resample onto the spatial grid defined by this image.
moving_image (SimpleITK Image): resample this image.
outputfile_prefix (string): transform is written to outputfile_prefix.tfm and resampled image is written to
outputfile_prefix.mha.
resample = sitk.ResampleImageFilter()
resample.SetReferenceImage(fixed_image)
# SimpleITK supports several interpolation options, we go with the simplest that gives reasonable results.
resample.SetInterpolator(sitk.sitkLinear)
resample.SetTransform(transform)
sitk.WriteImage(resample.Execute(moving_image), outputfile_prefix+'.mha')
sitk.WriteTransform(transform, outputfile_prefix+'.tfm')
def DICOM_series_dropdown_callback(fixed_image, moving_image, series_dictionary):
Callback from dropbox which selects the two series which will be used for registration.
The callback prints out some information about each of the series from the meta-data dictionary.
For a list of all meta-dictionary tags and their human readable names see DICOM standard part 6,
Data Dictionary (http://medical.nema.org/medical/dicom/current/output/pdf/part06.pdf)
# The callback will update these global variables with the user selection.
global selected_series_fixed
global selected_series_moving
img_fixed = sitk.ReadImage(series_dictionary[fixed_image][0])
img_moving = sitk.ReadImage(series_dictionary[moving_image][0])
# There are many interesting tags in the DICOM data dictionary, display a selected few.
tags_to_print = {'0010|0010': 'Patient name: ',
'0008|0060' : 'Modality: ',
'0008|0021' : 'Series date: ',
'0008|0031' : 'Series time:',
'0008|0070' : 'Manufacturer: '}
html_table = []
html_table.append('<table><tr><td><b>Tag</b></td><td><b>Fixed Image</b></td><td><b>Moving Image</b></td></tr>')
for tag in tags_to_print:
fixed_tag = ''
moving_tag = ''
try:
fixed_tag = img_fixed.GetMetaData(tag)
except: # ignore if the tag isn't in the dictionary
pass
try:
moving_tag = img_moving.GetMetaData(tag)
except: # ignore if the tag isn't in the dictionary
pass
html_table.append('<tr><td>' + tags_to_print[tag] +
'</td><td>' + fixed_tag +
'</td><td>' + moving_tag + '</td></tr>')
html_table.append('</table>')
display(HTML(''.join(html_table)))
selected_series_fixed = fixed_image
selected_series_moving = moving_image
Explanation: Utility functions
A number of utility functions, saving a transform and corresponding resampled image, callback for selecting a
DICOM series from several series found in the same directory.
End of explanation
data_directory = os.path.dirname(fdata("CIRS057A_MR_CT_DICOM/readme.txt"))
# 'selected_series_moving/fixed' will be updated by the interact function.
selected_series_fixed = ''
selected_series_moving = ''
# Directory contains multiple DICOM studies/series, store the file names
# in dictionary with the key being the seriesID.
reader = sitk.ImageSeriesReader()
series_file_names = {}
series_IDs = reader.GetGDCMSeriesIDs(data_directory) #list of all series
if series_IDs: #check that we have at least one series
for series in series_IDs:
series_file_names[series] = reader.GetGDCMSeriesFileNames(data_directory, series)
interact(DICOM_series_dropdown_callback, fixed_image=series_IDs, moving_image =series_IDs, series_dictionary=fixed(series_file_names));
else:
print('This is surprising, data directory does not contain any DICOM series.')
# Actually read the data based on the user's selection.
reader.SetFileNames(series_file_names[selected_series_fixed])
fixed_image = reader.Execute()
reader.SetFileNames(series_file_names[selected_series_moving])
moving_image = reader.Execute()
# Save images to file and view overlap using external viewer.
sitk.WriteImage(fixed_image, os.path.join(OUTPUT_DIR, "fixedImage.mha"))
sitk.WriteImage(moving_image, os.path.join(OUTPUT_DIR, "preAlignment.mha"))
Explanation: Loading Data
In this notebook we will work with CT and MR scans of the CIRS 057A multi-modality abdominal phantom. The scans are multi-slice DICOM images. The data is stored in a zip archive which is automatically retrieved and extracted when we request a file which is part of the archive.
End of explanation
initial_transform = sitk.CenteredTransformInitializer(sitk.Cast(fixed_image,moving_image.GetPixelIDValue()),
moving_image,
sitk.Euler3DTransform(),
sitk.CenteredTransformInitializerFilter.GEOMETRY)
# Save moving image after initial transform and view overlap using external viewer.
save_transform_and_image(initial_transform, fixed_image, moving_image, os.path.join(OUTPUT_DIR, "initialAlignment"))
Explanation: Initial Alignment
A reasonable guesstimate for the initial translational alignment can be obtained by using
the CenteredTransformInitializer (functional interface to the CenteredTransformInitializerFilter).
The resulting transformation is centered with respect to the fixed image and the
translation aligns the centers of the two images. There are two options for
defining the centers of the images, either the physical centers
of the two data sets (GEOMETRY), or the centers defined by the intensity
moments (MOMENTS).
Two things to note about this filter, it requires the fixed and moving image
have the same type even though it is not algorithmically required, and its
return type is the generic SimpleITK.Transform.
End of explanation
print(initial_transform)
Explanation: Look at the transformation, what type is it?
End of explanation
registration_method = sitk.ImageRegistrationMethod()
registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)
registration_method.SetMetricSamplingPercentage(0.01)
registration_method.SetInterpolator(sitk.sitkLinear)
registration_method.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=100)
# Scale the step size differently for each parameter, this is critical!!!
registration_method.SetOptimizerScalesFromPhysicalShift()
registration_method.SetInitialTransform(initial_transform, inPlace=False)
registration_method.AddCommand(sitk.sitkStartEvent, registration_callbacks.metric_start_plot)
registration_method.AddCommand(sitk.sitkEndEvent, registration_callbacks.metric_end_plot)
registration_method.AddCommand(sitk.sitkIterationEvent,
lambda: registration_callbacks.metric_plot_values(registration_method))
final_transform_v1 = registration_method.Execute(sitk.Cast(fixed_image, sitk.sitkFloat32),
sitk.Cast(moving_image, sitk.sitkFloat32))
print('Optimizer\'s stopping condition, {0}'.format(registration_method.GetOptimizerStopConditionDescription()))
print('Final metric value: {0}'.format(registration_method.GetMetricValue()))
# Save moving image after registration and view overlap using external viewer.
save_transform_and_image(final_transform_v1, fixed_image, moving_image, os.path.join(OUTPUT_DIR, "finalAlignment-v1"))
Explanation: Final registration
Version 1
<ul>
<li> Single scale (not using image pyramid).</li>
<li> Initial transformation is not modified in place.</li>
</ul>
<ol>
<li>
Illustrate the need for scaling the step size differently for each parameter:
<ul>
<li> SetOptimizerScalesFromIndexShift - estimated from maximum shift of voxel indexes (only use if data is isotropic).</li>
<li> SetOptimizerScalesFromPhysicalShift - estimated from maximum shift of physical locations of voxels.</li>
<li> SetOptimizerScalesFromJacobian - estimated from the averaged squared norm of the Jacobian w.r.t. parameters.</li>
</ul>
</li>
<li>
Look at the optimizer's stopping condition to ensure we have not terminated prematurely.
</li>
</ol>
End of explanation
print(final_transform_v1)
Explanation: Look at the final transformation, what type is it?
End of explanation
registration_method = sitk.ImageRegistrationMethod()
registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)
registration_method.SetMetricSamplingPercentage(0.01)
registration_method.SetInterpolator(sitk.sitkLinear)
registration_method.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=100)
registration_method.SetOptimizerScalesFromPhysicalShift()
# Set the initial moving and optimized transforms.
optimized_transform = sitk.Euler3DTransform()
registration_method.SetMovingInitialTransform(initial_transform)
registration_method.SetInitialTransform(optimized_transform)
registration_method.AddCommand(sitk.sitkStartEvent, registration_callbacks.metric_start_plot)
registration_method.AddCommand(sitk.sitkEndEvent, registration_callbacks.metric_end_plot)
registration_method.AddCommand(sitk.sitkIterationEvent,
lambda: registration_callbacks.metric_plot_values(registration_method))
registration_method.Execute(sitk.Cast(fixed_image, sitk.sitkFloat32),
sitk.Cast(moving_image, sitk.sitkFloat32))
# Need to compose the transformations after registration.
final_transform_v11 = sitk.Transform(optimized_transform)
final_transform_v11.AddTransform(initial_transform)
print('Optimizer\'s stopping condition, {0}'.format(registration_method.GetOptimizerStopConditionDescription()))
print('Final metric value: {0}'.format(registration_method.GetMetricValue()))
# Save moving image after registration and view overlap using external viewer.
save_transform_and_image(final_transform_v11, fixed_image, moving_image, os.path.join(OUTPUT_DIR, "finalAlignment-v1.1"))
Explanation: Version 1.1
The previous example illustrated the use of the ITK v4 registration framework in an ITK v3 manner. We only referred to a single transformation which was what we optimized.
In ITK v4 the registration method accepts three transformations (if you look at the diagram above you will only see two transformations, Moving transform represents $T_{opt} \circ T_m$):
<ul>
<li>
SetInitialTransform, $T_{opt}$ - composed with the moving initial transform, maps points from the virtual image domain to the moving image domain, modified during optimization.
</li>
<li>
SetFixedInitialTransform $T_f$- maps points from the virtual image domain to the fixed image domain, never modified.
</li>
<li>
SetMovingInitialTransform $T_m$- maps points from the virtual image domain to the moving image domain, never modified.
</li>
</ul>
The transformation that maps points from the fixed to moving image domains is thus: $^M\mathbf{p} = T_{opt}(T_m(T_f^{-1}(^F\mathbf{p})))$
We now modify the previous example to use $T_{opt}$ and $T_m$.
End of explanation
print(final_transform_v11)
Explanation: Look at the final transformation, what type is it? Why is it differnt from the previous example?
End of explanation
registration_method = sitk.ImageRegistrationMethod()
registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)
registration_method.SetMetricSamplingPercentage(0.01)
registration_method.SetInterpolator(sitk.sitkLinear)
registration_method.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=100) #, estimateLearningRate=registration_method.EachIteration)
registration_method.SetOptimizerScalesFromPhysicalShift()
final_transform = sitk.Euler3DTransform(initial_transform)
registration_method.SetInitialTransform(final_transform)
registration_method.SetShrinkFactorsPerLevel(shrinkFactors = [4,2,1])
registration_method.SetSmoothingSigmasPerLevel(smoothingSigmas = [2,1,0])
registration_method.SmoothingSigmasAreSpecifiedInPhysicalUnitsOn()
registration_method.AddCommand(sitk.sitkStartEvent, registration_callbacks.metric_start_plot)
registration_method.AddCommand(sitk.sitkEndEvent, registration_callbacks.metric_end_plot)
registration_method.AddCommand(sitk.sitkMultiResolutionIterationEvent,
registration_callbacks.metric_update_multires_iterations)
registration_method.AddCommand(sitk.sitkIterationEvent,
lambda: registration_callbacks.metric_plot_values(registration_method))
registration_method.Execute(sitk.Cast(fixed_image, sitk.sitkFloat32),
sitk.Cast(moving_image, sitk.sitkFloat32))
print('Optimizer\'s stopping condition, {0}'.format(registration_method.GetOptimizerStopConditionDescription()))
print('Final metric value: {0}'.format(registration_method.GetMetricValue()))
# Save moving image after registration and view overlap using external viewer.
save_transform_and_image(final_transform, fixed_image, moving_image, os.path.join(OUTPUT_DIR, 'finalAlignment-v2'))
Explanation: Version 2
<ul>
<li> Multi scale - specify both scale, and how much to smooth with respect to original image.</li>
<li> Initial transformation modified in place, so in the end we have the same type of transformation in hand.</li>
</ul>
End of explanation
print(final_transform)
Explanation: Look at the final transformation, what type is it?
End of explanation |
2,314 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Variant calling with kevlar
Step1: Generate a random genome
Rather than generating a truly random genome, I wanted one that shared some compositional features with the human genome.
I used the nuclmm package to train a 6th-order Markov model of nucleotide composition on the human genome, and then use this to simulate a 2.5 Mb random genome maintaining the same composition.
Downloading the human genome and running nuclmm train are time intensive, so I've provided the associated commands here only as comments.
Step2: Simulate a trio
The files [proband|mother|father]-mutations.tsv contain lists of mutations to apply to the reference genome for each simulated sample.
The proband shares 3 mutations with each parent, and has 10 unique mutations.
The kevlar mutate command applies the mutations to the provided reference genome to create the mutated genome.
Step3: "Sequence" the genomes
Use wgsim to simulate Illumina sequencing of each sample, with a small error rate.
Step4: Dump boring reads
Discarding reads that match the reference genome perfectly eliminates many k-mers and allows us to count the remaining k-mers accurately with much less memory.
Typically kevlar dump would operate on BAM files, but here I'm processing the bwa SAM output directly and skipping kevlar dump.
Step5: Count all remaining k-mers
First control sample uses full 100M for counting. All subsequent samples check against the first control before counting (no need to count if k-mer is already disqualified in first sample), thus requiring only 100Mb x 0.25 = 25Mb for counting.
Step6: Identify "interesting" k-mers
Select k-mers that are high abundance (> 8) in the proband and effectively absent (<= 1) in each control.
Print the reads that contain these k-mers.
Step7: Filter "interesting" k-mers
Recompute k-mer abundances with a much smaller amount of input data. In normal circumstances you'd be able to achieve an effective FPR = 0.0 with much less memory than in the kevlar novel step, but here I was just lazy and used the same.
Step8: Partition reads by shared "interesting" k-mers
Here we expect to see 10 connected components, corresponding to the 10 mutations unique to the proband.
Step9: Assemble each partition
Perform abundance trimming on reads from each partition and then assemble.
Each contig (or set of contigs) should be reflective of a distinct variant! | Python Code:
from __future__ import print_function
import subprocess
import kevlar
import random
import sys
def gen_muts():
locs = [random.randint(0, 2500000) for _ in range(10)]
types = [random.choice(['snv', 'ins', 'del', 'inv']) for _ in range(10)]
for l, t in zip(locs, types):
if t == 'snv':
value = random.randint(1, 3)
elif t == 'ins':
length = random.randint(20, 200)
value = ''.join(random.choice('ACGT') for _ in range(length))
elif t == 'del':
value = random.randint(20, 200)
else:
value = random.randint(50, 900)
print(l, t, value, sep='\t')
Explanation: Variant calling with kevlar: human simulation "pico"
At this stage, kevlar takes quite a bit of time to run on human-sized data sets.
To facilitate more rapid method development, I needed a small test data set that can be processed quickly.
And while faithfully modeling a eukaryotic genome in all of its repetitive glory is extremely complicated, I wanted to at least capture a couple of features realistically in this simulation: higher-order nucleotide composition, and shared versus unique variants.
In brief, this notebook shows how I simulated a 2.5 Mb "reference" genome, simulated a trio of 3 individuals from that reference genome, and then invoked the kevlar workflow to identify variants.
Technical preliminaries
Nothing interesting to see here.
End of explanation
# !wget ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/technical/reference/GRCh38_reference_genome/GRCh38_full_analysis_set_plus_decoy_hla.fa
# !pip install git+https://github.com/standage/nuclmm.git
# !nuclmm train --order 6 --out human.order6.mm GRCh38_full_analysis_set_plus_decoy_hla.fa
# !nuclmm simulate --out human.random.fa --order 6 --numseqs 1 --seqlen 2500000 --seed 42 human.order6.mm
Explanation: Generate a random genome
Rather than generating a truly random genome, I wanted one that shared some compositional features with the human genome.
I used the nuclmm package to train a 6th-order Markov model of nucleotide composition on the human genome, and then use this to simulate a 2.5 Mb random genome maintaining the same composition.
Downloading the human genome and running nuclmm train are time intensive, so I've provided the associated commands here only as comments.
End of explanation
arglist = ['mutate', '-o', 'proband-genome.fa', 'proband-mutations.tsv', 'human.random.fa']
args = kevlar.cli.parser().parse_args(arglist)
kevlar.mutate.main(args)
arglist = ['mutate', '-o', 'mother-genome.fa', 'mother-mutations.tsv', 'human.random.fa']
args = kevlar.cli.parser().parse_args(arglist)
kevlar.mutate.main(args)
arglist = ['mutate', '-o', 'father-genome.fa', 'father-mutations.tsv', 'human.random.fa']
args = kevlar.cli.parser().parse_args(arglist)
kevlar.mutate.main(args)
Explanation: Simulate a trio
The files [proband|mother|father]-mutations.tsv contain lists of mutations to apply to the reference genome for each simulated sample.
The proband shares 3 mutations with each parent, and has 10 unique mutations.
The kevlar mutate command applies the mutations to the provided reference genome to create the mutated genome.
End of explanation
random.seed(55555555)
# wgsim uses an `int` type for its seed value
# Using extremely large integer values led to non-deterministic behavior,
# so I'm just using what can fit in a 16-bit integer here.
maxint = 65535
seed = random.randint(1, maxint)
cmd = 'wgsim -e 0.005 -r 0.0 -d 450 -s 50 -N 375000 -1 100 -2 100 -S {} proband-genome.fa proband-reads-1.fq proband-reads-2.fq'.format(seed)
_ = subprocess.check_call(cmd, shell=True)
seed = random.randint(1, maxint)
cmd = 'wgsim -e 0.005 -r 0.0 -d 450 -s 50 -N 375000 -1 100 -2 100 -S {} mother-genome.fa mother-reads-1.fq mother-reads-2.fq'.format(seed)
_ = subprocess.check_call(cmd, shell=True)
seed = random.randint(1, maxint)
cmd = 'wgsim -e 0.005 -r 0.0 -d 450 -s 50 -N 375000 -1 100 -2 100 -S {} father-genome.fa father-reads-1.fq father-reads-2.fq'.format(seed)
_ = subprocess.check_call(cmd, shell=True)
Explanation: "Sequence" the genomes
Use wgsim to simulate Illumina sequencing of each sample, with a small error rate.
End of explanation
!time bwa index human.random.fa
!time bwa mem human.random.fa proband-reads-[1,2].fq 2> proband-bwa.log | samtools view | perl -ne 'print if !/\t\d+M\t/ || !/NM:i:0/' | perl -ane '$suffix = $F[1] & 64 ? "/1" : "/2"; print "\@" . "$F[0]$suffix\n$F[9]\n+\n$F[10]\n"' | gzip -c > proband-reads-dump.fq.gz
!time bwa mem human.random.fa mother-reads-[1,2].fq 2> mother-bwa.log | samtools view | perl -ne 'print if !/\t\d+M\t/ || !/NM:i:0/' | perl -ane '$suffix = $F[1] & 64 ? "/1" : "/2"; print "\@" . "$F[0]$suffix\n$F[9]\n+\n$F[10]\n"' | gzip -c > mother-reads-dump.fq.gz
!time bwa mem human.random.fa father-reads-[1,2].fq 2> father-bwa.log | samtools view | perl -ne 'print if !/\t\d+M\t/ || !/NM:i:0/' | perl -ane '$suffix = $F[1] & 64 ? "/1" : "/2"; print "\@" . "$F[0]$suffix\n$F[9]\n+\n$F[10]\n"' | gzip -c > father-reads-dump.fq.gz
Explanation: Dump boring reads
Discarding reads that match the reference genome perfectly eliminates many k-mers and allows us to count the remaining k-mers accurately with much less memory.
Typically kevlar dump would operate on BAM files, but here I'm processing the bwa SAM output directly and skipping kevlar dump.
End of explanation
arglist = ['count', '--ksize', '25', '--memory', '100M', '--mem-frac', '0.25',
'--case', 'proband.counttable', 'proband-reads-dump.fq.gz',
'--control', 'father.counttable', 'father-reads-dump.fq.gz',
'--control', 'mother.counttable', 'mother-reads-dump.fq.gz']
args = kevlar.cli.parser().parse_args(arglist)
kevlar.count.main(args)
Explanation: Count all remaining k-mers
First control sample uses full 100M for counting. All subsequent samples check against the first control before counting (no need to count if k-mer is already disqualified in first sample), thus requiring only 100Mb x 0.25 = 25Mb for counting.
End of explanation
arglist = ['novel', '--ctrl-max', '1', '--case-min', '8', '--ksize', '25',
'--case', 'proband-reads-dump.fq.gz', '--case-counts', 'proband.counttable',
'--control-counts', 'mother.counttable', 'father.counttable',
'--out', 'proband.novel.augfastq.gz']
args = kevlar.cli.parser().parse_args(arglist)
kevlar.novel.main(args)
Explanation: Identify "interesting" k-mers
Select k-mers that are high abundance (> 8) in the proband and effectively absent (<= 1) in each control.
Print the reads that contain these k-mers.
End of explanation
arglist = ['filter', '--refr', 'human.random.fa', '--refr-memory', '50M', '--refr-max-fpr', '0.001',
'--abund-memory', '10M', '--abund-max-fpr', '0.001', '--min-abund', '8',
'--out', 'proband.novel.filtered.fq.gz', '--aug-out', 'proband.novel.filtered.augfastq.gz',
'--ksize', '25', 'proband.novel.augfastq.gz']
args = kevlar.cli.parser().parse_args(arglist)
kevlar.filter.main(args)
Explanation: Filter "interesting" k-mers
Recompute k-mer abundances with a much smaller amount of input data. In normal circumstances you'd be able to achieve an effective FPR = 0.0 with much less memory than in the kevlar novel step, but here I was just lazy and used the same.
End of explanation
arglist = ['partition', 'part', 'proband.novel.filtered.augfastq.gz']
args = kevlar.cli.parser().parse_args(arglist)
kevlar.partition.main(args)
Explanation: Partition reads by shared "interesting" k-mers
Here we expect to see 10 connected components, corresponding to the 10 mutations unique to the proband.
End of explanation
for i in range(10):
print('\n\n==== iter {i} ===='.format(i=i), file=sys.stderr)
# Strip interesting k-mer annotations
cmd = "gunzip -c part.cc{i}.augfastq.gz | grep -v '#$' | gzip -c > part.cc{i}.fq.gz".format(i=i)
subprocess.check_call(cmd, shell=True)
# Perform trimming
cmd = "trim-low-abund.py -M 50M -k 25 --output part.cc{i}.trim.fq.gz --gzip part.cc{i}.fq.gz 2> part.cc{i}.trim.log".format(i=i)
subprocess.check_call(cmd, shell=True)
# Re-annotate interesting k-mers
arglist = ['reaugment', '--out', 'part.cc{i}.trim.augfastq.gz'.format(i=i),
'part.cc{i}.augfastq.gz'.format(i=i), 'part.cc{i}.trim.fq.gz'.format(i=i)]
args = kevlar.cli.parser().parse_args(arglist)
kevlar.reaugment.main(args)
# Assemble
arglist = ['assemble', '--out', 'part.cc{i}.asmbl.augfasta.gz'.format(i=i),
'part.cc{i}.trim.augfastq.gz'.format(i=i)]
args = kevlar.cli.parser().parse_args(arglist)
kevlar.assemble.main(args)
# Plain Fasta file for convenience with downstream analysis.
cmd = "gunzip -c part.cc{i}.asmbl.augfasta.gz | grep -v '#$' > part.cc{i}.fa".format(i=i)
subprocess.check_call(cmd, shell=True)
Explanation: Assemble each partition
Perform abundance trimming on reads from each partition and then assemble.
Each contig (or set of contigs) should be reflective of a distinct variant!
End of explanation |
2,315 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial Overview
This is part three of the tutorial where you will learn how to run same code in Part One (with minor changes) in Google's new Vertex AI pipeline. Vertex Pipelines helps you to automate, monitor, and govern your ML systems by orchestrating your ML workflow in a serverless manner, and storing your workflow's artifacts using Vertex ML Metadata. By storing the artifacts of your ML workflow in Vertex ML Metadata, you can analyze the lineage of your workflow's artifacts — for example, an ML model's lineage may include the training data, hyperparameters, and code that were used to create the model.
You will also learn how to export the final BQML model and hosted on the Google Vertex AI Endpoint.
Prerequisites
Download the Expedia Hotel Recommendation Dataset from Kaggle. You will be mostly working with the train.csv dataset for this tutorial
Upload the dataset to BigQuery by following the how-to guide Loading CSV Data
Follow the how-to guide create flex slots, reservation and assignment in BigQuery for training ML models. <strong>Make sure to create Flex slots and not month/year slots so you can delete them after the tutorial.</strong>
Build and push a docker image using this dockerfile as the base image for the Kubeflow pipeline components.
Create or use a Google Cloud Storage bucket to export the finalized model to. <strong>Make sure to create the bucket in the same region where you will create Vertex AI Endpoint to host your model.</strong>
If you do not specify a service account, Vertex Pipelines uses the Compute Engine default service account to run your pipelines. The Compute Engine default service account has the Project Editor role by default so it should have access to BigQuery as well as Google Cloud Storage.
Change the following cell to reflect your setup
Step2: Create BigQuery function
Create a generic BigQuery function that runs a BigQuery query and returns the table/model created. This will be re-used to return BigQuery results for all the different segments of the BigQuery process in the Kubeflow Pipeline. You will see later in the tutorial where this function is being passed as parameter (ddlop) to other functions to perform certain BigQuery operation.
Step5: Train Matrix Factorization model and evaluate it
We will start by training a matrix factorization model that will allow us to understand the latent relationship between user and hotel clusters. The reason why we are doing this is because matrix factorization approach can only find latent relationship between a user and a hotel. However, there are other intuitive useful predictors (such as is_mobile, location, and etc) that can improve the model performance. So togther, we can feed the resulting weights/factors as features among with other features to train the final XGBoost model.
Step8: Creating embedding features for users and hotels
We will use the matrix factorization model to create corresponding user factors, hotel factors and embed them together with additional features such as total visits and distinct cities to create a new training dataset to an XGBoost classifier which will try to predict the the likelihood of booking for any user/hotel combination. Also note that we aggregated and grouped the orginal dataset by user_id.
Step10: Function below combines all the features selected (total_mobile_searches) and engineered (user factors and hotel factors) into a training dataset for the XGBoost classifier. Note the target variable is rating which is converted into a binary classfication.
Step11: We will create a couple of BigQuery user-defined functions (UDF) to convert arrays to a struct and its array elements are the fields in the struct. <strong>Be sure to change the BigQuery dataset name to your dataset name. </strong>
Step14: Train XGBoost model and evaluate it
Step15: Export XGBoost model and host it on Vertex AI
One of the nice features of BigQuery ML is the ability to import and export machine learning models. In the function defined below, we are going to export the trained XGBoost model to a Google Cloud Storage bucket. We will later have Google Cloud AI Platform host this model for predictions. It is worth mentioning that you can host this model on any platform that supports Booster (XGBoost 0.82). Check out the documentation for more information on exporting BigQuery ML models and their formats.
Step16: Defining the Kubeflow Pipelines
Now we have the necessary functions defined, we are now ready to create a workflow using Kubeflow Pipeline. The workflow implemented by the pipeline is defined using a Python based Domain Specific Language (DSL).
The pipeline's DSL has been designed to avoid hardcoding any environment specific settings like file paths or connection strings. These settings are provided to the pipeline code through a set of environment variables.
The pipeline performs the following steps -
* Trains a Matrix Factorization model
* Evaluates the trained Matrix Factorization model and if the Mean Square Error is less than threadshold, it will continue to the next step, otherwise, the pipeline will stop
* Engineers new user factors feature with the Matrix Factorization model
* Engineers new hotel factors feature with the Matrix Factorization model
* Combines all the features selected (total_mobile_searches) and engineered (user factors and hotel factors) into a training dataset for the XGBoost classifier
* Trains a XGBoost classifier
* Evalutes the trained XGBoost model and if the ROC AUC score is more than threadshold, it will continue to the next step, otherwise, the pipeline will stop
* Exports the XGBoost model to a Google Cloud Storage bucket
* Deploys the XGBoost model from the Google Cloud Storage bucket to Google Cloud AI Platform for prediction
Step17: Submitting pipeline runs
You can trigger pipeline runs using an API from the KFP SDK or using KFP CLI. To submit the run using KFP CLI, execute the following commands. Notice how the pipeline's parameters are passed to the pipeline run. | Python Code:
PATH=%env PATH
%env PATH={PATH}:/home/jupyter/.local/bin
# CHANGE the following settings
BASE_IMAGE='gcr.io/your-image-name' #This is the image built from the Dockfile in the same folder
REGION='vertex-ai-region' #For example, us-central1, note that Vertex AI endpoint deployment region must match MODEL_STORAGE bucket region
MODEL_STORAGE = 'gs://your-bucket-name/folder-name' #Make sure this bucket is created in the same region defined above
BQ_DATASET_NAME="hotel_recommendations" #This is the name of the target dataset where you model and predictions will be stored
PROJECT_ID="your-project-id" #This is your GCP project ID that can be found in the GCP console
# Required Parameters for Vertex AI
USER = 'your-user-name'
BUCKET_NAME = 'your-bucket-name'
PIPELINE_ROOT = 'gs://{}/pipeline_root/{}'.format(BUCKET_NAME, USER) #Cloud Storage URI that your pipelines service account can access.
ENDPOINT_NAME='bqml-hotel-recommendations' #Vertex AI Endpoint Name
DEPLOY_COMPUTE='n1-standard-4' #Could be any supported Vertex AI Instance Types
DEPLOY_IMAGE='us-docker.pkg.dev/vertex-ai/prediction/xgboost-cpu.0-82:latest'#Do not change, BQML XGBoost is currently compatible with 0.82
print('PIPELINE_ROOT: {}'.format(PIPELINE_ROOT))
# Check the KFP version, The KFP version should be >= 1.6. If lower, run !pip3 install --user kfp --upgrade, then restart the kernel
!python3 -c "import kfp; print('KFP version: {}'.format(kfp.__version__))"
Explanation: Tutorial Overview
This is part three of the tutorial where you will learn how to run same code in Part One (with minor changes) in Google's new Vertex AI pipeline. Vertex Pipelines helps you to automate, monitor, and govern your ML systems by orchestrating your ML workflow in a serverless manner, and storing your workflow's artifacts using Vertex ML Metadata. By storing the artifacts of your ML workflow in Vertex ML Metadata, you can analyze the lineage of your workflow's artifacts — for example, an ML model's lineage may include the training data, hyperparameters, and code that were used to create the model.
You will also learn how to export the final BQML model and hosted on the Google Vertex AI Endpoint.
Prerequisites
Download the Expedia Hotel Recommendation Dataset from Kaggle. You will be mostly working with the train.csv dataset for this tutorial
Upload the dataset to BigQuery by following the how-to guide Loading CSV Data
Follow the how-to guide create flex slots, reservation and assignment in BigQuery for training ML models. <strong>Make sure to create Flex slots and not month/year slots so you can delete them after the tutorial.</strong>
Build and push a docker image using this dockerfile as the base image for the Kubeflow pipeline components.
Create or use a Google Cloud Storage bucket to export the finalized model to. <strong>Make sure to create the bucket in the same region where you will create Vertex AI Endpoint to host your model.</strong>
If you do not specify a service account, Vertex Pipelines uses the Compute Engine default service account to run your pipelines. The Compute Engine default service account has the Project Editor role by default so it should have access to BigQuery as well as Google Cloud Storage.
Change the following cell to reflect your setup
End of explanation
from typing import NamedTuple
import json
import os
def run_bigquery_ddl(project_id: str, query_string: str, location: str) -> NamedTuple(
'DDLOutput', [('created_table', str), ('query', str)]):
Runs BigQuery query and returns a table/model name
print(query_string)
from google.cloud import bigquery
from google.api_core.future import polling
from google.cloud import bigquery
from google.cloud.bigquery import retry as bq_retry
bqclient = bigquery.Client(project=project_id, location=location)
job = bqclient.query(query_string, retry=bq_retry.DEFAULT_RETRY)
job._retry = polling.DEFAULT_RETRY
print('bq version: {}'.format(bigquery.__version__))
while job.running():
from time import sleep
sleep(30)
print('Running ...')
tblname = '{}.{}'.format(job.ddl_target_table.dataset_id, job.ddl_target_table.table_id)
print('{} created in {}'.format(tblname, job.ended - job.started))
from collections import namedtuple
result_tuple = namedtuple('DDLOutput', ['created_table', 'query'])
return result_tuple(tblname, query_string)
Explanation: Create BigQuery function
Create a generic BigQuery function that runs a BigQuery query and returns the table/model created. This will be re-used to return BigQuery results for all the different segments of the BigQuery process in the Kubeflow Pipeline. You will see later in the tutorial where this function is being passed as parameter (ddlop) to other functions to perform certain BigQuery operation.
End of explanation
def train_matrix_factorization_model(ddlop, project_id: str, dataset: str):
query =
CREATE OR REPLACE MODEL `{project_id}.{dataset}.my_implicit_mf_model_quantiles_demo_binary_prod`
OPTIONS
(model_type='matrix_factorization',
feedback_type='implicit',
user_col='user_id',
item_col='hotel_cluster',
rating_col='rating',
l2_reg=30,
num_factors=15) AS
SELECT
user_id,
hotel_cluster,
if(sum(is_booking) > 0, 1, sum(is_booking)) AS rating
FROM `{project_id}.{dataset}.hotel_train`
group by 1,2
.format(project_id = project_id, dataset = dataset)
return ddlop(project_id, query, 'US')
def evaluate_matrix_factorization_model(project_id:str, mf_model: str, location: str='US')-> NamedTuple('MFMetrics', [('msqe', float)]):
query =
SELECT * FROM ML.EVALUATE(MODEL `{project_id}.{mf_model}`)
.format(project_id = project_id, mf_model = mf_model)
print(query)
from google.cloud import bigquery
import json
bqclient = bigquery.Client(project=project_id, location=location)
job = bqclient.query(query)
metrics_df = job.result().to_dataframe()
from collections import namedtuple
result_tuple = namedtuple('MFMetrics', ['msqe'])
return result_tuple(metrics_df.loc[0].to_dict()['mean_squared_error'])
Explanation: Train Matrix Factorization model and evaluate it
We will start by training a matrix factorization model that will allow us to understand the latent relationship between user and hotel clusters. The reason why we are doing this is because matrix factorization approach can only find latent relationship between a user and a hotel. However, there are other intuitive useful predictors (such as is_mobile, location, and etc) that can improve the model performance. So togther, we can feed the resulting weights/factors as features among with other features to train the final XGBoost model.
End of explanation
def create_user_features(ddlop, project_id:str, dataset:str, mf_model:str):
#Feature engineering for useres
query =
CREATE OR REPLACE TABLE `{project_id}.{dataset}.user_features_prod` AS
WITH u as
(
select
user_id,
count(*) as total_visits,
count(distinct user_location_city) as distinct_cities,
sum(distinct site_name) as distinct_sites,
sum(is_mobile) as total_mobile,
sum(is_booking) as total_bookings,
FROM `{project_id}.{dataset}.hotel_train`
GROUP BY 1
)
SELECT
u.*,
(SELECT ARRAY_AGG(weight) FROM UNNEST(factor_weights)) AS user_factors
FROM
u JOIN ML.WEIGHTS( MODEL `{mf_model}`) w
ON processed_input = 'user_id' AND feature = CAST(u.user_id AS STRING)
.format(project_id = project_id, dataset = dataset, mf_model=mf_model)
return ddlop(project_id, query, 'US')
def create_hotel_features(ddlop, project_id:str, dataset:str, mf_model:str):
#Feature eingineering for hotels
query =
CREATE OR REPLACE TABLE `{project_id}.{dataset}.hotel_features_prod` AS
WITH h as
(
select
hotel_cluster,
count(*) as total_cluster_searches,
count(distinct hotel_country) as distinct_hotel_countries,
sum(distinct hotel_market) as distinct_hotel_markets,
sum(is_mobile) as total_mobile_searches,
sum(is_booking) as total_cluster_bookings,
FROM `{project_id}.{dataset}.hotel_train`
group by 1
)
SELECT
h.*,
(SELECT ARRAY_AGG(weight) FROM UNNEST(factor_weights)) AS hotel_factors
FROM
h JOIN ML.WEIGHTS( MODEL `{mf_model}`) w
ON processed_input = 'hotel_cluster' AND feature = CAST(h.hotel_cluster AS STRING)
.format(project_id = project_id, dataset = dataset, mf_model=mf_model)
return ddlop(project_id, query, 'US')
Explanation: Creating embedding features for users and hotels
We will use the matrix factorization model to create corresponding user factors, hotel factors and embed them together with additional features such as total visits and distinct cities to create a new training dataset to an XGBoost classifier which will try to predict the the likelihood of booking for any user/hotel combination. Also note that we aggregated and grouped the orginal dataset by user_id.
End of explanation
def combine_features(ddlop, project_id:str, dataset:str, mf_model:str, hotel_features:str, user_features:str):
#Combine user and hotel embedding features with the rating associated with each combination
query =
CREATE OR REPLACE TABLE `{project_id}.{dataset}.total_features_prod` AS
with ratings as(
SELECT
user_id,
hotel_cluster,
if(sum(is_booking) > 0, 1, sum(is_booking)) AS rating
FROM `{project_id}.{dataset}.hotel_train`
group by 1,2
)
select
h.* EXCEPT(hotel_cluster),
u.* EXCEPT(user_id),
IFNULL(rating,0) as rating
from `{hotel_features}` h, `{user_features}` u
LEFT OUTER JOIN ratings r
ON r.user_id = u.user_id AND r.hotel_cluster = h.hotel_cluster
.format(project_id = project_id, dataset = dataset, mf_model=mf_model, hotel_features=hotel_features, user_features=user_features)
return ddlop(project_id, query, 'US')
Explanation: Function below combines all the features selected (total_mobile_searches) and engineered (user factors and hotel factors) into a training dataset for the XGBoost classifier. Note the target variable is rating which is converted into a binary classfication.
End of explanation
%%bigquery --project $PROJECT_ID
CREATE OR REPLACE FUNCTION `hotel_recommendations.arr_to_input_15_hotels`(h ARRAY<FLOAT64>)
RETURNS
STRUCT<
h1 FLOAT64,
h2 FLOAT64,
h3 FLOAT64,
h4 FLOAT64,
h5 FLOAT64,
h6 FLOAT64,
h7 FLOAT64,
h8 FLOAT64,
h9 FLOAT64,
h10 FLOAT64,
h11 FLOAT64,
h12 FLOAT64,
h13 FLOAT64,
h14 FLOAT64,
h15 FLOAT64
> AS (STRUCT(
h[OFFSET(0)],
h[OFFSET(1)],
h[OFFSET(2)],
h[OFFSET(3)],
h[OFFSET(4)],
h[OFFSET(5)],
h[OFFSET(6)],
h[OFFSET(7)],
h[OFFSET(8)],
h[OFFSET(9)],
h[OFFSET(10)],
h[OFFSET(11)],
h[OFFSET(12)],
h[OFFSET(13)],
h[OFFSET(14)]
));
CREATE OR REPLACE FUNCTION `hotel_recommendations.arr_to_input_15_users`(u ARRAY<FLOAT64>)
RETURNS
STRUCT<
u1 FLOAT64,
u2 FLOAT64,
u3 FLOAT64,
u4 FLOAT64,
u5 FLOAT64,
u6 FLOAT64,
u7 FLOAT64,
u8 FLOAT64,
u9 FLOAT64,
u10 FLOAT64,
u11 FLOAT64,
u12 FLOAT64,
u13 FLOAT64,
u14 FLOAT64,
u15 FLOAT64
> AS (STRUCT(
u[OFFSET(0)],
u[OFFSET(1)],
u[OFFSET(2)],
u[OFFSET(3)],
u[OFFSET(4)],
u[OFFSET(5)],
u[OFFSET(6)],
u[OFFSET(7)],
u[OFFSET(8)],
u[OFFSET(9)],
u[OFFSET(10)],
u[OFFSET(11)],
u[OFFSET(12)],
u[OFFSET(13)],
u[OFFSET(14)]
));
Explanation: We will create a couple of BigQuery user-defined functions (UDF) to convert arrays to a struct and its array elements are the fields in the struct. <strong>Be sure to change the BigQuery dataset name to your dataset name. </strong>
End of explanation
def train_xgboost_model(ddlop, project_id:str, dataset:str, total_features:str):
#Combine user and hotel embedding features with the rating associated with each combination
query =
CREATE OR REPLACE MODEL `{project_id}.{dataset}.recommender_hybrid_xgboost_prod`
OPTIONS(model_type='boosted_tree_classifier', input_label_cols=['rating'], AUTO_CLASS_WEIGHTS=True)
AS
SELECT
* EXCEPT(user_factors, hotel_factors),
{dataset}.arr_to_input_15_users(user_factors).*,
{dataset}.arr_to_input_15_hotels(hotel_factors).*
FROM
`{total_features}`
.format(project_id = project_id, dataset = dataset, total_features=total_features)
return ddlop(project_id, query, 'US')
def evaluate_class(project_id:str, dataset:str, class_model:str, total_features:str, location:str='US')-> NamedTuple('ClassMetrics', [('roc_auc', float)]):
query =
SELECT
*
FROM ML.EVALUATE(MODEL `{class_model}`, (
SELECT
* EXCEPT(user_factors, hotel_factors),
{dataset}.arr_to_input_15_users(user_factors).*,
{dataset}.arr_to_input_15_hotels(hotel_factors).*
FROM
`{total_features}`
))
.format(dataset = dataset, class_model = class_model, total_features = total_features)
print(query)
from google.cloud import bigquery
bqclient = bigquery.Client(project=project_id, location=location)
job = bqclient.query(query)
metrics_df = job.result().to_dataframe()
from collections import namedtuple
result_tuple = namedtuple('ClassMetrics', ['roc_auc'])
return result_tuple(metrics_df.loc[0].to_dict()['roc_auc'])
Explanation: Train XGBoost model and evaluate it
End of explanation
def export_bqml_model(project_id:str, model:str, destination:str) -> NamedTuple('ModelExport', [('destination', str)]):
import subprocess
import shutil
#command='bq extract -destination_format=ML_XGBOOST_BOOSTER -m {}:{} {}'.format(project_id, model, destination)
model_name = '{}:{}'.format(project_id, model)
print (model_name)
#subprocess.run(['bq', 'extract', '-destination_format=ML_XGBOOST_BOOSTER', '-m', model_name, destination], check=True)
subprocess.run(
(
shutil.which("bq"),
"extract",
"-destination_format=ML_XGBOOST_BOOSTER",
"--project_id=" + project_id,
"-m",
model_name,
destination
),
stderr=subprocess.PIPE,
check=True)
from collections import namedtuple
result_tuple = namedtuple('ModelExport', ['destination'])
return result_tuple(destination)
def deploy_bqml_model_vertexai(project_id:str, region:str, model_name:str, endpoint_name:str, model_dir:str, deploy_image:str, deploy_compute:str):
from google.cloud import aiplatform
parent = "projects/" + project_id + "/locations/" + region
client_options = {"api_endpoint": "{}-aiplatform.googleapis.com".format(region)}
clients = {}
#upload the model to Vertex AI
clients['model'] = aiplatform.gapic.ModelServiceClient(client_options=client_options)
model = {
"display_name": model_name,
"metadata_schema_uri": "",
"artifact_uri": model_dir,
"container_spec": {
"image_uri": deploy_image,
"command": [],
"args": [],
"env": [],
"ports": [{"container_port": 8080}],
"predict_route": "",
"health_route": ""
}
}
upload_model_response = clients['model'].upload_model(parent=parent, model=model)
print("Long running operation on uploading the model:", upload_model_response.operation.name)
model_info = clients['model'].get_model(name=upload_model_response.result(timeout=180).model)
#Create an endpoint on Vertex AI to host the model
clients['endpoint'] = aiplatform.gapic.EndpointServiceClient(client_options=client_options)
create_endpoint_response = clients['endpoint'].create_endpoint(parent=parent, endpoint={"display_name": endpoint_name})
print("Long running operation on creating endpoint:", create_endpoint_response.operation.name)
endpoint_info = clients['endpoint'].get_endpoint(name=create_endpoint_response.result(timeout=180).name)
#Deploy the model to the endpoint
dmodel = {
"model": model_info.name,
"display_name": 'deployed_'+model_name,
"dedicated_resources": {
"min_replica_count": 1,
"max_replica_count": 1,
"machine_spec": {
"machine_type": deploy_compute,
"accelerator_count": 0,
}
}
}
traffic = {
'0' : 100
}
deploy_model_response = clients['endpoint'].deploy_model(endpoint=endpoint_info.name, deployed_model=dmodel, traffic_split=traffic)
print("Long running operation on deploying the model:", deploy_model_response.operation.name)
deploy_model_result = deploy_model_response.result()
Explanation: Export XGBoost model and host it on Vertex AI
One of the nice features of BigQuery ML is the ability to import and export machine learning models. In the function defined below, we are going to export the trained XGBoost model to a Google Cloud Storage bucket. We will later have Google Cloud AI Platform host this model for predictions. It is worth mentioning that you can host this model on any platform that supports Booster (XGBoost 0.82). Check out the documentation for more information on exporting BigQuery ML models and their formats.
End of explanation
import kfp.v2.dsl as dsl
import kfp.v2.components as comp
import time
@dsl.pipeline(
name='hotel-recs-pipeline',
description='training pipeline for hotel recommendation prediction'
)
def training_pipeline():
import json
#Minimum threshold for model metric to determine if model will be deployed to inference: 0.5 is a basically a coin toss with 50/50 chance
mf_msqe_threshold = 0.5
class_auc_threshold = 0.8
#Defining function containers
ddlop = comp.func_to_container_op(run_bigquery_ddl, base_image=BASE_IMAGE, packages_to_install=['google-cloud-bigquery'])
evaluate_mf_op = comp.func_to_container_op(evaluate_matrix_factorization_model, base_image=BASE_IMAGE, packages_to_install=['google-cloud-bigquery', 'google-cloud-bigquery-storage', 'pandas', 'pyarrow'], output_component_file='mf_eval.yaml')
evaluate_class_op = comp.func_to_container_op(evaluate_class, base_image=BASE_IMAGE, packages_to_install=['google-cloud-bigquery','pandas', 'pyarrow'])
export_bqml_model_op = comp.func_to_container_op(export_bqml_model, base_image=BASE_IMAGE, output_component_file='export_bqml.yaml')
deploy_bqml_model_op = comp.func_to_container_op(deploy_bqml_model_vertexai, base_image=BASE_IMAGE, packages_to_install=['google-cloud-aiplatform'])
#############################
#Defining pipeline execution graph
dataset = BQ_DATASET_NAME
#Train matrix factorization model
mf_model_output = train_matrix_factorization_model(ddlop, PROJECT_ID, dataset).set_display_name('train matrix factorization model')
mf_model_output.execution_options.caching_strategy.max_cache_staleness = 'P0D'
mf_model = mf_model_output.outputs['created_table']
#Evaluate matrix factorization model
mf_eval_output = evaluate_mf_op(PROJECT_ID, mf_model).set_display_name('evaluate matrix factorization model')
mf_eval_output.execution_options.caching_strategy.max_cache_staleness = 'P0D'
#mean squared quantization error
with dsl.Condition(mf_eval_output.outputs['msqe'] < mf_msqe_threshold):
#Create features for Classification model
user_features_output = create_user_features(ddlop, PROJECT_ID, dataset, mf_model).set_display_name('create user factors features')
user_features = user_features_output.outputs['created_table']
user_features_output.execution_options.caching_strategy.max_cache_staleness = 'P0D'
hotel_features_output = create_hotel_features(ddlop, PROJECT_ID, dataset, mf_model).set_display_name('create hotel factors features')
hotel_features = hotel_features_output.outputs['created_table']
hotel_features_output.execution_options.caching_strategy.max_cache_staleness = 'P0D'
total_features_output = combine_features(ddlop, PROJECT_ID, dataset, mf_model, hotel_features, user_features).set_display_name('combine all features')
total_features = total_features_output.outputs['created_table']
total_features_output.execution_options.caching_strategy.max_cache_staleness = 'P0D'
#Train XGBoost model
class_model_output = train_xgboost_model(ddlop, PROJECT_ID, dataset, total_features).set_display_name('train XGBoost model')
class_model = class_model_output.outputs['created_table']
class_model_output.execution_options.caching_strategy.max_cache_staleness = 'P0D'
#Evaluate XGBoost model
class_eval_output = evaluate_class_op(PROJECT_ID, dataset, class_model, total_features).set_display_name('evaluate XGBoost model')
class_eval_output.execution_options.caching_strategy.max_cache_staleness = 'P0D'
with dsl.Condition(class_eval_output.outputs['roc_auc'] > class_auc_threshold):
#Export model
export_destination_output = export_bqml_model_op(PROJECT_ID, class_model, MODEL_STORAGE).set_display_name('export XGBoost model')
export_destination_output.execution_options.caching_strategy.max_cache_staleness = 'P0D'
export_destination = export_destination_output.outputs['destination']
deploy_model = deploy_bqml_model_op(PROJECT_ID, REGION, class_model, ENDPOINT_NAME, MODEL_STORAGE, DEPLOY_IMAGE, DEPLOY_COMPUTE).set_display_name('Deploy XGBoost model')
deploy_model.execution_options.caching_strategy.max_cache_staleness = 'P0D'
Explanation: Defining the Kubeflow Pipelines
Now we have the necessary functions defined, we are now ready to create a workflow using Kubeflow Pipeline. The workflow implemented by the pipeline is defined using a Python based Domain Specific Language (DSL).
The pipeline's DSL has been designed to avoid hardcoding any environment specific settings like file paths or connection strings. These settings are provided to the pipeline code through a set of environment variables.
The pipeline performs the following steps -
* Trains a Matrix Factorization model
* Evaluates the trained Matrix Factorization model and if the Mean Square Error is less than threadshold, it will continue to the next step, otherwise, the pipeline will stop
* Engineers new user factors feature with the Matrix Factorization model
* Engineers new hotel factors feature with the Matrix Factorization model
* Combines all the features selected (total_mobile_searches) and engineered (user factors and hotel factors) into a training dataset for the XGBoost classifier
* Trains a XGBoost classifier
* Evalutes the trained XGBoost model and if the ROC AUC score is more than threadshold, it will continue to the next step, otherwise, the pipeline will stop
* Exports the XGBoost model to a Google Cloud Storage bucket
* Deploys the XGBoost model from the Google Cloud Storage bucket to Google Cloud AI Platform for prediction
End of explanation
import kfp.v2 as kfp
from kfp.v2 import compiler
pipeline_func = training_pipeline
compiler.Compiler().compile(pipeline_func=pipeline_func,
package_path='hotel_rec_pipeline_job.json')
from kfp.v2.google.client import AIPlatformClient
api_client = AIPlatformClient(project_id=PROJECT_ID, region=REGION)
response = api_client.create_run_from_job_spec(
job_spec_path='hotel_rec_pipeline_job.json',
enable_caching=False,
pipeline_root=PIPELINE_ROOT # optional- use if want to override compile-time value
#parameter_values={'text': 'Hello world!'}
)
Explanation: Submitting pipeline runs
You can trigger pipeline runs using an API from the KFP SDK or using KFP CLI. To submit the run using KFP CLI, execute the following commands. Notice how the pipeline's parameters are passed to the pipeline run.
End of explanation |
2,316 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Learning
Assignment 1
The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.
This notebook uses the notMNIST dataset to be used with python experiments. This dataset is designed to look like the classic MNIST dataset, while looking a little more like real data
Step3: First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine.
Step4: Extract the dataset from the compressed .tar.gz file.
This should give you a set of directories, labelled A through J.
Step6: Problem 1
Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint
Step7: Problem 2
Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint
Step8: Problem 3
Another check
Step9: Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune train_size as needed. The labels will be stored into a separate array of integers 0 through 9.
Also create a validation dataset for hyperparameter tuning.
Step10: Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.
Step11: Problem 4
Convince yourself that the data is still good after shuffling!
Someone smart wrote this code. If they were smart, surely they didn't screw up. If they didn't screw up, the data is still good.
Q.E.D.
Finally, let's save the data for later reuse
Step12: Problem 5
By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok if you expect to see training samples recur when you use it.
Measure how much overlap there is between training, validation and test samples.
Optional questions | Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import matplotlib.pyplot as plt
import numpy as np
import os
import sys
import tarfile
from IPython.display import display, Image
from scipy import ndimage
from sklearn.linear_model import LogisticRegression
from six.moves.urllib.request import urlretrieve
from six.moves import cPickle as pickle
# Config the matlotlib backend as plotting inline in IPython
%matplotlib inline
Explanation: Deep Learning
Assignment 1
The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.
This notebook uses the notMNIST dataset to be used with python experiments. This dataset is designed to look like the classic MNIST dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST.
End of explanation
url = 'http://commondatastorage.googleapis.com/books1000/'
last_percent_reported = None
def download_progress_hook(count, blockSize, totalSize):
A hook to report the progress of a download. This is mostly intended for users with
slow internet connections. Reports every 1% change in download progress.
global last_percent_reported
percent = int(count * blockSize * 100 / totalSize)
if last_percent_reported != percent:
if percent % 5 == 0:
sys.stdout.write("%s%%" % percent)
sys.stdout.flush()
else:
sys.stdout.write(".")
sys.stdout.flush()
last_percent_reported = percent
def maybe_download(filename, expected_bytes, force=False):
Download a file if not present, and make sure it's the right size.
if force or not os.path.exists(filename):
print('Attempting to download:', filename)
filename, _ = urlretrieve(url + filename, filename, reporthook=download_progress_hook)
print('\nDownload Complete!')
statinfo = os.stat(filename)
if statinfo.st_size == expected_bytes:
print('Found and verified', filename)
else:
raise Exception(
'Failed to verify ' + filename + '. Can you get to it with a browser?')
return filename
train_filename = maybe_download('notMNIST_large.tar.gz', 247336696)
test_filename = maybe_download('notMNIST_small.tar.gz', 8458043)
Explanation: First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine.
End of explanation
num_classes = 10
np.random.seed(133)
def maybe_extract(filename, force=False):
root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz
if os.path.isdir(root) and not force:
# You may override by setting force=True.
print('%s already present - Skipping extraction of %s.' % (root, filename))
else:
print('Extracting data for %s. This may take a while. Please wait.' % root)
tar = tarfile.open(filename)
sys.stdout.flush()
tar.extractall()
tar.close()
data_folders = [
os.path.join(root, d) for d in sorted(os.listdir(root))
if os.path.isdir(os.path.join(root, d))]
if len(data_folders) != num_classes:
raise Exception(
'Expected %d folders, one per class. Found %d instead.' % (
num_classes, len(data_folders)))
print(data_folders)
return data_folders
train_folders = maybe_extract(train_filename)
test_folders = maybe_extract(test_filename)
Explanation: Extract the dataset from the compressed .tar.gz file.
This should give you a set of directories, labelled A through J.
End of explanation
image_size = 28 # Pixel width and height.
pixel_depth = 255.0 # Number of levels per pixel.
def load_letter(folder, min_num_images):
Load the data for a single letter label.
image_files = os.listdir(folder)
dataset = np.ndarray(shape=(len(image_files), image_size, image_size),
dtype=np.float32)
print(folder)
num_images = 0
for image in image_files:
image_file = os.path.join(folder, image)
try:
# ndimage is from scipy
# the data is also normalized here
image_data = (ndimage.imread(image_file).astype(float) -
pixel_depth / 2) / pixel_depth
if image_data.shape != (image_size, image_size):
raise Exception('Unexpected image shape: %s' % str(image_data.shape))
dataset[num_images, :, :] = image_data
num_images = num_images + 1
except IOError as e:
print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.')
dataset = dataset[0:num_images, :, :]
if num_images < min_num_images:
raise Exception('Many fewer images than expected: %d < %d' %
(num_images, min_num_images))
print('Full dataset tensor:', dataset.shape)
print('Mean:', np.mean(dataset))
print('Standard deviation:', np.std(dataset))
return dataset
def maybe_pickle(data_folders, min_num_images_per_class, force=False):
dataset_names = []
for folder in data_folders:
set_filename = folder + '.pickle'
dataset_names.append(set_filename)
if os.path.exists(set_filename) and not force:
# You may override by setting force=True.
print('%s already present - Skipping pickling.' % set_filename)
else:
print('Pickling %s.' % set_filename)
dataset = load_letter(folder, min_num_images_per_class)
try:
with open(set_filename, 'wb') as f:
pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', set_filename, ':', e)
return dataset_names
train_datasets = maybe_pickle(train_folders, 45000)
test_datasets = maybe_pickle(test_folders, 1800)
Explanation: Problem 1
Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.
eog's good enough for me.
Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.
We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road.
A few images might not be readable, we'll just skip them.
End of explanation
print(train_datasets)
Explanation: Problem 2
Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.
End of explanation
# for both the training and testing dataset
for dataset in [train_datasets, test_datasets]:
# for each letter in this dataset
for letter_pickle in dataset:
# unpickle the letter
with open(letter_pickle, 'rb') as f:
unpickled_list = pickle.load(f)
# how many samples are here
print('Samples for {}: {}'.format(letter_pickle, len(unpickled_list)))
Explanation: Problem 3
Another check: we expect the data to be balanced across classes. Verify that.
End of explanation
def make_arrays(nb_rows, img_size):
if nb_rows:
dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32)
labels = np.ndarray(nb_rows, dtype=np.int32)
else:
dataset, labels = None, None
return dataset, labels
def merge_datasets(pickle_files, train_size, valid_size=0):
num_classes = len(pickle_files)
valid_dataset, valid_labels = make_arrays(valid_size, image_size)
train_dataset, train_labels = make_arrays(train_size, image_size)
vsize_per_class = valid_size // num_classes
tsize_per_class = train_size // num_classes
start_v, start_t = 0, 0
end_v, end_t = vsize_per_class, tsize_per_class
end_l = vsize_per_class+tsize_per_class
for label, pickle_file in enumerate(pickle_files):
try:
with open(pickle_file, 'rb') as f:
letter_set = pickle.load(f)
# let's shuffle the letters to have random validation and training set
np.random.shuffle(letter_set)
if valid_dataset is not None:
valid_letter = letter_set[:vsize_per_class, :, :]
valid_dataset[start_v:end_v, :, :] = valid_letter
valid_labels[start_v:end_v] = label
start_v += vsize_per_class
end_v += vsize_per_class
train_letter = letter_set[vsize_per_class:end_l, :, :]
train_dataset[start_t:end_t, :, :] = train_letter
train_labels[start_t:end_t] = label
start_t += tsize_per_class
end_t += tsize_per_class
except Exception as e:
print('Unable to process data from', pickle_file, ':', e)
raise
return valid_dataset, valid_labels, train_dataset, train_labels
train_size = 200000
valid_size = 10000
test_size = 10000
valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets(
train_datasets, train_size, valid_size)
_, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size)
print('Training:', train_dataset.shape, train_labels.shape)
print('Validation:', valid_dataset.shape, valid_labels.shape)
print('Testing:', test_dataset.shape, test_labels.shape)
Explanation: Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune train_size as needed. The labels will be stored into a separate array of integers 0 through 9.
Also create a validation dataset for hyperparameter tuning.
End of explanation
def randomize(dataset, labels):
permutation = np.random.permutation(labels.shape[0])
shuffled_dataset = dataset[permutation,:,:]
shuffled_labels = labels[permutation]
return shuffled_dataset, shuffled_labels
train_dataset, train_labels = randomize(train_dataset, train_labels)
test_dataset, test_labels = randomize(test_dataset, test_labels)
valid_dataset, valid_labels = randomize(valid_dataset, valid_labels)
Explanation: Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.
End of explanation
pickle_file = 'notMNIST.pickle'
try:
f = open(pickle_file, 'wb')
save = {
'train_dataset': train_dataset,
'train_labels': train_labels,
'valid_dataset': valid_dataset,
'valid_labels': valid_labels,
'test_dataset': test_dataset,
'test_labels': test_labels,
}
pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
f.close()
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
statinfo = os.stat(pickle_file)
print('Compressed pickle size:', statinfo.st_size)
print(pickle_file)
Explanation: Problem 4
Convince yourself that the data is still good after shuffling!
Someone smart wrote this code. If they were smart, surely they didn't screw up. If they didn't screw up, the data is still good.
Q.E.D.
Finally, let's save the data for later reuse:
End of explanation
from sklearn.linear_model import LogisticRegressionCV
from sklearn.cross_validation import cross_val_score
ns = [50,100,1000,5000]
# load from pickled file
unpickled = {}
with open(pickle_file, 'rb') as f:
unpickled = pickle.load(f)
print(unpickled.keys())
# create the training sets (flattened to 1 dimension)
flat_training_sets = []
for n in ns:
flat_training_sets.append([x.flatten() for x in unpickled['train_dataset'][:n]])
print("Got {} training sets".format(len(flat_training_sets)))
# train the models
logregCV = LogisticRegressionCV()
models = []
for i in range(len(ns)):
print("Training classifier on subset of {} samples...".format(ns[i]))
#print(unpickled['train_labels'][:ns[i]])
models.append(logregCV.fit(flat_training_sets[i], unpickled['train_labels'][:ns[i]]))
# check how we did
for i in range(len(ns)):
# TODO: run this on the testing set
score = cross_val_score(models[i], flat_training_sets[i], train_labels[:ns[i]])
print("Score for classifier trained on {} samples: {}".format(ns[i], score))
Explanation: Problem 5
By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok if you expect to see training samples recur when you use it.
Measure how much overlap there is between training, validation and test samples.
Optional questions:
- What about near duplicates between datasets? (images that are almost identical)
- Create a sanitized validation and test set, and compare your accuracy on those in subsequent assignments.
Meh, that sounds boring.
Problem 6
Let's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.
Train a simple model on this data using 50, 100, 1000 and 5000 training samples. Hint: you can use the LogisticRegression model from sklearn.linear_model.
Optional question: train an off-the-shelf model on all the data!
End of explanation |
2,317 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat convervation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water convervation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt convervation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum convervation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Troposheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cams', 'cams-csm1-0', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: CAMS
Source ID: CAMS-CSM1-0
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:43
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat convervation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water convervation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh_water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt convervation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum convervation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Troposheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
2,318 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A real-world case (Physics
Step1: 1 - The raw data
Since the asymptotic behaviour is important, we place the majority of points on the $x>2$ area. Note that the definition of the grid (i.e. how many points and where) is fundamental and has great impact on the search performances.
Step2: 2 - The symbolic regression problem
Step3: 4 - The search algorithm
Step4: 5 - The search
Step5: 6 - Inspecting the solution | Python Code:
# Some necessary imports.
import dcgpy
import pygmo as pg
import numpy as np
# Sympy is nice to have for basic symbolic manipulation.
from sympy import init_printing
from sympy.parsing.sympy_parser import *
init_printing()
# Fundamental for plotting.
from matplotlib import pyplot as plt
%matplotlib inline
Explanation: A real-world case (Physics: dynamics)
In this tutorial we will be using data from a real-world case. The data comes from a piecewise continuous function representing the gravitational interaction between two swarm of particles. It is of interest to represent such an interaction with a one only continuous function, albeit introducing some error. If succesfull, this would allow to have some analytical insight on the qualitative stability of the resulting orbits, as well as to make use of methods requiring high order continuity to study the resulting dynamical system.
The equation is (derived from a work by Francesco Biscani):
$$
a(x) = \left{
\begin{array}{ll}
\frac{x^3 - 18x+32}{32} & x < 2 \
\frac{1}{x^2} & x \ge 2
\end{array}
\right.
$$
It is important, on this problem, to respect the asymptotic behaviour of the acceleration so that $\lim_{x\rightarrow \infty}a(x) = \frac 1{x^2}$.
End of explanation
X = np.linspace(0,15, 100)
Y = X * ((X**3) - 18 * X + 32) / 32
Y[X>2] = 1. / X[X>2]**2
X = np.reshape(X, (100,1))
Y = np.reshape(Y, (100,1))
# And we plot them as to visualize the problem.
_ = plt.plot(X, Y, '.')
_ = plt.title('Acceleration')
_ = plt.xlabel('a')
_ = plt.ylabel('f')
Explanation: 1 - The raw data
Since the asymptotic behaviour is important, we place the majority of points on the $x>2$ area. Note that the definition of the grid (i.e. how many points and where) is fundamental and has great impact on the search performances.
End of explanation
# We define our kernel set, that is the mathematical operators we will
# want our final model to possibly contain. What to choose in here is left
# to the competence and knowledge of the user. For this particular application we want to mainly look into rational
#functions. Note we do not include the difference as that can be obtained via negative constants
ss = dcgpy.kernel_set_double(["sum", "mul","pdiv"])
# We instantiate the symbolic regression optimization problem (note: many important options are here not
# specified and thus set to their default values).
# Note that we allow for three constants in the final expression
udp = dcgpy.symbolic_regression(points = X, labels = Y, kernels=ss(), n_eph=3, rows =1, cols=20, levels_back=21, multi_objective=True)
print(udp)
Explanation: 2 - The symbolic regression problem
End of explanation
# We instantiate here the evolutionary strategy we want to use to search for models.
# In this case we use a multiple objective memetic algorithm.
uda = dcgpy.momes4cgp(gen = 3000, max_mut = 4)
Explanation: 4 - The search algorithm
End of explanation
prob = pg.problem(udp)
algo = pg.algorithm(uda)
# Note that the screen output will happen on the terminal, not on your Jupyter notebook.
# It can be recovered afterwards from the log.
algo.set_verbosity(10)
pop = pg.population(prob, 20)
pop = algo.evolve(pop)
# This extract the population individual with lowest loss
idx = np.argmin(pop.get_f(), axis=0)[0]
print("Best loss (MSE) found is: ", pop.get_f()[idx][0])
Explanation: 5 - The search
End of explanation
pop.get_f()
# Lets have a look to the symbolic representation of our model (using sympy)
parse_expr(udp.prettier(pop.get_x()[idx]))
# And lets see what our model actually predicts on the inputs
Y_pred = udp.predict(X, pop.get_x()[idx])
# Lets comapre to the data
_ = plt.plot(X, Y_pred, 'r.')
_ = plt.plot(X, Y, '.', alpha=0.2)
_ = plt.title('measurements')
_ = plt.xlabel('unknown')
_ = plt.ylabel('temperature in unknown units')
print("Values for the constants: ", pop.get_x()[idx][:3])
Explanation: 6 - Inspecting the solution
End of explanation |
2,319 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Logistic Regression
Think Bayes, Second Edition
Copyright 2020 Allen B. Downey
License
Step1: This chapter introduces two related topics
Step2: Each update uses the same likelihood, but the changes in probability are not the same. The first update decreases the probability by about 14 percentage points, the second by 24, and the third by 26.
That's normal for this kind of update, and in fact it's necessary; if the changes were the same size, we would quickly get into negative probabilities.
The odds follow a more obvious pattern. Because each update multiplies the odds by the same likelihood ratio, the odds form a geometric sequence.
And that brings us to consider another way to represent uncertainty
Step3: You might notice
Step4: That's true in this example, and we can show that it's true in general by taking the log of both sides of Bayes's Rule.
$$\log O(H|F) = \log O(H) + \log \frac{P(F|H)}{P(F|not H)}$$
On a log odds scale, a Bayesian update is additive. So if $F^x$ means that $x$ female students arrive while I am waiting, the posterior log odds that I am in the right room are
Step5: I'll read the data and do some cleaning.
Step6: Here are the first few rows
Step7: The columns are
Step9: The following figure shows the relationship between damage and temperature.
Step10: When the outside temperature was below 65 degrees, there was always damage to the O-rings. When the temperature was above 65 degrees, there was usually no damage.
Based on this figure, it seems plausible that the probability of damage is related to temperature. If we assume this probability follows a logistic model, we can write
Step11: And for consistency I'll create a copy of the Damage columns called y.
Step12: Before doing a Bayesian update, I'll use statsmodels to run a conventional (non-Bayesian) logistic regression.
Step13: results contains a "point estimate" for each parameter, that is, a single value rather than a posterior distribution.
The intercept is about -1.2, and the estimated slope is about -0.23.
To see what these parameters mean, I'll use them to compute probabilities for a range of temperatures.
Here's the range
Step14: We can use the logistic regression equation to compute log odds
Step15: And then convert to probabilities.
Step16: Converting log odds to probabilities is a common enough operation that it has a name, expit, and SciPy provides a function that computes it.
Step17: Here's what the logistic model looks like with these estimated parameters.
Step18: At low temperatures, the probability of damage is high; at high temperatures, it drops off to near 0.
But that's based on conventional logistic regression.
Now we'll do the Bayesian version.
Prior Distribution
I'll use uniform distributions for both parameters, using the point estimates from the previous section to help me choose the upper and lower bounds.
Step19: We can use make_joint to construct the joint prior distribution.
Step20: The values of intercept run across the columns, the values of slope run down the rows.
For this problem, it will be convenient to "stack" the prior so the parameters are levels in a MultiIndex, and put the result in a Pmf.
Step21: joint_pmf is a Pmf with two levels in the index, one for each parameter. That makes it easy to loop through possible pairs of parameters, as we'll see in the next section.
Likelihood
To do the update, we have to compute the likelihood of the data for each possible pair of parameters.
To make that easier, I'm going to group the data by temperature, x, and count the number of launches and damage incidents at each temperature.
Step22: The result is a DataFrame with two columns
Step23: To compute the likelihood of the data, let's assume temporarily that the parameters we just estimated, slope and inter, are correct.
We can use them to compute the probability of damage at each launch temperature, like this
Step24: ps contains the probability of damage for each launch temperature, according to the model.
Now, for each temperature we have ns, ps, and ks;
we can use the binomial distribution to compute the likelihood of the data.
Step25: Each element of likes is the probability of seeing k damage incidents in n launches if the probability of damage is p.
The likelihood of the whole dataset is the product of this array.
Step26: That's how we compute the likelihood of the data for a particular pair of parameters.
Now we can compute the likelihood of the data for all possible pairs
Step27: To initialize likelihood, we make a copy of joint_pmf, which is a convenient way to make sure that likelihood has the same type, index, and data type as joint_pmf.
The loop iterates through the parameters. For each possible pair, it uses the logistic model to compute ps, computes the likelihood of the data, and assigns the result to a row in likelihood.
The Update
Now we can compute the posterior distribution in the usual way.
Step28: Because we used a uniform prior, the parameter pair with the highest likelihood is also the pair with maximum posterior probability
Step29: So we can confirm that the results of the Bayesian update are consistent with the maximum likelihood estimate computed by StatsModels
Step30: They are approximately the same, within the precision of the grid we're using.
If we unstack the posterior Pmf we can make a contour plot of the joint posterior distribution.
Step31: The ovals in the contour plot are aligned along a diagonal, which indicates that there is some correlation between slope and inter in the posterior distribution.
But the correlation is weak, which is one of the reasons we subtracted off the mean launch temperature when we computed x; centering the data minimizes the correlation between the parameters.
Exercise
Step32: Here's the posterior distribution of inter.
Step33: And here's the posterior distribution of slope.
Step34: Here are the posterior means.
Step35: Both marginal distributions are moderately skewed, so the posterior means are somewhat different from the point estimates.
Step37: Transforming Distributions
Let's interpret these parameters. Recall that the intercept is the log odds of the hypothesis when $x$ is 0, which is when temperature is about 70 degrees F (the value of offset).
So we can interpret the quantities in marginal_inter as log odds.
To convert them to probabilities, I'll use the following function, which transforms the quantities in a Pmf by applying a given function
Step38: If we call transform and pass expit as a parameter, it transforms the log odds in marginal_inter into probabilities and returns the posterior distribution of inter expressed in terms of probabilities.
Step39: Pmf provides a transform method that does the same thing.
Step40: Here's the posterior distribution for the probability of damage at 70 degrees F.
Step41: The mean of this distribution is about 22%, which is the probability of damage at 70 degrees F, according to the model.
Step42: This result shows the second reason I defined x to be zero when temperature is 70 degrees F; this way, the intercept corresponds to the probability of damage at a relevant temperature, rather than 0 degrees F.
Now let's look more closely at the estimated slope. In the logistic model, the parameter $\beta_1$ is the log of the likelihood ratio.
So we can interpret the quantities in marginal_slope as log likelihood ratios, and we can use exp to transform them to likelihood ratios (also known as Bayes factors).
Step43: The result is the posterior distribution of likelihood ratios; here's what it looks like.
Step44: The mean of this distribution is about 0.75, which means that each additional degree Fahrenheit provides evidence against the possibility of damage, with a likelihood ratio (Bayes factor) of 0.75.
Notice
Step45: And here's the posterior mean of marginal_slope, transformed to a likelihood ratio, compared to the mean marginal_lr.
Step46: In this example, the differences are not huge, but they can be.
As a general rule, transform first, then compute summary statistics.
Predictive Distributions
In the logistic model, the parameters are interpretable, at least after transformation. But often what we care about are predictions, not parameters. In the Space Shuttle problem, the most important prediction is, "What is the probability of O-ring damage if the outside temperature is 31 degrees F?"
To make that prediction, I'll draw a sample of parameter pairs from the posterior distribution.
Step47: The result is an array of 101 tuples, each representing a possible pair of parameters.
I chose this sample size to make the computation fast.
Increasing it would not change the results much, but they would be a little more precise.
Step48: To generate predictions, I'll use a range of temperatures from 31 degrees F (the temperature when the Challenger launched) to 82 degrees F (the highest observed temperature).
Step49: The following loop uses xs and the sample of parameters to construct an array of predicted probabilities.
Step50: The result has one column for each value in xs and one row for each element of sample.
To get a quick sense of what the predictions look like, we can loop through the rows and plot them.
Step51: The overlapping lines in this figure give a sense of the most likely value at each temperature and the degree of uncertainty.
In each column, I'll compute the median to quantify the central tendency and a 90% credible interval to quantify the uncertainty.
np.percentile computes the given percentiles; with the argument axis=0, it computes them for each column.
Step52: The results are arrays containing predicted probabilities for the lower bound of the 90% CI, the median, and the upper bound of the CI.
Here's what they look like
Step53: According to these results, the probability of damage to the O-rings at 80 degrees F is near 2%, but there is some uncertainty about that prediction; the upper bound of the CI is around 10%.
At 60 degrees, the probability of damage is near 80%, but the CI is even wider, from 48% to 97%.
But the primary goal of the model is to predict the probability of damage at 31 degrees F, and the answer is at least 97%, and more likely to be more than 99.9%.
Step54: One conclusion we might draw is this
Step55: Exercise
Step56: First, I'm going to "roll" the data so it starts in September rather than January.
Step57: And I'll put it in a DataFrame with one row for each month and the diagnosis rate per 10,000.
Step58: Here's what the diagnosis rates look like.
Step59: For the first 9 months, from September to May, we see what we would expect if some of the excess diagnoses are due to "age-based variation in behavior". For each month of difference in age, we see an increase in the number of diagnoses.
This pattern breaks down for the last three months, June, July, and August. This might be explained by random variation, but it also might be due to parental manipulation; if some parents hold back children born near the deadline, the observations for these month would include a mixture of children who are relatively old for their grade and therefore less likely to be diagnosed.
Unfortunately, the dataset includes only month of birth, not year, so we don't know the actual ages of these students when they started school. However, we can use the first nine months to estimate the effect of age on diagnosis rate; then we can think about what to do with the other three months.
Use the methods in this chapter to estimate the probability of diagnosis as a function of birth month.
Start with the following prior distributions.
Step60: Make a joint prior distribution and update it using the data for the first nine months.
Then draw a sample from the posterior distribution and use it to compute the median probability of diagnosis for each month and a 90% credible interval.
As a bonus exercise, do a second update using the data from the last three months, but treating the observed number of diagnoses as a lower bound on the number of diagnoses there would be if no children were kept back. | Python Code:
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
# Get utils.py
from os.path import basename, exists
def download(url):
filename = basename(url)
if not exists(filename):
from urllib.request import urlretrieve
local, _ = urlretrieve(url, filename)
print('Downloaded ' + local)
download('https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py')
from utils import set_pyplot_params
set_pyplot_params()
Explanation: Logistic Regression
Think Bayes, Second Edition
Copyright 2020 Allen B. Downey
License: Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
End of explanation
def prob(o):
return o / (o+1)
import pandas as pd
index = ['prior', '1 student', '2 students', '3 students']
table = pd.DataFrame(index=index)
table['odds'] = [10, 10/3, 10/9, 10/27]
table['prob'] = prob(table['odds'])
table['prob diff'] = table['prob'].diff() * 100
table.fillna('--')
Explanation: This chapter introduces two related topics: log odds and logistic regression.
In <<_BayessRule>>, we rewrote Bayes's Theorem in terms of odds and derived Bayes's Rule, which can be a convenient way to do a Bayesian update on paper or in your head.
In this chapter, we'll look at Bayes's Rule on a logarithmic scale, which provides insight into how we accumulate evidence through successive updates.
That leads directly to logistic regression, which is based on a linear model of the relationship between evidence and the log odds of a hypothesis.
As an example, we'll use data from the Space Shuttle to explore the relationship between temperature and the probability of damage to the O-rings.
As an exercise, you'll have a chance to model the relationship between a child's age when they start school and their probability of being diagnosed with Attention Deficit Hyperactivity Disorder (ADHD).
Log Odds
When I was in grad school, I signed up for a class on the Theory of Computation.
On the first day of class, I was the first to arrive.
A few minutes later, another student arrived.
At the time, about 83% of the students in the computer science program were male, so I was mildly surprised to note that the other student was female.
When another female student arrived a few minutes later, I started to think I was in the wrong room.
When third female student arrived, I was confident I was in the wrong room.
And as it turned out, I was.
I'll use this anecdote to demonstrate Bayes's Rule on a logarithmic scale and show how it relates to logistic regression.
Using $H$ to represent the hypothesis that I was in the right room, and $F$ to represent the observation that the first other student was female, we can write Bayes's Rule like this:
$$O(H|F) = O(H) \frac{P(F|H)}{P(F|not H)}$$
Before I saw the other students, I was confident I was in the right room, so I might assign prior odds of 10:1 in favor:
$$O(H) = 10$$
If I was in the right room, the likelihood of the first female student was about 17%.
If I was not in the right room, the likelihood of the first female student was more like 50%,
$$\frac{P(F|H)}{P(F|not H)} = 17 / 50$$
So the likelihood ratio is close to 1/3. Applying Bayes's Rule, the posterior odds were
$$O(H|F) = 10 / 3$$
After two students, the posterior odds were
$$O(H|FF) = 10 / 9$$
And after three students:
$$O(H|FFF) = 10 / 27$$
At that point, I was right to suspect I was in the wrong room.
The following table shows the odds after each update, the corresponding probabilities, and the change in probability after each step, expressed in percentage points.
End of explanation
import numpy as np
table['log odds'] = np.log(table['odds'])
table['log odds diff'] = table['log odds'].diff()
table.fillna('--')
Explanation: Each update uses the same likelihood, but the changes in probability are not the same. The first update decreases the probability by about 14 percentage points, the second by 24, and the third by 26.
That's normal for this kind of update, and in fact it's necessary; if the changes were the same size, we would quickly get into negative probabilities.
The odds follow a more obvious pattern. Because each update multiplies the odds by the same likelihood ratio, the odds form a geometric sequence.
And that brings us to consider another way to represent uncertainty: log odds, which is the logarithm of odds, usually expressed using the natural log (base $e$).
Adding log odds to the table:
End of explanation
np.log(1/3)
Explanation: You might notice:
When probability is greater than 0.5, odds are greater than 1, and log odds are positive.
When probability is less than 0.5, odds are less than 1, and log odds are negative.
You might also notice that the log odds are equally spaced.
The change in log odds after each update is the logarithm of the likelihood ratio.
End of explanation
download('https://raw.githubusercontent.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/master/Chapter2_MorePyMC/data/challenger_data.csv')
Explanation: That's true in this example, and we can show that it's true in general by taking the log of both sides of Bayes's Rule.
$$\log O(H|F) = \log O(H) + \log \frac{P(F|H)}{P(F|not H)}$$
On a log odds scale, a Bayesian update is additive. So if $F^x$ means that $x$ female students arrive while I am waiting, the posterior log odds that I am in the right room are:
$$\log O(H|F^x) = \log O(H) + x \log \frac{P(F|H)}{P(F|not H)}$$
This equation represents a linear relationship between the log likelihood ratio and the posterior log odds.
In this example the linear equation is exact, but even when it's not, it is common to use a linear function to model the relationship between an explanatory variable, $x$, and a dependent variable expressed in log odds, like this:
$$\log O(H | x) = \beta_0 + \beta_1 x$$
where $\beta_0$ and $\beta_1$ are unknown parameters:
The intercept, $\beta_0$, is the log odds of the hypothesis when $x$ is 0.
The slope, $\beta_1$, is the log of the likelihood ratio.
This equation is the basis of logistic regression.
The Space Shuttle Problem
As an example of logistic regression, I'll solve a problem from Cameron Davidson-Pilon's book, Bayesian Methods for Hackers. He writes:
"On January 28, 1986, the twenty-fifth flight of the U.S. space shuttle program ended in disaster when one of the rocket boosters of the Shuttle Challenger exploded shortly after lift-off, killing all seven crew members. The presidential commission on the accident concluded that it was caused by the failure of an O-ring in a field joint on the rocket booster, and that this failure was due to a faulty design that made the O-ring unacceptably sensitive to a number of factors including outside temperature. Of the previous 24 flights, data were available on failures of O-rings on 23 (one was lost at sea), and these data were discussed on the evening preceding the Challenger launch, but unfortunately only the data corresponding to the 7 flights on which there was a damage incident were considered important and these were thought to show no obvious trend."
The dataset is originally from this paper, but also available from Davidson-Pilon.
End of explanation
data = pd.read_csv('challenger_data.csv', parse_dates=[0])
# avoiding column names with spaces
data.rename(columns={'Damage Incident': 'Damage'}, inplace=True)
# dropping row 3, in which Damage Incident is NaN,
# and row 24, which is the record for the Challenger
data.drop(labels=[3, 24], inplace=True)
# convert the Damage column to integer
data['Damage'] = data['Damage'].astype(int)
data
Explanation: I'll read the data and do some cleaning.
End of explanation
data.head()
Explanation: Here are the first few rows:
End of explanation
len(data), data['Damage'].sum()
Explanation: The columns are:
Date: The date of launch,
Temperature: Outside temperature in Fahrenheit, and
Damage: 1 if there was a damage incident and 0 otherwise.
There are 23 launches in the dataset, 7 with damage incidents.
End of explanation
import matplotlib.pyplot as plt
from utils import decorate
def plot_data(data):
Plot damage as a function of temperature.
data: DataFrame
plt.plot(data['Temperature'], data['Damage'], 'o',
label='data', color='C0', alpha=0.4)
decorate(ylabel="Probability of damage",
xlabel="Outside temperature (deg F)",
title="Damage to O-Rings vs Temperature")
plot_data(data)
Explanation: The following figure shows the relationship between damage and temperature.
End of explanation
offset = data['Temperature'].mean().round()
data['x'] = data['Temperature'] - offset
offset
Explanation: When the outside temperature was below 65 degrees, there was always damage to the O-rings. When the temperature was above 65 degrees, there was usually no damage.
Based on this figure, it seems plausible that the probability of damage is related to temperature. If we assume this probability follows a logistic model, we can write:
$$\log O(H | x) = \beta_0 + \beta_1 x$$
where $H$ is the hypothesis that the O-rings will be damaged, $x$ is temperature, and $\beta_0$ and $\beta_1$ are the parameters we will estimate.
For reasons I'll explain soon, I'll define $x$ to be temperature shifted by an offset so its mean is 0.
End of explanation
data['y'] = data['Damage']
Explanation: And for consistency I'll create a copy of the Damage columns called y.
End of explanation
import statsmodels.formula.api as smf
formula = 'y ~ x'
results = smf.logit(formula, data=data).fit(disp=False)
results.params
Explanation: Before doing a Bayesian update, I'll use statsmodels to run a conventional (non-Bayesian) logistic regression.
End of explanation
inter = results.params['Intercept']
slope = results.params['x']
xs = np.arange(53, 83) - offset
Explanation: results contains a "point estimate" for each parameter, that is, a single value rather than a posterior distribution.
The intercept is about -1.2, and the estimated slope is about -0.23.
To see what these parameters mean, I'll use them to compute probabilities for a range of temperatures.
Here's the range:
End of explanation
log_odds = inter + slope * xs
Explanation: We can use the logistic regression equation to compute log odds:
End of explanation
odds = np.exp(log_odds)
ps = odds / (odds + 1)
ps.mean()
Explanation: And then convert to probabilities.
End of explanation
from scipy.special import expit
ps = expit(inter + slope * xs)
ps.mean()
Explanation: Converting log odds to probabilities is a common enough operation that it has a name, expit, and SciPy provides a function that computes it.
End of explanation
plt.plot(xs+offset, ps, label='model', color='C1')
plot_data(data)
Explanation: Here's what the logistic model looks like with these estimated parameters.
End of explanation
from utils import make_uniform
qs = np.linspace(-5, 1, num=101)
prior_inter = make_uniform(qs, 'Intercept')
qs = np.linspace(-0.8, 0.1, num=101)
prior_slope = make_uniform(qs, 'Slope')
Explanation: At low temperatures, the probability of damage is high; at high temperatures, it drops off to near 0.
But that's based on conventional logistic regression.
Now we'll do the Bayesian version.
Prior Distribution
I'll use uniform distributions for both parameters, using the point estimates from the previous section to help me choose the upper and lower bounds.
End of explanation
from utils import make_joint
joint = make_joint(prior_inter, prior_slope)
Explanation: We can use make_joint to construct the joint prior distribution.
End of explanation
from empiricaldist import Pmf
joint_pmf = Pmf(joint.stack())
joint_pmf.head()
Explanation: The values of intercept run across the columns, the values of slope run down the rows.
For this problem, it will be convenient to "stack" the prior so the parameters are levels in a MultiIndex, and put the result in a Pmf.
End of explanation
grouped = data.groupby('x')['y'].agg(['count', 'sum'])
grouped.head()
Explanation: joint_pmf is a Pmf with two levels in the index, one for each parameter. That makes it easy to loop through possible pairs of parameters, as we'll see in the next section.
Likelihood
To do the update, we have to compute the likelihood of the data for each possible pair of parameters.
To make that easier, I'm going to group the data by temperature, x, and count the number of launches and damage incidents at each temperature.
End of explanation
ns = grouped['count']
ks = grouped['sum']
Explanation: The result is a DataFrame with two columns: count is the number of launches at each temperature; sum is the number of damage incidents.
To be consistent with the parameters of the binomial distributions, I'll assign them to variables named ns and ks.
End of explanation
xs = grouped.index
ps = expit(inter + slope * xs)
Explanation: To compute the likelihood of the data, let's assume temporarily that the parameters we just estimated, slope and inter, are correct.
We can use them to compute the probability of damage at each launch temperature, like this:
End of explanation
from scipy.stats import binom
likes = binom.pmf(ks, ns, ps)
likes
Explanation: ps contains the probability of damage for each launch temperature, according to the model.
Now, for each temperature we have ns, ps, and ks;
we can use the binomial distribution to compute the likelihood of the data.
End of explanation
likes.prod()
Explanation: Each element of likes is the probability of seeing k damage incidents in n launches if the probability of damage is p.
The likelihood of the whole dataset is the product of this array.
End of explanation
likelihood = joint_pmf.copy()
for slope, inter in joint_pmf.index:
ps = expit(inter + slope * xs)
likes = binom.pmf(ks, ns, ps)
likelihood[slope, inter] = likes.prod()
Explanation: That's how we compute the likelihood of the data for a particular pair of parameters.
Now we can compute the likelihood of the data for all possible pairs:
End of explanation
posterior_pmf = joint_pmf * likelihood
posterior_pmf.normalize()
Explanation: To initialize likelihood, we make a copy of joint_pmf, which is a convenient way to make sure that likelihood has the same type, index, and data type as joint_pmf.
The loop iterates through the parameters. For each possible pair, it uses the logistic model to compute ps, computes the likelihood of the data, and assigns the result to a row in likelihood.
The Update
Now we can compute the posterior distribution in the usual way.
End of explanation
pd.Series(posterior_pmf.max_prob(),
index=['slope', 'inter'])
Explanation: Because we used a uniform prior, the parameter pair with the highest likelihood is also the pair with maximum posterior probability:
End of explanation
results.params
Explanation: So we can confirm that the results of the Bayesian update are consistent with the maximum likelihood estimate computed by StatsModels:
End of explanation
from utils import plot_contour
joint_posterior = posterior_pmf.unstack()
plot_contour(joint_posterior)
decorate(title='Joint posterior distribution')
Explanation: They are approximately the same, within the precision of the grid we're using.
If we unstack the posterior Pmf we can make a contour plot of the joint posterior distribution.
End of explanation
from utils import marginal
marginal_inter = marginal(joint_posterior, 0)
marginal_slope = marginal(joint_posterior, 1)
Explanation: The ovals in the contour plot are aligned along a diagonal, which indicates that there is some correlation between slope and inter in the posterior distribution.
But the correlation is weak, which is one of the reasons we subtracted off the mean launch temperature when we computed x; centering the data minimizes the correlation between the parameters.
Exercise: To see why this matters, go back and set offset=60 and run the analysis again.
The slope should be the same, but the intercept will be different. And if you plot the joint distribution, the contours you get will be elongated, indicating stronger correlation between the estimated parameters.
In theory, this correlation is not a problem, but in practice it is. With uncentered data, the posterior distribution is more spread out, so it's harder to cover with the joint prior distribution.
Centering the data maximizes the precision of the estimates; with uncentered data, we have to do more computation to get the same precision.
Marginal Distributions
Finally, we can extract the marginal distributions.
End of explanation
marginal_inter.plot(label='intercept', color='C4')
decorate(xlabel='Intercept',
ylabel='PDF',
title='Posterior marginal distribution of intercept')
Explanation: Here's the posterior distribution of inter.
End of explanation
marginal_slope.plot(label='slope', color='C2')
decorate(xlabel='Slope',
ylabel='PDF',
title='Posterior marginal distribution of slope')
Explanation: And here's the posterior distribution of slope.
End of explanation
pd.Series([marginal_inter.mean(), marginal_slope.mean()],
index=['inter', 'slope'])
Explanation: Here are the posterior means.
End of explanation
results.params
Explanation: Both marginal distributions are moderately skewed, so the posterior means are somewhat different from the point estimates.
End of explanation
def transform(pmf, func):
Transform the quantities in a Pmf.
ps = pmf.ps
qs = func(pmf.qs)
return Pmf(ps, qs, copy=True)
Explanation: Transforming Distributions
Let's interpret these parameters. Recall that the intercept is the log odds of the hypothesis when $x$ is 0, which is when temperature is about 70 degrees F (the value of offset).
So we can interpret the quantities in marginal_inter as log odds.
To convert them to probabilities, I'll use the following function, which transforms the quantities in a Pmf by applying a given function:
End of explanation
marginal_probs = transform(marginal_inter, expit)
Explanation: If we call transform and pass expit as a parameter, it transforms the log odds in marginal_inter into probabilities and returns the posterior distribution of inter expressed in terms of probabilities.
End of explanation
marginal_probs = marginal_inter.transform(expit)
Explanation: Pmf provides a transform method that does the same thing.
End of explanation
marginal_probs.plot(color='C1')
decorate(xlabel='Probability of damage at 70 deg F',
ylabel='PDF',
title='Posterior marginal distribution of probabilities')
Explanation: Here's the posterior distribution for the probability of damage at 70 degrees F.
End of explanation
mean_prob = marginal_probs.mean()
mean_prob
Explanation: The mean of this distribution is about 22%, which is the probability of damage at 70 degrees F, according to the model.
End of explanation
marginal_lr = marginal_slope.transform(np.exp)
Explanation: This result shows the second reason I defined x to be zero when temperature is 70 degrees F; this way, the intercept corresponds to the probability of damage at a relevant temperature, rather than 0 degrees F.
Now let's look more closely at the estimated slope. In the logistic model, the parameter $\beta_1$ is the log of the likelihood ratio.
So we can interpret the quantities in marginal_slope as log likelihood ratios, and we can use exp to transform them to likelihood ratios (also known as Bayes factors).
End of explanation
marginal_lr.plot(color='C3')
decorate(xlabel='Likelihood ratio of 1 deg F',
ylabel='PDF',
title='Posterior marginal distribution of likelihood ratios')
mean_lr = marginal_lr.mean()
mean_lr
Explanation: The result is the posterior distribution of likelihood ratios; here's what it looks like.
End of explanation
expit(marginal_inter.mean()), marginal_probs.mean()
Explanation: The mean of this distribution is about 0.75, which means that each additional degree Fahrenheit provides evidence against the possibility of damage, with a likelihood ratio (Bayes factor) of 0.75.
Notice:
I computed the posterior mean of the probability of damage at 70 deg F by transforming the marginal distribution of the intercept to the marginal distribution of probability, and then computing the mean.
I computed the posterior mean of the likelihood ratio by transforming the marginal distribution of slope to the marginal distribution of likelihood ratios, and then computing the mean.
This is the correct order of operations, as opposed to computing the posterior means first and then transforming them.
To see the difference, let's compute both values the other way around.
Here's the posterior mean of marginal_inter, transformed to a probability, compared to the mean of marginal_probs.
End of explanation
np.exp(marginal_slope.mean()), marginal_lr.mean()
Explanation: And here's the posterior mean of marginal_slope, transformed to a likelihood ratio, compared to the mean marginal_lr.
End of explanation
np.random.seed(17)
sample = posterior_pmf.choice(101)
Explanation: In this example, the differences are not huge, but they can be.
As a general rule, transform first, then compute summary statistics.
Predictive Distributions
In the logistic model, the parameters are interpretable, at least after transformation. But often what we care about are predictions, not parameters. In the Space Shuttle problem, the most important prediction is, "What is the probability of O-ring damage if the outside temperature is 31 degrees F?"
To make that prediction, I'll draw a sample of parameter pairs from the posterior distribution.
End of explanation
sample.shape
sample.dtype
type(sample[0])
Explanation: The result is an array of 101 tuples, each representing a possible pair of parameters.
I chose this sample size to make the computation fast.
Increasing it would not change the results much, but they would be a little more precise.
End of explanation
temps = np.arange(31, 83)
xs = temps - offset
Explanation: To generate predictions, I'll use a range of temperatures from 31 degrees F (the temperature when the Challenger launched) to 82 degrees F (the highest observed temperature).
End of explanation
pred = np.empty((len(sample), len(xs)))
for i, (slope, inter) in enumerate(sample):
pred[i] = expit(inter + slope * xs)
Explanation: The following loop uses xs and the sample of parameters to construct an array of predicted probabilities.
End of explanation
for ps in pred:
plt.plot(temps, ps, color='C1', lw=0.5, alpha=0.4)
plot_data(data)
Explanation: The result has one column for each value in xs and one row for each element of sample.
To get a quick sense of what the predictions look like, we can loop through the rows and plot them.
End of explanation
low, median, high = np.percentile(pred, [5, 50, 95], axis=0)
Explanation: The overlapping lines in this figure give a sense of the most likely value at each temperature and the degree of uncertainty.
In each column, I'll compute the median to quantify the central tendency and a 90% credible interval to quantify the uncertainty.
np.percentile computes the given percentiles; with the argument axis=0, it computes them for each column.
End of explanation
plt.fill_between(temps, low, high, color='C1', alpha=0.2)
plt.plot(temps, median, color='C1', label='logistic model')
plot_data(data)
Explanation: The results are arrays containing predicted probabilities for the lower bound of the 90% CI, the median, and the upper bound of the CI.
Here's what they look like:
End of explanation
low = pd.Series(low, temps)
median = pd.Series(median, temps)
high = pd.Series(high, temps)
t = 80
print(median[t], (low[t], high[t]))
t = 60
print(median[t], (low[t], high[t]))
t = 31
print(median[t], (low[t], high[t]))
Explanation: According to these results, the probability of damage to the O-rings at 80 degrees F is near 2%, but there is some uncertainty about that prediction; the upper bound of the CI is around 10%.
At 60 degrees, the probability of damage is near 80%, but the CI is even wider, from 48% to 97%.
But the primary goal of the model is to predict the probability of damage at 31 degrees F, and the answer is at least 97%, and more likely to be more than 99.9%.
End of explanation
# Solution
prior_log_odds = np.log(4)
prior_log_odds
# Solution
lr1 = np.log(7/5)
lr2 = np.log(3/5)
lr3 = np.log(9/5)
lr1, lr2, lr3
# Solution
# In total, these three outcomes provide evidence that the
# pundit's algorithm is legitmate, although with K=1.8,
# it is weak evidence.
posterior_log_odds = prior_log_odds + lr1 + lr2 + lr3
posterior_log_odds
Explanation: One conclusion we might draw is this: If the people responsible for the Challenger launch had taken into account all of the data, and not just the seven damage incidents, they could have predicted that the probability of damage at 31 degrees F was nearly certain. If they had, it seems likely they would have postponed the launch.
At the same time, if they considered the previous figure, they might have realized that the model makes predictions that extend far beyond the data. When we extrapolate like that, we have to remember not just the uncertainty quantified by the model, which we expressed as a credible interval; we also have to consider the possibility that the model itself is unreliable.
This example is based on a logistic model, which assumes that each additional degree of temperature contributes the same amount of evidence in favor of (or against) the possibility of damage. Within a narrow range of temperatures, that might be a reasonable assumption, especially if it is supported by data. But over a wider range, and beyond the bounds of the data, reality has no obligation to stick to the model.
Empirical Bayes
In this chapter I used StatsModels to compute the parameters that maximize the probability of the data, and then used those estimates to choose the bounds of the uniform prior distributions.
It might have occurred to you that this process uses the data twice, once to choose the priors and again to do the update. If that bothers you, you are not alone.
The process I used is an example of what's called the Empirical Bayes method, although I don't think that's a particularly good name for it.
Although it might seem problematic to use the data twice, in these examples, it is not. To see why, consider an alternative: instead of using the estimated parameters to choose the bounds of the prior distribution, I could have used uniform distributions with much wider ranges.
In that case, the results would be the same; the only difference is that I would spend more time computing likelihoods for parameters where the posterior probabilities are negligibly small.
So you can think of this version of Empirical Bayes as an optimization that minimizes computation by putting the prior distributions where the likelihood of the data is worth computing.
This optimization doesn't affect the results, so it doesn't "double-count" the data.
Summary
So far we have seen three ways to represent degrees of confidence in a hypothesis: probability, odds, and log odds.
When we write Bayes's Rule in terms of log odds, a Bayesian update is the sum of the prior and the likelihood; in this sense, Bayesian statistics is the arithmetic of hypotheses and evidence.
This form of Bayes's Theorem is also the foundation of logistic regression, which we used to infer parameters and make predictions. In the Space Shuttle problem, we modeled the relationship between temperature and the probability of damage, and showed that the Challenger disaster might have been predictable. But this example is also a warning about the hazards of using a model to extrapolate far beyond the data.
In the exercises below you'll have a chance to practice the material in this chapter, using log odds to evaluate a political pundit and using logistic regression to model diagnosis rates for Attention Deficit Hyperactivity Disorder (ADHD).
In the next chapter we'll move from logistic regression to linear regression, which we will use to model changes over time in temperature, snowfall, and the marathon world record.
Exercises
Exercise: Suppose a political pundit claims to be able to predict the outcome of elections, but instead of picking a winner, they give each candidate a probability of winning.
With that kind of prediction, it can be hard to say whether it is right or wrong.
For example, suppose the pundit says that Alice has a 70% chance of beating Bob, and then Bob wins the election. Does that mean the pundit was wrong?
One way to answer this question is to consider two hypotheses:
H: The pundit's algorithm is legitimate; the probabilities it produces are correct in the sense that they accurately reflect the candidates' probabilities of winning.
not H: The pundit's algorithm is bogus; the probabilities it produces are random values with a mean of 50%.
If the pundit says Alice has a 70% chance of winning, and she does, that provides evidence in favor of H with likelihood ratio 70/50.
If the pundit says Alice has a 70% chance of winning, and she loses, that's evidence against H with a likelihood ratio of 50/30.
Suppose we start with some confidence in the algorithm, so the prior odds are 4 to 1. And suppose the pundit generates predictions for three elections:
In the first election, the pundit says Alice has a 70% chance of winning and she does.
In the second election, the pundit says Bob has a 30% chance of winning and he does.
In the third election, the pundit says Carol has an 90% chance of winning and she does.
What is the log likelihood ratio for each of these outcomes? Use the log-odds form of Bayes's Rule to compute the posterior log odds for H after these outcomes. In total, do these outcomes increase or decrease your confidence in the pundit?
If you are interested in this topic, you can read more about it in this blog post.
End of explanation
n = np.array([32690, 31238, 34405, 34565, 34977, 34415,
36577, 36319, 35353, 34405, 31285, 31617])
k = np.array([265, 280, 307, 312, 317, 287,
320, 309, 225, 240, 232, 243])
Explanation: Exercise: An article in the New England Journal of Medicine reports results from a study that looked at the diagnosis rate of Attention Deficit Hyperactivity Disorder (ADHD) as a function of birth month: "Attention Deficit–Hyperactivity Disorder and Month of School Enrollment".
They found that children born in June, July, and August were substantially more likely to be diagnosed with ADHD, compared to children born in September, but only in states that use a September cutoff for children to enter kindergarten. In these states, children born in August start school almost a year younger than children born in September. The authors of the study suggest that the cause is "age-based variation in behavior that may be attributed to ADHD rather than to the younger age of the children".
Use the methods in this chapter to estimate the probability of diagnosis as a function of birth month.
The notebook for this chapter provides the data and some suggestions for getting started.
The paper includes this figure:
<img width="500" src="https://www.nejm.org/na101/home/literatum/publisher/mms/journals/content/nejm/2018/nejm_2018.379.issue-22/nejmoa1806828/20190131/images/img_xlarge/nejmoa1806828_f1.jpeg">
In my opinion, this representation of the data does not show the effect as clearly as it could.
But the figure includes the raw data, so we can analyze it ourselves.
Note: there is an error in the figure, confirmed by personal correspondence:
The May and June [diagnoses] are reversed. May should be 317 (not 287) and June should be 287 (not 317).
So here is the corrected data, where n is the number of children born in each month, starting with January, and k is the number of children diagnosed with ADHD.
End of explanation
x = np.arange(12)
n = np.roll(n, -8)
k = np.roll(k, -8)
Explanation: First, I'm going to "roll" the data so it starts in September rather than January.
End of explanation
adhd = pd.DataFrame(dict(x=x, k=k, n=n))
adhd['rate'] = adhd['k'] / adhd['n'] * 10000
adhd
Explanation: And I'll put it in a DataFrame with one row for each month and the diagnosis rate per 10,000.
End of explanation
def plot_adhd(adhd):
plt.plot(adhd['x'], adhd['rate'], 'o',
label='data', color='C0', alpha=0.4)
plt.axvline(5.5, color='gray', alpha=0.2)
plt.text(6, 64, 'Younger than average')
plt.text(5, 64, 'Older than average', horizontalalignment='right')
decorate(xlabel='Birth date, months after cutoff',
ylabel='Diagnosis rate per 10,000')
plot_adhd(adhd)
Explanation: Here's what the diagnosis rates look like.
End of explanation
qs = np.linspace(-5.2, -4.6, num=51)
prior_inter = make_uniform(qs, 'Intercept')
qs = np.linspace(0.0, 0.08, num=51)
prior_slope = make_uniform(qs, 'Slope')
Explanation: For the first 9 months, from September to May, we see what we would expect if some of the excess diagnoses are due to "age-based variation in behavior". For each month of difference in age, we see an increase in the number of diagnoses.
This pattern breaks down for the last three months, June, July, and August. This might be explained by random variation, but it also might be due to parental manipulation; if some parents hold back children born near the deadline, the observations for these month would include a mixture of children who are relatively old for their grade and therefore less likely to be diagnosed.
Unfortunately, the dataset includes only month of birth, not year, so we don't know the actual ages of these students when they started school. However, we can use the first nine months to estimate the effect of age on diagnosis rate; then we can think about what to do with the other three months.
Use the methods in this chapter to estimate the probability of diagnosis as a function of birth month.
Start with the following prior distributions.
End of explanation
# Solution
joint = make_joint(prior_inter, prior_slope)
joint.head()
# Solution
joint_pmf = Pmf(joint.stack())
joint_pmf.head()
# Solution
num_legit = 9
adhd1 = adhd.loc[0:num_legit-1]
adhd2 = adhd.loc[num_legit:]
adhd1
# Solution
adhd2
# Solution
from scipy.stats import binom
likelihood1 = joint_pmf.copy()
xs = adhd1['x']
ks = adhd1['k']
ns = adhd1['n']
for slope, inter in joint_pmf.index:
ps = expit(inter + slope * xs)
likes = binom.pmf(ks, ns, ps)
likelihood1[slope, inter] = likes.prod()
likelihood1.sum()
# Solution
# This update uses the binomial survival function to compute
# the probability that the number of cases *exceeds* `ks`.
likelihood2 = joint_pmf.copy()
xs = adhd2['x']
ks = adhd2['k']
ns = adhd2['n']
for slope, inter in joint_pmf.index:
ps = expit(inter + slope * xs)
likes = binom.sf(ks, ns, ps)
likelihood2[slope, inter] = likes.prod()
likelihood2.sum()
# Solution
posterior_pmf = joint_pmf * likelihood1
posterior_pmf.normalize()
# Solution
posterior_pmf.max_prob()
# Solution
posterior_pmf = joint_pmf * likelihood1 * likelihood2
posterior_pmf.normalize()
# Solution
posterior_pmf.max_prob()
# Solution
joint_posterior = posterior_pmf.unstack()
plot_contour(joint_posterior)
decorate(title='Joint posterior distribution')
# Solution
marginal_inter = marginal(joint_posterior, 0)
marginal_slope = marginal(joint_posterior, 1)
marginal_inter.mean(), marginal_slope.mean()
# Solution
marginal_inter.plot(color='C4')
decorate(xlabel='Intercept',
ylabel='PDF',
title='Posterior marginal distribution of intercept')
# Solution
marginal_slope.plot(color='C2')
decorate(xlabel='Slope',
ylabel='PDF',
title='Posterior marginal distribution of slope')
# Solution
sample = posterior_pmf.choice(101)
xs = adhd['x']
ps = np.empty((len(sample), len(xs)))
for i, (slope, inter) in enumerate(sample):
ps[i] = expit(inter + slope * xs)
ps.shape
# Solution
low, median, high = np.percentile(ps, [2.5, 50, 97.5], axis=0)
median
# Solution
plt.fill_between(xs, low*10000, high*10000,
color='C1', alpha=0.2)
plt.plot(xs, median*10000, label='model',
color='C1', alpha=0.5)
plot_adhd(adhd)
Explanation: Make a joint prior distribution and update it using the data for the first nine months.
Then draw a sample from the posterior distribution and use it to compute the median probability of diagnosis for each month and a 90% credible interval.
As a bonus exercise, do a second update using the data from the last three months, but treating the observed number of diagnoses as a lower bound on the number of diagnoses there would be if no children were kept back.
End of explanation |
2,320 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
Step1: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
Step2: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
Step3: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
Step4: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
Step5: Splitting the data into training, testing, and validation sets
We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters
Step8: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method know as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of epochs
This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
Step9: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
Step10: Thinking about your results
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
Note | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
rides[:24*10].plot(x='dteday', y='cnt')
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
# Save the last 21 days
test_data = data[-21*24:]
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
Explanation: Splitting the data into training, testing, and validation sets
We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
# Hold out the last 60 days of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.input_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5,
(self.output_nodes, self.hidden_nodes))
self.lr = learning_rate
#### Set this to your implemented sigmoid function ####
# Activation function is the sigmoid function
self.activation_function = self.sigmoid
def sigmoid(self, x):
return 1 / (1 + np.exp(-x))
def train(self, inputs_list, targets_list):
# Convert inputs list to 2d array
inputs = np.array(inputs_list, ndmin=2).T
targets = np.array(targets_list, ndmin=2).T
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error
output_errors = targets - final_outputs # Output layer error is the difference between desired target and actual output.
# TODO: Backpropagated error
hidden_errors = np.dot(self.weights_hidden_to_output.T, output_errors) # errors propagated to the hidden layer
hidden_grad = hidden_outputs * (1 - hidden_outputs) # hidden layer gradients
# TODO: Update the weights
self.weights_hidden_to_output += self.lr * np.dot(output_errors, hidden_outputs.T) # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += self.lr * np.dot(hidden_errors * hidden_grad, inputs.T) # update input-to-hidden weights with gradient descent step
def run(self, inputs_list):
# Run a forward pass through the network
inputs = np.array(inputs_list, ndmin=2).T
#### Implement the forward pass here ####
# TODO: Hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs) # signals into final output layer
final_outputs = final_inputs # self.activation_function(final_inputs) # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
import sys
### Set the hyperparameters here ###
epochs = 5000
learning_rate = 0.01
hidden_nodes = 20
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for e in range(epochs):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
for record, target in zip(train_features.ix[batch].values,
train_targets.ix[batch]['cnt']):
network.train(record, target)
# Printing out the training progress
train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
plt.ylim(ymax=0.5)
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method know as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of epochs
This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
End of explanation
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features)*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation
import unittest
inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
[-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328, -0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, 0.39775194, -0.29887597],
[-0.20185996, 0.50074398, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
Explanation: Thinking about your results
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
Note: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter
Your answer below
Unit tests
Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project.
End of explanation |
2,321 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Rate distributions
Step1: A function to simulate trajectories.
Step2: Simulation of a large number of events
Generate a large results table.
Step3: In this case, the info we are interested in is the rate column - this has not had noise added to it. The rates are drawn from a normal distribution centred at 80 nt/s, with a standard deviation of 40 nt/s. Each row is implicitly one second, and the distance travelled in each time step is therefore numerically equal to the rate.
Time-weighted rate distribution
Make a normal distribution with sigma and mu the same as those specified in the simulator.
Step4: Plot the rate distribution and overlay the above pdf.
Step5: A key point to note here is that rates are implicitly time-weighted. If you were to fit line segments and make each segment an equal-contributor to the rate distribution, then the distribution would look quite different.
Distance-weighted rate distribution
Step6: Not a very good fit. Distance weighting shifts the distribution to the right, and also narrows it. So what <i>is</i> a good fit?
Let's change sigma and mu to something more likely to fit well
Step7: This fit is OK for a guess, but how can we get there from first principles? Let's multiply by x and normalise, i.e. the same thing we did to the histogram.
Step8: Perfect, but so far this is merely a tautology | Python Code:
import matplotlib as mpl
import matplotlib.pyplot as plt
import random
import numpy as np
import beadpy
import pandas as pd
import math
%matplotlib inline
Explanation: Rate distributions: Time vs distance-weighted
End of explanation
def trajectory_simulator(pre_duration = 250, #Mean event start time
pre_sigma = 50, #Sigma of event start time distribution
post_duration = 250, #The bead stays on for this long at the end of the trajectory
mean_duration = 100, #Mean event duration
min_duration = 10, #Minimum event duration
mean_rate = 500, #Mean rate (distance units/timestep)
rate_sigma = 50, #Sigma of the rate distribution
noise_sigma = 500, #Mean sigma for the bead movement
noise_sigma_sigma = 100, #Sigma of the noise sigma distribution
pause_prob = 0.001, #Probability of entering a pause in a given timestep
pause_duration_prob = 0.2, #Probability of remaining paused in a given timestep once a pause has begun.
rate_change_prob = 0.1, #Probablity that the rate will change in a given timestep
DNA_length = 15000, #Length of the DNA - a hard limit on the event length
trajectory_number = 0):
length = int(np.random.exponential(mean_duration)) #Length is drawn from an exponential distribution.
while length < min_duration:
length = int(np.random.exponential(mean_duration)) #The length should be at least a certain value.
current_rate = 0
pre = int(np.random.normal(loc=pre_duration, scale = pre_sigma))
post = post_duration
rate = 0
ratesequence = [0]*pre
noise_sigmaval = int(np.random.normal(loc=noise_sigma, scale = noise_sigma_sigma))
position = [0]*pre
nucleotides = []
current_position = 0
for i in range(0,pre):
nucleotides.append(float(position[i]+np.random.normal(loc=0.0, scale = noise_sigmaval)))
for i in range(0,length):
randomnumber = random.random() #generate a random float between 0 and 1
if i == 0: #Start the event
rate = np.random.normal(loc=mean_rate, scale = rate_sigma)
elif not rate == 0: #When during an event/no pause.
if (randomnumber <= pause_prob): #Start a pause.
rate = 0
elif (randomnumber > pause_prob) & (randomnumber <= (pause_prob + rate_change_prob)): #Change the rate
rate = np.random.normal(loc=mean_rate, scale = rate_sigma)
else: #No rate change
rate = rate #just FYI!
elif (rate == 0) & (not i ==0): #When in a pause.
if (randomnumber < (1- pause_duration_prob)): #End the pause.
rate = np.random.normal(loc=mean_rate, scale = rate_sigma)
else:
rate = 0 #Continue the pause.
ratesequence.append(rate)
current_position = current_position + rate
position.append(current_position)
nucleotides.append(float(current_position+np.random.normal(loc=0.0, scale = noise_sigmaval)))
if current_position > DNA_length:
length = i
break
for i in range(0,post):
ratesequence.append(0)
position.append(current_position)
nucleotides.append(float(current_position+np.random.normal(loc=0.0, scale = noise_sigmaval)))
time = range(0,len(nucleotides))
results = pd.DataFrame({'time' : time,
'nucleotides' : nucleotides,
'rate' : ratesequence,
'position' : position})
results['trajectory'] = trajectory_number
return results
Explanation: A function to simulate trajectories.
End of explanation
phi29results = pd.DataFrame()
for j in range(0,1000):
temp = trajectory_simulator(pre_duration = 300,
pre_sigma = 20,
post_duration = 250,
mean_duration = 100,
min_duration = 10,
mean_rate = 80,
rate_sigma = 40,
noise_sigma = 100,
noise_sigma_sigma = 20,
pause_prob = 0.1,
pause_duration_prob = 0.5,
rate_change_prob = 0.01,
DNA_length = 15000,
trajectory_number = j)
phi29results = phi29results.append(temp)
phi29results.head()
Explanation: Simulation of a large number of events
Generate a large results table.
End of explanation
def pdf(sigma, mu, x_range):
x = x_range
y = []
for i in x:
y.append((1/(math.sqrt(2*3.14159*sigma**2)))*(math.exp(-1*(((i - (mu))**2) / (2*sigma**2)))))
return x,y
x,y = pdf(40, 80,np.arange(0,250,1.0))
Explanation: In this case, the info we are interested in is the rate column - this has not had noise added to it. The rates are drawn from a normal distribution centred at 80 nt/s, with a standard deviation of 40 nt/s. Each row is implicitly one second, and the distance travelled in each time step is therefore numerically equal to the rate.
Time-weighted rate distribution
Make a normal distribution with sigma and mu the same as those specified in the simulator.
End of explanation
plt.hist(phi29results.rate[phi29results.rate > 0], bins = 30, normed = True)
plt.plot(x,y,color="red", lw = 4)
plt.title("Time-weighted rate distribution")
plt.xlabel("Rate nt/s")
plt.ylabel("Time (fractional)")
Explanation: Plot the rate distribution and overlay the above pdf.
End of explanation
plt.hist(phi29results.rate[phi29results.rate > 0], bins = 30, weights = phi29results.rate[phi29results.rate > 0], normed = True)
plt.plot(x,y,color="red", lw = 4)
plt.title("distance-weighted rate distribution")
plt.xlabel("Rate nt/s")
plt.ylabel("Nucleotides synthesised (fractional)")
Explanation: A key point to note here is that rates are implicitly time-weighted. If you were to fit line segments and make each segment an equal-contributor to the rate distribution, then the distribution would look quite different.
Distance-weighted rate distribution
End of explanation
x,y = pdf(35, 98,np.arange(0,250,1.0))
plt.hist(phi29results.rate[phi29results.rate > 0], bins = 30, weights = phi29results.rate[phi29results.rate > 0], normed = True)
plt.plot(x,y,color="red",lw=4)
plt.title("distance-weighted rate distribution")
plt.xlabel("Rate nt/s")
plt.ylabel("Nucleotides synthesised (fractional)")
Explanation: Not a very good fit. Distance weighting shifts the distribution to the right, and also narrows it. So what <i>is</i> a good fit?
Let's change sigma and mu to something more likely to fit well:
End of explanation
x,y = pdf(40, 80,np.arange(0,250,1.0)) #regenerate the original pdf with sigma = 40 and mu = 80
a = []
for i in range(0,len(x)):
a.append(y[i]*i) #Multiply y by x
asum = sum(a)
z = []
for i in a:
z.append(i/asum) #Normalise
plt.plot(x,y,color="red")
plt.plot(x,z,color="green")
plt.hist(phi29results.rate[phi29results.rate > 0], bins = 30, weights = phi29results.rate[phi29results.rate > 0], normed = True)
plt.plot(x,z,color="green",lw=4)
plt.title("distance-weighted rate distribution")
plt.xlabel("Rate nt/s")
plt.ylabel("Nucleotides synthesised (fractional)")
Explanation: This fit is OK for a guess, but how can we get there from first principles? Let's multiply by x and normalise, i.e. the same thing we did to the histogram.
End of explanation
def weighted_pdf(sigma, mu, x_range):
x = x_range
a = []
for i in x:
a.append(((i/(math.sqrt(2*3.14159*sigma**2)))*(math.exp(-1*(((i - (mu))**2) / (2*sigma**2))))))
#Note that x is now in the numerator of the first part of the Gaussian function.
y = []
for i in a:
y.append(i/sum(a))
return x,y
x,y = pdf(40,80,np.arange(0,250,1.0))
x, z = weighted_pdf(40,80,np.arange(0,250,1.0))
plt.plot(x,y,color="red")
plt.plot(x,z,color="green")
Explanation: Perfect, but so far this is merely a tautology: Making the equivalent adjustment to the pdf and the histogram it describes means that they will continue to fit to each other. Let's summarise the the adjustment we have made in a function which describes our new pdf:
End of explanation |
2,322 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
6 - GSTools
With version 0.5 scikit-gstat offers an interface to the awesome gstools library. This way, you can use a Variogram estimated with scikit-gstat in gstools to perform random field generation, kriging and much, much more.
For a Variogram instance, there are three possibilities to export into gstools
Step1: In the example, gstools.variogram.vario_estimate is used to estimate the empirical variogram
Step2: And finally, the exact same code from the GSTools docs can be called
Step3: 6.1.2 bin_center=False
It is important to understand, that gstools and skgstat are handling lag bins different. While skgstat uses the upper limit, gstools assumes the bin center. This can have implications, if a model is fitted. Consider the example below, in which only the bin_center setting is different.
Step4: Notice the considerable gap between the two model functions. This can already lead to seroius differences, i.e. in Kriging.
6.1.3 Using other arguments
Now, with the example from the GSTools docs working, we can start chaning the arguments to create quite different empirical variograms.
Note
Step5: If you fit the gs.Stable with a nugget, it fits quite well. But keep in mind that this does not necessarily describe the original field very well and was just fitted for demonstration.
6.2 to_gstools
The second possible interface to gstools is the Variogram.to_gstools function. This will return one of the classes listed in the gstools documentation. The variogram parameters are extracted and passed to gstools. You should be able to use it, just like any other CovModel.
However, there are a few things to consider
Step6: Now export the model to gstools
Step7: Note
Step8: Keep in mind, that we did not call a Kriging procedure, but created another field.
Of course, we can do the same thing with the more customized model, created in 6.1.3
Step9: Notice how the spatial properties as well as the value range has changed. That's why it is important to estimate Variogram or CovModel carefully and not let the GIS do that for you somewhere hidden in the dark.
6.3 to_gs_krige
Finally, after carefully esitmating and fitting a variogram using SciKit-GStat, you can also export it directly into a GSTools Krige instance. We use the variogram as in the other sections | Python Code:
# import
import skgstat as skg
import gstools as gs
import numpy as np
import matplotlib.pyplot as plt
import plotly.offline as pyo
import warnings
pyo.init_notebook_mode()
warnings.filterwarnings('ignore')
# use the example from gstools
# generate a synthetic field with an exponential model
x = np.random.RandomState(19970221).rand(1000) * 100.
y = np.random.RandomState(20011012).rand(1000) * 100.
model = gs.Exponential(dim=2, var=2, len_scale=8)
srf = gs.SRF(model, mean=0, seed=19970221)
field = srf((x, y))
# combine x and y for use in skgstat
coords = np.column_stack((x, y))
Explanation: 6 - GSTools
With version 0.5 scikit-gstat offers an interface to the awesome gstools library. This way, you can use a Variogram estimated with scikit-gstat in gstools to perform random field generation, kriging and much, much more.
For a Variogram instance, there are three possibilities to export into gstools:
Variogram.get_empirical(bin_center=True) returns a pair of distance lag bins and experimental semi-variance values, like gstools.variogram.vario_estimate.
Variogram.to_gstools returns a parameterized CovModel derived from the Variogram.
Variogram.to_gs_krige returns a GSTools Krige instance based on the variogram
6.1 get_empirical
6.1.1 Reproducing the gstools example
You can reproduce the Getting Started example for variogram estimation from GSTools docs with scikit-gstat, and replace the calculation of the empirical variogram with skg.Variogram.
Note: This does only make sense if you want to use a distance metric, binning procedure or semi-variance estimator, that is not included in gstools or are bound to scikit-gstat for any other reason. Variogram will always perform a full model fitting cycle on instantiation, which could lead to some substantial overhead here.
This behavior might change in a future version of scikit-gstat.
End of explanation
V = skg.Variogram(coords, field, n_lags=21, estimator='matheron', maxlag=45, bin_func='even')
bin_center, gamma = V.get_empirical(bin_center=True)
Explanation: In the example, gstools.variogram.vario_estimate is used to estimate the empirical variogram:
```Python
estimate the variogram of the field
bin_center, gamma = gs.vario_estimate((x, y), field)
```
Here, we can use skg.Variogram. From the shown arguments, estimator and bin_func are using the default values:
End of explanation
%matplotlib inline
# fit the variogram with a stable model. (no nugget fitted)
fit_model = gs.Stable(dim=2)
fit_model.fit_variogram(bin_center, gamma, nugget=False)
# output
ax = fit_model.plot(x_max=max(bin_center))
ax.scatter(bin_center, gamma)
print(fit_model)
Explanation: And finally, the exact same code from the GSTools docs can be called:
End of explanation
bin_edges, _ = V.get_empirical(bin_center=False)
# fit the variogram with a stable model. (no nugget fitted)
edge_model = gs.Stable(dim=2)
_ = edge_model.fit_variogram(bin_edges, gamma, nugget=False)
fig, axes = plt.subplots(1,2, figsize=(12,4))
# plot first
fit_model.plot(ax=axes[1], label='center=True')
# plot second
edge_model.plot(ax=axes[1], label='center=False')
# bins
axes[0].scatter(bin_center, gamma, label='center=True')
axes[0].scatter(bin_edges, gamma, label='center=False')
axes[0].set_title('Empirical Variogram')
axes[1].set_title('Variogram Model')
axes[0].legend(loc='lower right')
print(fit_model)
print(edge_model)
Explanation: 6.1.2 bin_center=False
It is important to understand, that gstools and skgstat are handling lag bins different. While skgstat uses the upper limit, gstools assumes the bin center. This can have implications, if a model is fitted. Consider the example below, in which only the bin_center setting is different.
End of explanation
V = skg.Variogram(coords, field, n_lags=15, estimator='dowd', maxlag=45, bin_func='uniform', dist_func='cityblock')
bin_center, gamma = V.get_empirical(bin_center=True)
# fit the variogram with a stable model. (no nugget fitted)
fit_model = gs.Stable(dim=2)
fit_model.fit_variogram(bin_center, gamma, nugget=True)
# output
ax = fit_model.plot(x_max=max(bin_center))
ax.scatter(bin_center, gamma)
print(fit_model)
Explanation: Notice the considerable gap between the two model functions. This can already lead to seroius differences, i.e. in Kriging.
6.1.3 Using other arguments
Now, with the example from the GSTools docs working, we can start chaning the arguments to create quite different empirical variograms.
Note: This should just illustrate the available possibilities, the result is by no means producing a better estimate of the initially created Gaussian random field.
In this example different things will be changed:
use only 15 lag classes, but distribute the point pairs equally. Note the differing widths of the classes. (bin_func='uniform')
The Dowd estimator is used. (estimator='dowd')
The Taxicab metric (aka. Manhattan metric or cityblock metric) is used over Euklidean for no obvious reason. (dist_func='cityblock')
End of explanation
skg.plotting.backend('plotly')
V = skg.Variogram(coords, field, n_lags=21, estimator='matheron', model='exponential', maxlag=45, bin_func='even')
fig = V.plot(show=False)
pyo.iplot(fig)
Explanation: If you fit the gs.Stable with a nugget, it fits quite well. But keep in mind that this does not necessarily describe the original field very well and was just fitted for demonstration.
6.2 to_gstools
The second possible interface to gstools is the Variogram.to_gstools function. This will return one of the classes listed in the gstools documentation. The variogram parameters are extracted and passed to gstools. You should be able to use it, just like any other CovModel.
However, there are a few things to consider:
skgstat can only export isotropic models.
The 'harmonize' cannot be exported
6.2.1 exporting Variogram
In this example, the same Variogram from above is estimated, but we use the 'exponential' model. An exponential covariance function was used in the first place to create the field that was sampled.
End of explanation
exp_model = V.to_gstools()
print(exp_model)
# get the empirical for the plot as well
bins, gamma = V.get_empirical(bin_center=True)
ax = exp_model.plot(x_max=45)
ax.scatter(bins, gamma)
Explanation: Now export the model to gstools:
End of explanation
x = y = range(100)
new_field = gs.SRF(exp_model, seed=13062018)
new_field.structured([x, y])
new_field.plot()
Explanation: Note: It is important to understand, that skgstat and gstools handle coordinates slightly different. If you export the Variogram to a CovModel and you want to use the Variogram.coordinates, you must transpose them.
```Python
variogram is a skgstat.Variogram instance
model = variogram.to_gstools()
cond_pos = variogram.coordinates.T
use i.e. in Kriging
krige = gs.krige.Ordinary(model, cond_pos, variogram.values)
```
6.2.2 Spatial Random Field Generation
With a CovModel, we can use any of the great tools implemented in gstools. First, let's create another random field with the exponential model that we exported in the last section:
End of explanation
malformed = gs.SRF(fit_model, seed=24092013)
malformed.structured([x,y])
malformed.plot()
Explanation: Keep in mind, that we did not call a Kriging procedure, but created another field.
Of course, we can do the same thing with the more customized model, created in 6.1.3:
End of explanation
# export
krige = V.to_gs_krige(unbiased=True) # will result in ordinary kriging
print(krige)
# create a regular grid
x = y = range(100)
# interpolate
result, sigma = krige.structured((x, y))
fig, axes = plt.subplots(1, 2, figsize=(8, 4))
# plot
axes[0].imshow(result, origin='lower')
axes[1].imshow(sigma, origin='lower', cmap='RdYlGn_r')
# label
axes[0].set_title('Kriging')
axes[1].set_title('Error Variance')
plt.tight_layout()
Explanation: Notice how the spatial properties as well as the value range has changed. That's why it is important to estimate Variogram or CovModel carefully and not let the GIS do that for you somewhere hidden in the dark.
6.3 to_gs_krige
Finally, after carefully esitmating and fitting a variogram using SciKit-GStat, you can also export it directly into a GSTools Krige instance. We use the variogram as in the other sections:
End of explanation |
2,323 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Explanation of observed subconvergence
Subconvergence has been observed when MESing operators which multiplies with $\frac{1}{J}$. In these cases, the error is dominant in the first inner point. Here we will try to explain this observation.
Step1: The centered difference
The centered finite differecnce approximation can be found by combining the forward and backward finite differences, evaluated half between grid points.
Step2: We see that the centered finite difference (CFD) approximation has an expected convergence order of $2$.
Mutliplying the FD approximation $\partial_x f$ with $1/J$ yields (in cylindrical coordinates)
Step3: In the first inner point $x=\frac{h}{2}$, so we get | Python Code:
%matplotlib notebook
from IPython.display import display
from sympy import Function, S, Eq
from sympy import symbols, init_printing, simplify, Limit
from sympy import sin, cos, tanh, exp, pi, sqrt
from boutdata.mms import x
# Import common
import os, sys
# If we add to sys.path, then it must be an absolute path
common_dir = os.path.abspath('./../../common')
# Sys path is a list of system paths
sys.path.append(common_dir)
from CELMAPy.MES import get_metric
init_printing()
Explanation: Explanation of observed subconvergence
Subconvergence has been observed when MESing operators which multiplies with $\frac{1}{J}$. In these cases, the error is dominant in the first inner point. Here we will try to explain this observation.
End of explanation
# Symbols to easen printing
symFW, symBW, symCFD = symbols('FW, BW, CFD')
x0, h = symbols('x0, h')
f = Function('f')
FW = f(x+h/2).series(x+h/2, x0=x0, n=4)
FW = FW.subs(x-x0,0)
display(Eq(symFW,FW))
BW = f(x-h/2).series(x-h/2, x0=x0, n=4)
BW = BW.subs(x-x0,0)
display(Eq(symBW,BW))
display(Eq(symFW-symBW,FW - BW))
CFD = simplify((FW-BW)/h)
display(Eq(symCFD, CFD))
Explanation: The centered difference
The centered finite differecnce approximation can be found by combining the forward and backward finite differences, evaluated half between grid points.
End of explanation
metric = get_metric()
invJCFD = CFD*(1/metric.J)
display(invJCFD)
Explanation: We see that the centered finite difference (CFD) approximation has an expected convergence order of $2$.
Mutliplying the FD approximation $\partial_x f$ with $1/J$ yields (in cylindrical coordinates)
End of explanation
# Cannot have to identical symbols in the order, so we do a workaround
firstInnerJ = (1/metric.J).subs(x,h/2)
invJCFDFirstInner = simplify(CFD*firstInnerJ)
display(invJCFDFirstInner)
Explanation: In the first inner point $x=\frac{h}{2}$, so we get
End of explanation |
2,324 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmos
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required
Step9: 2.2. Canonical Horizontal Resolution
Is Required
Step10: 2.3. Range Horizontal Resolution
Is Required
Step11: 2.4. Number Of Vertical Levels
Is Required
Step12: 2.5. High Top
Is Required
Step13: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required
Step14: 3.2. Timestep Shortwave Radiative Transfer
Is Required
Step15: 3.3. Timestep Longwave Radiative Transfer
Is Required
Step16: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required
Step17: 4.2. Changes
Is Required
Step18: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required
Step19: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required
Step20: 6.2. Scheme Method
Is Required
Step21: 6.3. Scheme Order
Is Required
Step22: 6.4. Horizontal Pole
Is Required
Step23: 6.5. Grid Type
Is Required
Step24: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required
Step25: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required
Step26: 8.2. Name
Is Required
Step27: 8.3. Timestepping Type
Is Required
Step28: 8.4. Prognostic Variables
Is Required
Step29: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required
Step30: 9.2. Top Heat
Is Required
Step31: 9.3. Top Wind
Is Required
Step32: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required
Step33: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required
Step34: 11.2. Scheme Method
Is Required
Step35: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required
Step36: 12.2. Scheme Characteristics
Is Required
Step37: 12.3. Conserved Quantities
Is Required
Step38: 12.4. Conservation Method
Is Required
Step39: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required
Step40: 13.2. Scheme Characteristics
Is Required
Step41: 13.3. Scheme Staggering Type
Is Required
Step42: 13.4. Conserved Quantities
Is Required
Step43: 13.5. Conservation Method
Is Required
Step44: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required
Step45: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required
Step46: 15.2. Name
Is Required
Step47: 15.3. Spectral Integration
Is Required
Step48: 15.4. Transport Calculation
Is Required
Step49: 15.5. Spectral Intervals
Is Required
Step50: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required
Step51: 16.2. ODS
Is Required
Step52: 16.3. Other Flourinated Gases
Is Required
Step53: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required
Step54: 17.2. Physical Representation
Is Required
Step55: 17.3. Optical Methods
Is Required
Step56: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required
Step57: 18.2. Physical Representation
Is Required
Step58: 18.3. Optical Methods
Is Required
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required
Step60: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required
Step61: 20.2. Physical Representation
Is Required
Step62: 20.3. Optical Methods
Is Required
Step63: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required
Step64: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required
Step65: 22.2. Name
Is Required
Step66: 22.3. Spectral Integration
Is Required
Step67: 22.4. Transport Calculation
Is Required
Step68: 22.5. Spectral Intervals
Is Required
Step69: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required
Step70: 23.2. ODS
Is Required
Step71: 23.3. Other Flourinated Gases
Is Required
Step72: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required
Step73: 24.2. Physical Reprenstation
Is Required
Step74: 24.3. Optical Methods
Is Required
Step75: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required
Step76: 25.2. Physical Representation
Is Required
Step77: 25.3. Optical Methods
Is Required
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required
Step79: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required
Step80: 27.2. Physical Representation
Is Required
Step81: 27.3. Optical Methods
Is Required
Step82: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required
Step83: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required
Step85: 30.2. Scheme Type
Is Required
Step86: 30.3. Closure Order
Is Required
Step87: 30.4. Counter Gradient
Is Required
Step88: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required
Step89: 31.2. Scheme Type
Is Required
Step90: 31.3. Scheme Method
Is Required
Step91: 31.4. Processes
Is Required
Step92: 31.5. Microphysics
Is Required
Step93: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required
Step94: 32.2. Scheme Type
Is Required
Step95: 32.3. Scheme Method
Is Required
Step96: 32.4. Processes
Is Required
Step97: 32.5. Microphysics
Is Required
Step98: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required
Step100: 34.2. Hydrometeors
Is Required
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required
Step102: 35.2. Processes
Is Required
Step103: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required
Step104: 36.2. Name
Is Required
Step105: 36.3. Atmos Coupling
Is Required
Step106: 36.4. Uses Separate Treatment
Is Required
Step107: 36.5. Processes
Is Required
Step108: 36.6. Prognostic Scheme
Is Required
Step109: 36.7. Diagnostic Scheme
Is Required
Step110: 36.8. Prognostic Variables
Is Required
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required
Step112: 37.2. Cloud Inhomogeneity
Is Required
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required
Step114: 38.2. Function Name
Is Required
Step115: 38.3. Function Order
Is Required
Step116: 38.4. Convection Coupling
Is Required
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required
Step118: 39.2. Function Name
Is Required
Step119: 39.3. Function Order
Is Required
Step120: 39.4. Convection Coupling
Is Required
Step121: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required
Step122: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required
Step123: 41.2. Top Height Direction
Is Required
Step124: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required
Step125: 42.2. Number Of Grid Points
Is Required
Step126: 42.3. Number Of Sub Columns
Is Required
Step127: 42.4. Number Of Levels
Is Required
Step128: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required
Step129: 43.2. Type
Is Required
Step130: 43.3. Gas Absorption
Is Required
Step131: 43.4. Effective Radius
Is Required
Step132: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required
Step133: 44.2. Overlap
Is Required
Step134: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required
Step135: 45.2. Sponge Layer
Is Required
Step136: 45.3. Background
Is Required
Step137: 45.4. Subgrid Scale Orography
Is Required
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required
Step139: 46.2. Source Mechanisms
Is Required
Step140: 46.3. Calculation Method
Is Required
Step141: 46.4. Propagation Scheme
Is Required
Step142: 46.5. Dissipation Scheme
Is Required
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required
Step144: 47.2. Source Mechanisms
Is Required
Step145: 47.3. Calculation Method
Is Required
Step146: 47.4. Propagation Scheme
Is Required
Step147: 47.5. Dissipation Scheme
Is Required
Step148: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required
Step149: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required
Step150: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required
Step151: 50.2. Fixed Value
Is Required
Step152: 50.3. Transient Characteristics
Is Required
Step153: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required
Step154: 51.2. Fixed Reference Date
Is Required
Step155: 51.3. Transient Method
Is Required
Step156: 51.4. Computation Method
Is Required
Step157: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required
Step158: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required
Step159: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'thu', 'ciesm', 'atmos')
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: THU
Source ID: CIESM
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:39
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection schemes name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Other Flourinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Other Flourinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other flourinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.2. Physical Reprenstation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
shallow convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
shallow convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation methodUo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-cloumns used to simulate sub-grid variability
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propogation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propogation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
solar constant transient characteristics (W m-2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation |
2,325 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment Analysis on Movie Reviews
Using Logistic Regression Model
0 - negative
1 - somewhat negative
2 - neutral
3 - somewhat positive
4 - positive
Load Libraries
Step1: Load & Read Datasets
Step2: Train Classifier
Step3: Create Submission | Python Code:
import nltk
import pandas as pd
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
Explanation: Sentiment Analysis on Movie Reviews
Using Logistic Regression Model
0 - negative
1 - somewhat negative
2 - neutral
3 - somewhat positive
4 - positive
Load Libraries
End of explanation
train = pd.read_csv('train.tsv', delimiter='\t')
test = pd.read_csv('test.tsv', delimiter='\t')
train.shape, test.shape
train.head()
test.head()
# unique sentiment labels
train.Sentiment.unique()
train.info()
train.Sentiment.value_counts()
train.Sentiment.value_counts() / train.Sentiment.count()
Explanation: Load & Read Datasets
End of explanation
X_train = train['Phrase']
y_train = train['Sentiment']
text_clf = Pipeline([
('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', LogisticRegression())
])
text_clf = text_clf.fit(X_train, y_train)
X_test = train['Phrase']
predicted = text_clf.predict(X_test)
print (np.mean(predicted == y_train))
test.info()
Explanation: Train Classifier
End of explanation
X_test = test['Phrase']
phraseIds = test['PhraseId']
predicted = text_clf.predict(X_test)
output = pd.DataFrame( data={"PhraseId":phraseIds, "Sentiment":predicted} )
#output.to_csv( "submission_logistic_regression.csv", index=False, quoting=3 )
Explanation: Create Submission
End of explanation |
2,326 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Creating a Clean Chart
Begin by importing the packages we'll use.
Step1: Data looks better naked
What in the world does that mean?
Slide and data presentation often refers back to Edward Tufte and his book "The Visual Display of Quantitative Information."
Define naked data this way
Step2: The simple pandas bar plot is sufficient for exploration...
Step3: But if we're showing this to the world, we can do better.
Let's make this chart prettier...
We need a color palette
These are the "Tableau 20" colors as RGB.
Step4: Scale the RGB values to the [0, 1] range, which is the format matplotlib accepts.
Step5: You typically want your plot to be ~1.33x wider than tall.
Common sizes
Step6: Remove the plot frame lines. They are unnecessary chartjunk.
Step7: Ensure that the axis ticks only show up on the bottom and left of the plot.
Ticks on the right and top of the plot are generally unnecessary chartjunk.
Step8: Set the labels
Step9: Limit the range of the plot to only where the data is.
Avoid unnecessary whitespace.
Step10: Make sure your axis ticks are large enough to be easily read.
You don't want your viewers squinting to read your plot.
Step11: Provide tick lines across the plot to help your viewers trace along the axis ticks.
Make sure that the lines are light and small so they don't obscure the primary data lines.
Step12: Remove the tick marks.
They are unnecessary with the tick lines we just plotted.
Step13: Now that the plot is prepared, it's time to actually plot the data!
Step14: matplotlib's title() call centers the title on the plot, but not the graph,
I used the text() call to customize where the title goes.
Make the title big enough so it spans the entire plot, but don't make it so big that it requires two lines to show.
Note that if the title is descriptive enough, it is unnecessary to include axis labels.
They are self-evident, in this plot's case.
Step15: Always include your data source(s) and copyright notice!
And for your data sources, tell your viewers exactly where the data came from,
preferably with a direct link to the data.
Just telling your viewers that you used data from the "U.S. Census Bureau" is completely useless
Step16: Finally, save the figure as a PNG.
You can also save it as a PDF, JPEG, etc.
Just change the file extension in this call.
bbox_inches="tight" removes all the extra whitespace on the edges of your plot. | Python Code:
import pandas as pd
import matplotlib.pyplot as plt
import pylab as pyl
# This is an example of an iPython magic command.
# If we don't use this, then we can't see our matplotlib plots in our notebook
%matplotlib inline
Explanation: 1. Creating a Clean Chart
Begin by importing the packages we'll use.
End of explanation
dfLetterFrequency = pd.read_csv('../data/letter_frequency.csv', header=None, index_col=0, names=['Frequency'])
Explanation: Data looks better naked
What in the world does that mean?
Slide and data presentation often refers back to Edward Tufte and his book "The Visual Display of Quantitative Information."
Define naked data this way:
Data-ink is the non-erasable core of the graphic, the non-redundant ink arranged in response to variation in the numbers represented
If we remove all non-data-ink and redundant data-ink, within reason, we should be left with an informative graphic that reflects sound graphical design
“Perfection is achieved not when there is nothing more to add, but when there is nothing left to take away”
– Antoine de Saint-Exupery
Build plots using simple data...
Load our data first
For this exercise, we'll use a simple dataset: letter frequency in the English language.
End of explanation
dfLetterFrequency.plot(kind='bar', figsize=(10,6))
Explanation: The simple pandas bar plot is sufficient for exploration...
End of explanation
tableau20 = [(31, 119, 180), (174, 199, 232), (255, 127, 14), (255, 187, 120),
(44, 160, 44), (152, 223, 138), (214, 39, 40), (255, 152, 150),
(148, 103, 189), (197, 176, 213), (140, 86, 75), (196, 156, 148),
(227, 119, 194), (247, 182, 210), (127, 127, 127), (199, 199, 199),
(188, 189, 34), (219, 219, 141), (23, 190, 207), (158, 218, 229)]
Explanation: But if we're showing this to the world, we can do better.
Let's make this chart prettier...
We need a color palette
These are the "Tableau 20" colors as RGB.
End of explanation
for i in range(len(tableau20)):
r, g, b = tableau20[i]
tableau20[i] = (r / 255., g / 255., b / 255.)
N = 26
ind = np.arange(N) # the x locations for the groups
width = 0.8 # the width of the bars
Explanation: Scale the RGB values to the [0, 1] range, which is the format matplotlib accepts.
End of explanation
pyl.figure(figsize=(12, 9))
Explanation: You typically want your plot to be ~1.33x wider than tall.
Common sizes: (10, 7.5) and (12, 9)
End of explanation
ax = pyl.subplot(111)
ax.spines["top"].set_visible(False)
ax.spines["bottom"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.spines["left"].set_visible(False)
Explanation: Remove the plot frame lines. They are unnecessary chartjunk.
End of explanation
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()
Explanation: Ensure that the axis ticks only show up on the bottom and left of the plot.
Ticks on the right and top of the plot are generally unnecessary chartjunk.
End of explanation
ax.set_xticks(ind + 0.5 * width)
ax.set_xticklabels(dfLetterFrequency.index.values)
Explanation: Set the labels
End of explanation
pyl.ylim(0, 14)
pyl.xlim(0,26)
Explanation: Limit the range of the plot to only where the data is.
Avoid unnecessary whitespace.
End of explanation
pyl.yticks(range(0, 14, 2), [str(x) + "%" for x in range(0, 14, 2)], fontsize=14)
pyl.xticks(fontsize=14)
Explanation: Make sure your axis ticks are large enough to be easily read.
You don't want your viewers squinting to read your plot.
End of explanation
for y in range(0, 14, 2):
plt.plot(range(0, 26), [y] * len(range(0, 26)), ":", lw=0.5, color="black", alpha=0.3)
Explanation: Provide tick lines across the plot to help your viewers trace along the axis ticks.
Make sure that the lines are light and small so they don't obscure the primary data lines.
End of explanation
plt.tick_params(axis="both", which="both", bottom="off", top="off",
labelbottom="on", left="off", right="off", labelleft="on")
Explanation: Remove the tick marks.
They are unnecessary with the tick lines we just plotted.
End of explanation
plt.bar(ind, dfLetterFrequency.Frequency.values, width, color=tableau20[0], alpha=0.5)
Explanation: Now that the plot is prepared, it's time to actually plot the data!
End of explanation
pyl.text(6, 13.5, "Letter Frequency in English Writing", fontsize=17, ha="center")
Explanation: matplotlib's title() call centers the title on the plot, but not the graph,
I used the text() call to customize where the title goes.
Make the title big enough so it spans the entire plot, but don't make it so big that it requires two lines to show.
Note that if the title is descriptive enough, it is unnecessary to include axis labels.
They are self-evident, in this plot's case.
End of explanation
pyl.text(0, -1, "Data source: Cryptological Mathematics, Robert Lewand.", fontsize=10)
Explanation: Always include your data source(s) and copyright notice!
And for your data sources, tell your viewers exactly where the data came from,
preferably with a direct link to the data.
Just telling your viewers that you used data from the "U.S. Census Bureau" is completely useless:
The U.S. Census Bureau provides all kinds of data.
How are your viewers supposed to know which data set you used?
End of explanation
savefig("../outputs/letter_frequency.png", bbox_inches="tight");
dataviz = plt.gcf()
Explanation: Finally, save the figure as a PNG.
You can also save it as a PDF, JPEG, etc.
Just change the file extension in this call.
bbox_inches="tight" removes all the extra whitespace on the edges of your plot.
End of explanation |
2,327 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedfoward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on it's own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
Step1: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combined all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
Step2: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise
Step3: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise
Step4: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 characters.
Exercise
Step5: Exercise
Step6: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise
Step7: With train, validation, and text fractions of 0.8, 0.1, 0.1, the final shapes should look like
Step8: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise
Step9: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise
Step10: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation
Step11: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise
Step12: Output
We only care about the final output, we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[
Step13: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
Step14: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
Step15: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
Step16: Testing | Python Code:
import numpy as np
import tensorflow as tf
with open('reviews.txt', 'r') as f:
reviews = f.read()
with open('labels.txt', 'r') as f:
labels = f.read()
reviews[:2000]
Explanation: Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedfoward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on it's own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
End of explanation
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
Explanation: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combined all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
End of explanation
from collections import Counter
counts = Counter(words)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}
reviews_ints = []
for each in reviews:
reviews_ints.append([vocab_to_int[word] for word in each.split()])
Explanation: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0.
Also, convert the reviews to integers and store the reviews in a new list called reviews_ints.
End of explanation
labels = labels.split('\n')
labels = np.array([1 if each == 'positive' else 0 for each in labels])
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
Explanation: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise: Convert labels from positive and negative to 1 and 0, respectively.
End of explanation
# Filter out that review with 0 length
reviews_ints = [each for each in reviews_ints if len(each) > 0]
Explanation: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 characters.
Exercise: First, remove the review with zero length from the reviews_ints list.
End of explanation
seq_len = 200
features = np.zeros((len(reviews), seq_len), dtype=int)
for i, row in enumerate(reviews_ints):
features[i, -len(row):] = np.array(row)[:seq_len]
features[:10,:100]
Explanation: Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from review_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For reviews longer than 200, use on the first 200 words as the feature vector.
This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.
End of explanation
split_frac = 0.8
split_idx = int(len(features)*0.8)
train_x, val_x = features[:split_idx], features[split_idx:]
train_y, val_y = labels[:split_idx], labels[split_idx:]
test_idx = int(len(val_x)*0.5)
val_x, test_x = val_x[:test_idx], val_x[test_idx:]
val_y, test_y = val_y[:test_idx], val_y[test_idx:]
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
Explanation: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data.
End of explanation
lstm_size = 256
lstm_layers = 1
batch_size = 500
learning_rate = 0.001
Explanation: With train, validation, and text fractions of 0.8, 0.1, 0.1, the final shapes should look like:
Feature Shapes:
Train set: (20000, 200)
Validation set: (2500, 200)
Test set: (2501, 200)
Build the graph
Here, we'll build the graph. First up, defining the hyperparameters.
lstm_size: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.
lstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting.
batch_size: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory.
learning_rate: Learning rate
End of explanation
n_words = len(vocab)
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs')
labels_ = tf.placeholder(tf.int32, [None, None], name='labels')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
Explanation: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise: Create the inputs_, labels_, and drop out keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder.
End of explanation
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
with graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs_)
Explanation: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise: Create the embedding lookup matrix as a tf.Variable. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, if the embedding layer as 200 units, the function will return a tensor with size [batch_size, 200].
End of explanation
with graph.as_default():
# Your basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
Explanation: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation:
tf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=<function tanh at 0x109f1ef28>)
you can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
to create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like
drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
Most of the time, you're network will have better performance with more layers. That's sort of the magic of deep learning, adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell:
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
Here, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long. The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list.
So the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an achitectural viewpoint, just a more complicated graph in the cell.
Exercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. Then, add drop out to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell.
Here is a tutorial on building RNNs that will help you out.
End of explanation
with graph.as_default():
outputs, final_state = tf.nn.dynamic_rnn(cell, embed,
initial_state=initial_state)
Explanation: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, embed.
End of explanation
with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
Explanation: Output
We only care about the final output, we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], the calculate the cost from that and labels_.
End of explanation
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
Explanation: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
End of explanation
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
Explanation: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
End of explanation
epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
Explanation: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
End of explanation
test_acc = []
with tf.Session(graph=graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('/output/checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
Explanation: Testing
End of explanation |
2,328 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression test suite
Step1: IMF notes
Step2: The total number of stars $N_{tot}$ is then
Step3: With a yield ejected of $0.1 Msun$, the total amount ejected is
Step4: compared to the simulation
Step5: Compare both results
Step6: Test of distinguishing between massive and AGB sources
Step7: Calculating yield ejection over time
For plotting, take the lifetimes/masses from the yield grid
Step8: Simulation results in the plot above should agree with semi-analytical calculations.
Test of parameter imf_bdys
Step9: Select imf_bdys=[1,5]
Step10: Results
Step11: Test of parameter imf_type
Step12: Chabrier
Step13: Simulation should agree with semi-analytical calculations for Chabrier IMF.
Kroupa
Step14: Simulation results compared with semi-analytical calculations for Kroupa IMF.
Test of parameter sn1a_on
Step15: Test of parameter sn1a_rate (DTD)
Step16: Small test
Step17: Simulation results compared with semi-analytical calculations for the SN1 sources with Wiersma (exp) implementation.
Compare number of WD's in range
Step18: Wiersmagauss
Step19: Simulation results compared with semi-analytical calculations for the SN1 sources with Wiersma (Gauss) implementation.
Compare number of WD's in range
Step20: SNIa implementation
Step21: Check trend
Step22: Test of parameter tend, dt and special_timesteps
First constant timestep size of 1e7
Step23: First timestep size of 1e7, then in log space to tend with a total number of steps of 200; Note
Step24: Choice of dt should not change final composition
Step25: Test of parameter mgal - the total mass of the SSP
Test the total isotopic and elemental ISM matter at first and last timestep.
Step26: Test of SN rate
Step27: Rate does not depend on timestep type
Step28: Test of parameter transitionmass
Step29: 2 starbursts
Step30: imf_yield_range - include yields only in this mass range | Python Code:
#from imp import *
#s=load_source('sygma','/home/nugrid/nugrid/SYGMA/SYGMA_online/SYGMA_dev/sygma.py')
#%pylab nbagg
import sys
import sygma as s
print s.__file__
reload(s)
s.__file__
#import matplotlib
#matplotlib.use('nbagg')
import matplotlib.pyplot as plt
#matplotlib.use('nbagg')
import numpy as np
from scipy.integrate import quad
from scipy.interpolate import UnivariateSpline
import os
# Trigger interactive or non-interactive depending on command line argument
__RUNIPY__ = sys.argv[0]
if __RUNIPY__:
%matplotlib inline
else:
%pylab nbagg
Explanation: Regression test suite: Test of basic SSP GCE features
Test of SSP with artificial yields,pure h1 yields, provided in NuGrid tables (no PopIII tests here). Focus are basic GCE features.
You can find the documentation <a href="doc/sygma.html">here</a>.
Before starting the test make sure that use the standard yield input files.
Outline:
$\odot$ Evolution of ISM fine
$\odot$ Sources of massive and AGB stars distinguished
$\odot$ Test of final mass of ISM for different IMF boundaries
$\odot$ Test of Salpeter, Chabrier, Kroupa IMF by checking the evolution of ISM mass (incl. alphaimf)
$\odot$ Test if SNIa on/off works
$\odot$ Test of the three SNIa implementations, the evolution of SN1a contributions
$\odot$ Test of parameter tend, dt and special_timesteps
$\odot$ Test of parmeter mgal
$\odot$ Test of parameter transitionmass
TODO: test non-linear yield fitting (hard set in code right now, no input parameter provided)
End of explanation
k_N=1e11*0.35/ (1**-0.35 - 30**-0.35) #(I)
Explanation: IMF notes:
The IMF allows to calculate the number of stars $N_{12}$ in the mass interval [m1,m2] with
(I) $N_{12}$ = k_N $\int _{m1}^{m2} m^{-2.35} dm$
Where k_N is the normalization constant. It can be derived from the total amount of mass of the system $M_{tot}$
since the total mass $M_{12}$ in the mass interval above can be estimated with
(II) $M_{12}$ = k_N $\int _{m1}^{m2} m^{-1.35} dm$
With a total mass interval of [1,30] and $M_{tot}=1e11$ the $k_N$ can be derived:
$1e11 = k_N/0.35 * (1^{-0.35} - 30^{-0.35})$
End of explanation
N_tot=k_N/1.35 * (1**-1.35 - 30**-1.35) #(II)
print N_tot
Explanation: The total number of stars $N_{tot}$ is then:
End of explanation
Yield_tot=0.1*N_tot
print Yield_tot/1e11
Explanation: With a yield ejected of $0.1 Msun$, the total amount ejected is:
End of explanation
import sygma as s
reload(s)
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,imf_type='salpeter',imf_bdys=[1,30],iniZ=0.02,hardsetZ=0.0001,
table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab_h1.ppn',pop3_table='yield_tables/popIII_h1.txt')
Yield_tot_sim=s1.history.ism_iso_yield[-1][0]
#% matplotlib inline
import read_yields as ry
path = os.environ['SYGMADIR']+'/yield_tables/agb_and_massive_stars_nugrid_MESAonly_fryer12delay.txt'
#path='/home/christian/NuGrid/SYGMA_PROJECT/NUPYCEE/new/nupycee.bitbucket.org/yield_tables/isotope_yield_table.txt'
ytables = ry.read_nugrid_yields(path,excludemass=[32,60])
zm_lifetime_grid=s1.zm_lifetime_grid_current #__interpolate_lifetimes_grid()
#return [[metallicities Z1,Z2,...], [masses], [[log10(lifetimesofZ1)],
# [log10(lifetimesofZ2)],..] ]
#s1.__find_lifetimes()
#minm1 = self.__find_lifetimes(round(self.zmetal,6),mass=[minm,maxm], lifetime=lifetimemax1)
Explanation: compared to the simulation:
End of explanation
print Yield_tot_sim
print Yield_tot
print 'ratio should be 1 : ',Yield_tot_sim/Yield_tot
Explanation: Compare both results:
End of explanation
Yield_agb= ( k_N/1.35 * (1**-1.35 - 8.**-1.35) ) * 0.1
Yield_massive= ( k_N/1.35 * (8.**-1.35 - 30**-1.35) ) * 0.1
print 'Should be 1:',Yield_agb/s1.history.ism_iso_yield_agb[-1][0]
print 'Should be 1:',Yield_massive/s1.history.ism_iso_yield_massive[-1][0]
print 'Test total number of SNII agree with massive star yields: ',sum(s1.history.sn2_numbers)*0.1/Yield_massive
print sum(s1.history.sn2_numbers)
s1.plot_totmasses(source='agb')
s1.plot_totmasses(source='massive')
s1.plot_totmasses(source='all')
s1.plot_totmasses(source='sn1a')
Explanation: Test of distinguishing between massive and AGB sources:
Boundaries between AGB and massive for Z=0 (1e-4) at 8 (transitionmass parameter)
End of explanation
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,imf_type='salpeter',alphaimf=2.35,\
imf_bdys=[1,30],iniZ=0,hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, \
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
Yield_tot_sim=s1.history.ism_iso_yield[-1][0]
s1.plot_mass(specie='H',label='H, sim',color='k',shape='-',marker='o',markevery=800)
m=[1,1.65,2,3,4,5,6,7,12,15,20,25]
ages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]
def yields(m,k_N):
return ( k_N/1.35 * (m**-1.35 - 30.**-1.35) ) * 0.1
yields1=[]
for m1 in m:
yields1.append(yields(m1,k_N))
plt.plot(ages,yields1,marker='+',linestyle='',markersize=15,label='H, semi')
plt.legend(loc=4)
Explanation: Calculating yield ejection over time
For plotting, take the lifetimes/masses from the yield grid:
$
Ini Mass & Age [yrs]
1Msun = 5.67e9
1.65 = 1.211e9
2 = 6.972e8
3 = 2.471e8
4 = 1.347e8
5 = 8.123e7
6 = 5.642e7
7 = 4.217e7
12 = 1.892e7
15 = 1.381e7
20 = 9.895e6
25 = 7.902e6
$
End of explanation
k_N=1e11*0.35/ (5**-0.35 - 20**-0.35)
N_tot=k_N/1.35 * (5**-1.35 - 20**-1.35)
Yield_tot=0.1*N_tot
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e9,tend=1.3e10,imf_type='salpeter',\
imf_bdys=[5,20],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, \
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
Yield_tot_sim=s1.history.ism_iso_yield[-1][0]
print 'Sould be 1:' ,Yield_tot_sim/Yield_tot
Explanation: Simulation results in the plot above should agree with semi-analytical calculations.
Test of parameter imf_bdys: Selection of different initial mass intervals
Select imf_bdys=[5,20]
End of explanation
k_N=1e11*0.35/ (1**-0.35 - 5**-0.35)
N_tot=k_N/1.35 * (1**-1.35 - 5**-1.35)
Yield_tot=0.1*N_tot
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e9,tend=1.3e10,imf_type='salpeter',alphaimf=2.35,\
imf_bdys=[1,5],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',\
sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
Yield_tot_sim=s1.history.ism_iso_yield[-1][0]
Explanation: Select imf_bdys=[1,5]
End of explanation
print 'Sould be 1: ',Yield_tot_sim/Yield_tot
Explanation: Results:
End of explanation
alphaimf = 1.5 #Set test alphaimf
k_N=1e11*(alphaimf-2)/ (-1**-(alphaimf-2) + 30**-(alphaimf-2))
N_tot=k_N/(alphaimf-1) * (-1**-(alphaimf-1) + 30**-(alphaimf-1))
Yield_tot=0.1*N_tot
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e9,tend=1.3e10,imf_type='alphaimf',alphaimf=1.5,imf_bdys=[1,30],hardsetZ=0.0001,
table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
Yield_tot_sim=s1.history.ism_iso_yield[-1][0]
print 'Should be 1 :',Yield_tot/Yield_tot_sim
Explanation: Test of parameter imf_type: Selection of different IMF types
power-law exponent : alpha_imf
The IMF allows to calculate the number of stars $N_{12}$ in the mass interval [m1,m2] with
$N_{12}$ = k_N $\int _{m1}^{m2} m^{-alphaimf} dm$
Where k_N is the normalization constant. It can be derived from the total amount of mass of the system $M_{tot}$
since the total mass $M_{12}$ in the mass interval above can be estimated with
$M_{12}$ = k_N $\int _{m1}^{m2} m^{-(alphaimf-1)} dm$
With a total mass interval of [1,30] and $M_{tot}=1e11$ the $k_N$ can be derived:
$1e11 = k_N/(alphaimf-2) * (1^{-(alphaimf-2)} - 30^{-(alphaimf-2)})$
End of explanation
def imf_times_m(mass):
if mass<=1:
return 0.158 * np.exp( -np.log10(mass/0.079)**2 / (2.*0.69**2))
else:
return mass*0.0443*mass**(-2.3)
k_N= 1e11/ (quad(imf_times_m,0.01,30)[0] )
N_tot=k_N/1.3 * 0.0443* (1**-1.3 - 30**-1.3)
Yield_tot=N_tot * 0.1
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e9,tend=1.3e10,imf_type='chabrier',imf_bdys=[0.01,30],
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False,
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
Yield_tot_sim=s1.history.ism_iso_yield[-1][0]
print Yield_tot
print Yield_tot_sim
print 'Should be 1 :',Yield_tot/Yield_tot_sim
plt.figure(11)
s1.plot_mass(fig=11,specie='H',label='H',color='k',shape='-',marker='o',markevery=800)
m=[1,1.65,2,3,4,5,6,7,12,15,20,25]
ages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]
def yields(m,k_N):
return ( k_N/1.3 * 0.0443*(m**-1.3 - 30.**-1.3) ) * 0.1
yields1=[]
for m1 in m:
yields1.append(yields(m1,k_N))
plt.plot(ages,yields1,marker='+',linestyle='',markersize=20,label='semi')
plt.legend(loc=4)
Explanation: Chabrier:
Change interval now from [0.01,30]
M<1: $IMF(m) = \frac{0.158}{m} * \exp{ \frac{-(log(m) - log(0.08))^2}{2*0.69^2}}$
else: $IMF(m) = m^{-2.3}$
End of explanation
def imf_times_m(mass):
p0=1.
p1=0.08**(-0.3+1.3)
p2=0.5**(-1.3+2.3)
p3= 1**(-2.3+2.3)
if mass<0.08:
return mass*p0*mass**(-0.3)
elif mass < 0.5:
return mass*p1*mass**(-1.3)
else: #mass>=0.5:
return mass*p1*p2*mass**(-2.3)
k_N= 1e11/ (quad(imf_times_m,0.01,30)[0] )
p1=0.08**(-0.3+1.3)
p2=0.5**(-1.3+2.3)
N_tot=k_N/1.3 * p1*p2*(1**-1.3 - 30**-1.3)
Yield_tot=N_tot * 0.1
reload(s)
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,imf_type='kroupa',imf_bdys=[0.01,30],
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False,
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
Yield_tot_sim=s1.history.ism_iso_yield[-1][0]
print 'Should be 1: ',Yield_tot/Yield_tot_sim
plt.figure(111)
s1.plot_mass(fig=111,specie='H',label='H',color='k',shape='-',marker='o',markevery=800)
m=[1,1.65,2,3,4,5,6,7,12,15,20,25]
ages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]
def yields(m,k_N):
return ( k_N/1.3 *p1*p2* (m**-1.3 - 30.**-1.3) ) * 0.1
yields1=[]
for m1 in m:
yields1.append(yields(m1,k_N))
plt.plot(ages,yields1,marker='+',linestyle='',markersize=20,label='semi')
plt.legend(loc=4)
Explanation: Simulation should agree with semi-analytical calculations for Chabrier IMF.
Kroupa:
M<0.08: $IMF(m) = m^{-0.3}$
M<0.5 : $IMF(m) = m^{-1.3}$
else : $IMF(m) = m^{-2.3}$
End of explanation
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_on=False,sn1a_rate='maoz',imf_type='salpeter',
imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
s2=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_on=True,sn1a_rate='maoz',imf_type='salpeter',
imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
print (s1.history.ism_elem_yield_1a[0]),(s1.history.ism_elem_yield_1a[-1])
print (s1.history.ism_elem_yield[0]),(s1.history.ism_elem_yield[-1])
print (s2.history.ism_elem_yield_1a[0]),(s2.history.ism_elem_yield_1a[-1])
print (s2.history.ism_elem_yield[0]),(s2.history.ism_elem_yield[-1])
print (s1.history.ism_elem_yield[-1][0] + s2.history.ism_elem_yield_1a[-1][0])/s2.history.ism_elem_yield[-1][0]
s2.plot_mass(fig=33,specie='H-1',source='sn1a') #plot s1 data (without sn) cannot be plotted -> error, maybe change plot function?
Explanation: Simulation results compared with semi-analytical calculations for Kroupa IMF.
Test of parameter sn1a_on: on/off mechanism
End of explanation
#import read_yields as ry
import sygma as s
reload(s)
plt.figure(99)
#interpolate_lifetimes_grid=s22.__interpolate_lifetimes_grid
#ytables=ry.read_nugrid_yields('yield_tables/isotope_yield_table_h1.txt')
#zm_lifetime_grid=interpolate_lifetimes_grid(ytables,iolevel=0) 1e7
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_on=True,sn1a_rate='exp',
imf_type='salpeter',imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
Yield_tot_sim=s1.history.ism_iso_yield_1a[-1][0]
zm_lifetime_grid=s1.zm_lifetime_grid_current
idx_z = (np.abs(zm_lifetime_grid[0]-0.0001)).argmin() #Z=0
grid_masses=zm_lifetime_grid[1][::-1]
grid_lifetimes=zm_lifetime_grid[2][idx_z][::-1]
spline_degree1=2
smoothing1=0
boundary=[None,None]
spline_lifetime = UnivariateSpline(grid_lifetimes,np.log10(grid_masses),bbox=boundary,k=spline_degree1,s=smoothing1)
plt.plot(grid_masses,grid_lifetimes,label='spline fit grid points (SYGMA)')
plt.xlabel('Mini/Msun')
plt.ylabel('log lifetime')
m=[1,1.65,2,3,4,5,6,7,12,15,20,25]
ages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]
plt.plot(np.array(m),np.log10(np.array(ages)),marker='+',markersize=20,label='input yield grid',linestyle='None')
plt.plot(10**spline_lifetime(np.log10(ages)),np.log10(ages),linestyle='--',label='spline fit SNIa')
plt.legend()
#plt.yscale('log')
#print grid_lifetimes
#print grid_masses
#10**spline_lifetime(np.log10(7.902e6))
Explanation: Test of parameter sn1a_rate (DTD): Different SN1a rate implementatinos
Calculate with SNIa and look at SNIa contribution only. Calculated for each implementation from $410^7$ until $1.510^{10}$ yrs
DTD taken from Vogelsberger 2013 (sn1a_rate='vogelsberger')
$\frac{N_{1a}}{Msun} = \int _t^{t+\Delta t} 1.310^{-3} * (\frac{t}{410^7})^{-1.12} * \frac{1.12 -1}{410^7}$ for $t>410^7 yrs$
def dtd(t):
return 1.3e-3(t/4e7)-1.12 * ((1.12-1)/4e7)
n1a_msun= quad(dtd,4e7,1.5e10)[0]
Yield_tot=n1a_msun1e11*0.1 * 7 #special factor
print Yield_tot
reload(s)
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_on=True,sn1a_rate='vogelsberger',imf_type='salpeter',imf_bdys=[1,30],iniZ=-1,hardsetZ=0.0001,table='yield_tables/isotope_yield_table_h1.txt', sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab1.0E-04GN93_alpha_h1.ppn')
Yield_tot_sim=s1.history.ism_iso_yield_1a[-1][0]
print 'Should be 1: ',Yield_tot/Yield_tot_sim
s1.plot_mass(specie='H',source='sn1a',label='H',color='k',shape='-',marker='o',markevery=800)
m=[1,1.65,2,3,4,5,6,7,12,15,20,25]
ages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]
def yields(t):
def dtd(t):
return 1.3e-3(t/4e7)-1.12 * ((1.12-1)/4e7)
return quad(dtd,4e7,t)[0]1e11*0.1 * 7 #special factor
yields1=[]
ages1=[]
for m1 in m:
t=ages[m.index(m1)]
if t>4e7:
yields1.append(yields(t))
ages1.append(t)
plt.plot(ages1,yields1,marker='+',linestyle='',markersize=20,label='semi')
plt.legend(loc=4)
Simulation results should agree with semi-analytical calculations for the SN1 yields.
Exponential DTD taken from Wiersma09 (sn1a_rate='wiersmaexp') (maybe transitionmass should replace 8Msun?)
$\frac{N_{1a}}{Msun} = \int t ^{t+\Delta t} f{wd}(t) exp(-t/\tau)/\tau$ with
if $M_z(t) >3$ :
$f_{wd}(t) = (\int _{M(t)}^8 IMF(m) dm)$
else:
$f_{wd}(t) = 0$
with $M(t) = max(3, M_z(t))$ and $M_z(t)$ being the mass-lifetime function.
NOTE: This mass-lifetime function needs to be extracted from the simulation (calculated in SYGMA, see below)
The following performs the simulation but also takes the mass-metallicity-lifetime grid from this simulation.
With the mass-lifetime spline function calculated the integration can be done further down. See also the fit for this function below.
End of explanation
#following inside function wiersma09_efolding
#if timemin ==0:
# timemin=1
from scipy.integrate import dblquad
def spline1(x):
#x=t
minm_prog1a=3
#if minimum progenitor mass is larger than 3Msun due to IMF range:
#if self.imf_bdys[0]>3:
# minm_prog1a=self.imf_bdys[0]
return max(minm_prog1a,10**spline_lifetime(np.log10(x)))
def f_wd_dtd(m,t):
#print 'time ',t
#print 'mass ',m
mlim=10**spline_lifetime(np.log10(t))
maxm_prog1a=8
#if maximum progenitor mass is smaller than 8Msun due to IMF range:
#if 8>self.imf_bdys[1]:
# maxm_prog1a=self.imf_bdys[1]
if mlim>maxm_prog1a:
return 0
else:
#Delay time distribution function (DTD)
tau= 2e9
mmin=0
mmax=0
inte=0
#follwing is done in __imf()
def g2(mm):
return mm*mm**-2.35
norm=1./quad(g2,1,30)[0]
#print 'IMF test',norm*m**-2.35
#imf normalized to 1Msun
return norm*m**-2.35* np.exp(-t/tau)/tau
a= 0.01 #normalization parameter
#if spline(np.log10(t))
#a=1e-3/()
a=1e-3/(dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0] )
n1a= a* dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0]
# in principle since normalization is set: nb_1a_per_m the above calculation is not necessary anymore
Yield_tot=n1a*1e11*0.1 *1 #7 #special factor
print Yield_tot_sim
print Yield_tot
print 'Should be : ', Yield_tot_sim/Yield_tot
s1.plot_mass(specie='H',source='sn1a',label='H',color='k',shape='-',marker='o',markevery=800)
yields1=[]
ages1=[]
a= 0.01 #normalization parameter
a=1e-3/(dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0] )
for m1 in m:
t=ages[m.index(m1)]
yields= a* dblquad(f_wd_dtd,0,t,lambda x: spline1(x), lambda x: 8)[0] *1e11*0.1 #special factor
yields1.append(yields)
ages1.append(t)
plt.plot(ages1,yields1,marker='+',linestyle='',markersize=20,label='semi')
plt.legend(loc=4)
Explanation: Small test: Initial mass vs. lifetime from the input yield grid compared to the fit in the the Mass-Metallicity-lifetime plane (done by SYGMA) for Z=0.02.
A double integration has to be performed in order to solve the complex integral from Wiersma:
End of explanation
sum(s1.wd_sn1a_range1)/sum(s1.wd_sn1a_range)
s1.plot_sn_distr(xaxis='time',fraction=False)
Explanation: Simulation results compared with semi-analytical calculations for the SN1 sources with Wiersma (exp) implementation.
Compare number of WD's in range
End of explanation
reload(s)
s2=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_rate='gauss',imf_type='salpeter',
imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=True,
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
Yield_tot_sim=s2.history.ism_iso_yield_1a[-1][0]
zm_lifetime_grid=s2.zm_lifetime_grid_current
idx_z = (np.abs(zm_lifetime_grid[0]-0.0001)).argmin() #Z=0
grid_masses=zm_lifetime_grid[1][::-1]
grid_lifetimes=zm_lifetime_grid[2][idx_z][::-1]
spline_degree1=2
smoothing1=0
boundary=[None,None]
spline = UnivariateSpline(grid_lifetimes,np.log10(grid_masses),bbox=boundary,k=spline_degree1,s=smoothing1)
from scipy.integrate import dblquad
def spline1(x):
#x=t
return max(3.,10**spline(np.log10(x)))
def f_wd_dtd(m,t):
#print 'time ',t
#print 'mass ',m
mlim=10**spline(np.log10(t))
#print 'mlim',mlim
if mlim>8.:
#print t
#print mlim
return 0
else:
#mmin=max(3.,massfunc(t))
#mmax=8.
#imf=self.__imf(mmin,mmax,1)
#Delay time distribution function (DTD)
tau= 1e9 #3.3e9 #characteristic delay time
sigma=0.66e9#0.25*tau
#sigma=0.2#narrow distribution
#sigma=0.5*tau #wide distribution
mmin=0
mmax=0
inte=0
def g2(mm):
return mm*mm**-2.35
norm=1./quad(g2,1,30)[0]
#imf normalized to 1Msun
return norm*m**-2.35* 1./np.sqrt(2*np.pi*sigma**2) * np.exp(-(t-tau)**2/(2*sigma**2))
#a= 0.0069 #normalization parameter
#if spline(np.log10(t))
a=1e-3/(dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0] )
n1a= a* dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0]
Yield_tot=n1a*1e11*0.1 #special factor
print Yield_tot_sim
print Yield_tot
print 'Should be 1: ', Yield_tot_sim/Yield_tot
s2.plot_mass(fig=988,specie='H',source='sn1a',label='H',color='k',shape='-',marker='o',markevery=800)
yields1=[]
ages1=[]
m=[1,1.65,2,3,4,5,6,7,12,15,20,25]
ages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]
for m1 in m:
t=ages[m.index(m1)]
yields= a* dblquad(f_wd_dtd,0,t,lambda x: spline1(x), lambda x: 8)[0] *1e11*0.1 #special factor
yields1.append(yields)
ages1.append(t)
plt.plot(ages1,yields1,marker='+',linestyle='',markersize=20,label='semi')
plt.legend(loc=2)
Explanation: Wiersmagauss
End of explanation
sum(s2.wd_sn1a_range1)/sum(s2.wd_sn1a_range)
Explanation: Simulation results compared with semi-analytical calculations for the SN1 sources with Wiersma (Gauss) implementation.
Compare number of WD's in range
End of explanation
import sygma as s
reload(s)
s2=s.sygma(iolevel=0,mgal=1e11,dt=1e8,tend=1.3e10,sn1a_rate='maoz',imf_type='salpeter',
imf_bdys=[1,30],special_timesteps=-1,hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',
sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
Yield_tot_sim=s2.history.ism_iso_yield_1a[-1][0]
from scipy.interpolate import UnivariateSpline
zm_lifetime_grid=s2.zm_lifetime_grid_current
idx_z = (np.abs(zm_lifetime_grid[0]-0.0001)).argmin() #Z=0
grid_masses=zm_lifetime_grid[1][::-1]
grid_lifetimes=zm_lifetime_grid[2][idx_z][::-1]
spline_degree1=2
smoothing1=0
boundary=[None,None]
spline_lifetime = UnivariateSpline(grid_lifetimes,np.log10(grid_masses),bbox=boundary,k=spline_degree1,s=smoothing1)
from scipy.integrate import quad
def spline1(t):
minm_prog1a=3
#if minimum progenitor mass is larger than 3Msun due to IMF range:
return max(minm_prog1a,10**spline_lifetime(np.log10(t)))
#funciton giving the total (accummulatitive) number of WDs at each timestep
def wd_number(m,t):
#print 'time ',t
#print 'mass ',m
mlim=10**spline_lifetime(np.log10(t))
maxm_prog1a=8
if mlim>maxm_prog1a:
return 0
else:
mmin=0
mmax=0
inte=0
#normalized to 1msun!
def g2(mm):
return mm*mm**-2.35
norm=1./quad(g2,1,30)[0]
return norm*m**-2.35 #self.__imf(mmin,mmax,inte,m)
def maoz_sn_rate(m,t):
return wd_number(m,t)* 4.0e-13 * (t/1.0e9)**-1
def maoz_sn_rate_int(t):
return quad( maoz_sn_rate,spline1(t),8,args=t)[0]
#in this formula, (paper) sum_sn1a_progenitors number of
maxm_prog1a=8
longtimefornormalization=1.3e10 #yrs
fIa=0.00147
fIa=1e-3
#A = (fIa*s2.number_stars_born[1]) / quad(maoz_sn_rate_int,0,longtimefornormalization)[0]
A = 1e-3 / quad(maoz_sn_rate_int,0,longtimefornormalization)[0]
print 'Norm. constant A:',A
n1a= A* quad(maoz_sn_rate_int,0,1.3e10)[0]
Yield_tot=n1a*1e11*0.1 #specialfactor
print Yield_tot_sim
print Yield_tot
print 'Should be 1: ', Yield_tot_sim/Yield_tot
Explanation: SNIa implementation: Maoz12 $t^{-1}$
End of explanation
s2.plot_mass(fig=44,specie='H',source='sn1a',label='H',color='k',shape='-',marker='o',markevery=800)
yields1=[]
ages1=[]
m=[1,1.65,2,3,4,5,6,7,12,15,20,25]
ages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]
for m1 in m:
t=ages[m.index(m1)]
#yields= a* dblquad(wdfrac,0,t,lambda x: spline1(x), lambda x: 8)[0] *1e11*0.1
yields= A*quad(maoz_sn_rate_int,0,t)[0] *1e11*0.1 #special factor
yields1.append(yields)
ages1.append(t)
plt.plot(ages1,yields1,marker='+',linestyle='',markersize=20,label='semi')
plt.legend(loc=2)
plt.legend(loc=3)
Explanation: Check trend:
End of explanation
import sygma as s
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,special_timesteps=-1,imf_type='salpeter',
imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False,
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn',
stellar_param_on=False)
print 'Should be 0: ',s1.history.age[0]
print 'Should be 1: ',s1.history.age[-1]/1.3e10
print 'Should be 1: ',s1.history.timesteps[0]/1e7
print 'Should be 1: ',s1.history.timesteps[-1]/1e7
print 'Should be 1: ',sum(s1.history.timesteps)/1.3e10
Explanation: Test of parameter tend, dt and special_timesteps
First constant timestep size of 1e7
End of explanation
import sygma as s
s2=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.5e9,special_timesteps=200,imf_type='salpeter',
imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False,
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
print 'Should be 0: ',s2.history.age[0]
print 'Should be 1: ',s2.history.age[-1]/1.5e9
print 'Should be 201: ',len(s2.history.age)
print 'Should be 1: ',s2.history.timesteps[0]/1e7
#print 'in dt steps: ',s2.history.timesteps[1]/1e7,s1.history.timesteps[2]/1e7,'..; larger than 1e7 at step 91!'
print 'Should be 200: ',len(s2.history.timesteps)
print 'Should be 1: ',sum(s2.history.timesteps)/1.5e9
plt.figure(55)
plt.plot(s1.history.age[1:],s1.history.timesteps,label='linear (constant) scaled',marker='+')
plt.plot(s2.history.age[1:],s2.history.timesteps,label='log scaled',marker='+')
plt.yscale('log');plt.xscale('log')
plt.xlabel('age/years');plt.ylabel('timesteps/years');plt.legend(loc=4)
Explanation: First timestep size of 1e7, then in log space to tend with a total number of steps of 200; Note: changed tend
End of explanation
s3=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,special_timesteps=-1,imf_type='salpeter',imf_bdys=[1,30],
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab_h1.ppn',stellar_param_on=False)
s4=s.sygma(iolevel=0,mgal=1e11,dt=1.3e10,tend=1.3e10,special_timesteps=-1,imf_type='salpeter',imf_bdys=[1,30],
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab_h1.ppn',stellar_param_on=False)
s5=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,special_timesteps=200,imf_type='salpeter',imf_bdys=[1,30],
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab_h1.ppn',stellar_param_on=False)
s6=s.sygma(iolevel=0,mgal=1e11,dt=1.3e10,tend=1.3e10,special_timesteps=200,imf_type='salpeter',imf_bdys=[1,30],
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab_h1.ppn',stellar_param_on=False)
#print s3.history.ism_iso_yield[-1][0] == s4.history.ism_iso_yield[-1][0] why false?
print 'should be 1 ',s3.history.ism_iso_yield[-1][0]/s4.history.ism_iso_yield[-1][0]
#print s3.history.ism_iso_yield[-1][0],s4.history.ism_iso_yield[-1][0]
print 'should be 1',s5.history.ism_iso_yield[-1][0]/s6.history.ism_iso_yield[-1][0]
#print s5.history.ism_iso_yield[-1][0],s6.history.ism_iso_yield[-1][0]
Explanation: Choice of dt should not change final composition:
for special_timesteps:
End of explanation
s1=s.sygma(iolevel=0,mgal=1e7,dt=1e7,tend=1.3e10,hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',
sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
s2=s.sygma(iolevel=0,mgal=1e8,dt=1e8,tend=1.3e10,hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',
sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
s3=s.sygma(iolevel=0,mgal=1e9,dt=1e9,tend=1.3e10,hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',
sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
print 'At timestep 0: ',sum(s1.history.ism_elem_yield[0])/1e7,sum(s2.history.ism_elem_yield[0])/1e8,sum(s3.history.ism_elem_yield[0])/1e9
print 'At timestep 0: ',sum(s1.history.ism_iso_yield[0])/1e7,sum(s2.history.ism_iso_yield[0])/1e8,sum(s3.history.ism_iso_yield[0])/1e9
print 'At last timestep, should be the same fraction: ',sum(s1.history.ism_elem_yield[-1])/1e7,sum(s2.history.ism_elem_yield[-1])/1e8,sum(s3.history.ism_elem_yield[-1])/1e9
print 'At last timestep, should be the same fraction: ',sum(s1.history.ism_iso_yield[-1])/1e7,sum(s2.history.ism_iso_yield[-1])/1e8,sum(s3.history.ism_iso_yield[-1])/1e9
Explanation: Test of parameter mgal - the total mass of the SSP
Test the total isotopic and elemental ISM matter at first and last timestep.
End of explanation
reload(s)
s1=s.sygma(iolevel=0,mgal=1e11,dt=7e6,tend=1e8,imf_type='salpeter',imf_bdys=[1,30],hardsetZ=0.0001,
table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab_h1.ppn',pop3_table='yield_tables/popIII_h1.txt')
s2=s.sygma(iolevel=0,mgal=1e11,dt=7e6,tend=1e8,special_timesteps=-1,imf_type='salpeter',imf_bdys=[1,30],
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=True,
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn',
pop3_table='yield_tables/popIII_h1.txt')
s3=s.sygma(iolevel=0,mgal=1e11,dt=1e6,tend=1e8,special_timesteps=-1,imf_type='salpeter',imf_bdys=[1,30],
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab_h1.ppn',pop3_table='yield_tables/popIII_h1.txt')
s4=s.sygma(iolevel=0,mgal=1e11,dt=3e7,tend=1e8,special_timesteps=-1,imf_type='salpeter',imf_bdys=[1,30],hardsetZ=0.0001,
table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab_h1.ppn',pop3_table='yield_tables/popIII_h1.txt')
s1.plot_sn_distr(rate=True,rate_only='sn2',label1='SN1a, rate, 1',label2='SNII, rate 1',marker1='o',marker2='s',shape2='-',markevery=1)
s2.plot_sn_distr(rate=True,rate_only='sn2',label1='SN1a, rate, 2',label2='SNII rate 2',marker1='d',marker2='p',markevery=1,shape2='-.')
s4.plot_sn_distr(rate=True,rate_only='sn2',label1='SN1a, rate, 2',label2='SNII rate 2',marker1='d',marker2='+',markevery=1,shape2=':',color2='y')
s3.plot_sn_distr(rate=True,rate_only='sn2',label1='SN1a, rate, 2',label2='SNII rate 2',marker1='d',marker2='x',markevery=1,shape2='--')
plt.xlim(6e6,7e7)
#plt.xlim(6.5e6,4e7)
plt.vlines(7e6,1e2,1e9)
plt.ylim(1e2,1e4)
print s1.history.sn2_numbers[1]/s1.history.timesteps[0]
print s2.history.sn2_numbers[1]/s2.history.timesteps[0]
#print s1.history.timesteps[:5]
#print s2.history.timesteps[:5]
s3=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,imf_type='salpeter',imf_bdys=[1,30],hardsetZ=0.0001,
table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab_h1.ppn',pop3_table='yield_tables/popIII_h1.txt',
stellar_param_on=False)
s4=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,special_timesteps=-1,imf_type='salpeter',imf_bdys=[1,30],
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=True,
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn',
pop3_table='yield_tables/popIII_h1.txt',stellar_param_on=False)
Explanation: Test of SN rate: depend on timestep size: shows always mean value of timestep; larger timestep> different mean
End of explanation
s3.plot_sn_distr(fig=66,rate=True,rate_only='sn1a',label1='SN1a, rate',label2='SNII, rate',marker1='o',marker2='s',markevery=1)
s4.plot_sn_distr(fig=66,rate=True,rate_only='sn1a',label1='SN1a, number',label2='SNII number',marker1='d',marker2='p')
plt.xlim(3e7,1e10)
s1.plot_sn_distr(fig=77,rate=True,marker1='o',marker2='s',markevery=5)
s2.plot_sn_distr(fig=77,rate=True,marker1='x',marker2='^',markevery=1)
#s1.plot_sn_distr(rate=False)
#s2.plot_sn_distr(rate=True)
#s2.plot_sn_distr(rate=False)
plt.xlim(1e6,1.5e10)
#plt.ylim(1e2,1e4)
Explanation: Rate does not depend on timestep type:
End of explanation
import sygma as s; reload(s)
s1=s.sygma(iolevel=0,imf_bdys=[1.65,30],transitionmass=8,mgal=1e11,dt=1e7,tend=1.3e10,imf_type='salpeter',
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False,
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
s2=s.sygma(iolevel=0,imf_bdys=[1.65,30],transitionmass=10,mgal=1e11,dt=1e7,tend=1.3e10,imf_type='salpeter',
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False,
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
Yield_tot_sim_8=s1.history.ism_iso_yield_agb[-1][0]
Yield_tot_sim_10=s2.history.ism_iso_yield_agb[-1][0]
alphaimf=2.35
k_N=1e11*(alphaimf-2)/ (-1.65**-(alphaimf-2) + 30**-(alphaimf-2))
N_tot=k_N/(alphaimf-1) * (-1.65**-(alphaimf-1) + 8**-(alphaimf-1))
Yield_tot_8=0.1*N_tot
N_tot=k_N/(alphaimf-1) * (-1.65**-(alphaimf-1) + 10**-(alphaimf-1))
Yield_tot_10=0.1*N_tot
#N_tot=k_N/(alphaimf-1) * (-1.65**-(alphaimf-1) + 5**-(alphaimf-1))
#Yield_tot_5=0.1*N_tot
print '1:',Yield_tot_sim_8/Yield_tot_8
print '1:',Yield_tot_sim_10/Yield_tot_10
#print '1:',Yield_tot_sim_5/Yield_tot_5
Explanation: Test of parameter transitionmass : Transition from AGB to massive stars
Check if transitionmass is properly set
End of explanation
s1=s.sygma(starbursts=[0.1,0.1],iolevel=1,mgal=1e11,dt=1e7,imf_type='salpeter',
imf_bdys=[1,30],iniZ=0.02,hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',
sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab_h1.ppn',pop3_table='yield_tables/popIII_h1.txt')
Explanation: 2 starbursts
End of explanation
s0=s.sygma(iolevel=0,iniZ=0.0001,imf_bdys=[0.01,100],imf_yields_range=[1,100],
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False,
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
Explanation: imf_yield_range - include yields only in this mass range
End of explanation |
2,329 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Power Spectral Density
Introduction
Methods
This notebook consists of two methods to carry Spectral Analysis.
The first one is based on covariance called pcovar, which comes from Spectrum
Step1: 2. Load nino3 SSTA series
Please keep in mind that the nino3 SSTA series lies between 1970 and 1999 <br>
Recall ex2
2.1 Load data
Step2: 2.2 Have a quick plot
Step3: 3. Estimates the power spectral density (PSD )
3.1 pcovar method
3.1.1 Create PSD
Step4: 3.1.2 Visualize using embeded plot
Step5: 3.1.3 Visualize by a customized way
Access the data and properties of a object of pcovar
Step6: 3.2 welch method
Step7: 4. Have a comparison | Python Code:
% matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt
from matplotlib import mlab
from spectrum import pcovar
from pylab import rcParams
rcParams['figure.figsize'] = 15, 6
Explanation: Power Spectral Density
Introduction
Methods
This notebook consists of two methods to carry Spectral Analysis.
The first one is based on covariance called pcovar, which comes from Spectrum: a Spectral Analysis Library in Python. This library that contains tools to estimate Power Spectral Densities based on Fourier transform, Parametric methods or eigenvalues analysis. See more from http://pyspectrum.readthedocs.io/en/latest/index.html.
Install can be done ==> conda install spectrum
The second is the welch method that comes from the package of scipy.signal. The library also contains many kinds of method. See more from https://docs.scipy.org/doc/scipy/reference/signal.html.
In fact, both matplotlib.mlab and Spectrumalso implements the welch method. However, They do not appear flexible as one from scipy.signal. An common error looks like "ValueError: The len(window) must be the same as the shape of x for the chosen axis".
Data
The 30-years nino3 SSTA series from a previous notebook will be used as an example.
1. Load basic libraries
End of explanation
npzfile = np.load('data/ssta.nino3.30y.npz')
npzfile.files
ssta_series = npzfile['ssta_series']
ssta_series.shape
Explanation: 2. Load nino3 SSTA series
Please keep in mind that the nino3 SSTA series lies between 1970 and 1999 <br>
Recall ex2
2.1 Load data
End of explanation
plt.plot(ssta_series)
Explanation: 2.2 Have a quick plot
End of explanation
nw = 48 # order of an autoregressive prediction model for the signal, used in estimating the PSD.
nfft = 256 # NFFT (int) – total length of the final data sets (padded with zero if needed
fs = 1 # default value
p = pcovar(ssta_series, nw, nfft, fs)
Explanation: 3. Estimates the power spectral density (PSD )
3.1 pcovar method
3.1.1 Create PSD
End of explanation
p.plot(norm=True)
#help(p.plot)
Explanation: 3.1.2 Visualize using embeded plot
End of explanation
# process frequencies and psd
f0 = np.array(p.frequencies())
pxx0 = p.psd/np.max(p.psd) # noralize the psd values
plt.plot(1.0/f0[1:47]/12, pxx0[1:47])
plt.title('NINO 3 Spectrum via pcovar');
plt.xlabel('Years')
Explanation: 3.1.3 Visualize by a customized way
Access the data and properties of a object of pcovar
End of explanation
n = 150
alpha = 0.5
noverlap = 75
nfft = 256 #default value
fs = 1 #default value
win = signal.tukey(n, alpha)
ssta = ssta_series.reshape(360) # convert vector
f1, pxx1 = signal.welch(ssta, nfft=nfft, fs=fs, window=win, noverlap=noverlap)
# process frequencies and psd
pxx1 = pxx1/np.max(pxx1) # noralize the psd values
plt.plot(1.0/f1[1:47]/12, pxx1[1:47], label='welch')
plt.title('NINO 3 Spectrum via welch');
plt.xlabel('Years')
Explanation: 3.2 welch method
End of explanation
plt.plot(1.0/f0[1:47]/12, pxx0[1:47], label='pcov')
plt.plot(1.0/f1[1:47]/12, pxx1[1:47], label='welch')
plt.title('NINO 3 Spectrum');
plt.legend()
plt.xlabel('Years')
Explanation: 4. Have a comparison
End of explanation |
2,330 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Compare-weighted-and-unweighted-mean-temperature" data-toc-modified-id="Compare-weighted-and-unweighted-mean-temperature-1"><span class="toc-item-num">1 </span>Compare weighted and unweighted mean temperature</a></span><ul class="toc-item"><li><ul class="toc-item"><li><span><a href="#Data" data-toc-modified-id="Data-1.0.1"><span class="toc-item-num">1.0.1 </span>Data</a></span></li><li><span><a href="#Creating-weights" data-toc-modified-id="Creating-weights-1.0.2"><span class="toc-item-num">1.0.2 </span>Creating weights</a></span></li><li><span><a href="#Weighted-mean" data-toc-modified-id="Weighted-mean-1.0.3"><span class="toc-item-num">1.0.3 </span>Weighted mean</a></span></li><li><span><a href="#Plot
Step1: Data
Load the data, convert to celsius, and resample to daily values
Step2: Plot the first timestep
Step3: Creating weights
For a for a rectangular grid the cosine of the latitude is proportional to the grid cell area.
Step4: Weighted mean
Step5: Plot | Python Code:
%matplotlib inline
import cartopy.crs as ccrs
import matplotlib.pyplot as plt
import numpy as np
import xarray as xr
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Compare-weighted-and-unweighted-mean-temperature" data-toc-modified-id="Compare-weighted-and-unweighted-mean-temperature-1"><span class="toc-item-num">1 </span>Compare weighted and unweighted mean temperature</a></span><ul class="toc-item"><li><ul class="toc-item"><li><span><a href="#Data" data-toc-modified-id="Data-1.0.1"><span class="toc-item-num">1.0.1 </span>Data</a></span></li><li><span><a href="#Creating-weights" data-toc-modified-id="Creating-weights-1.0.2"><span class="toc-item-num">1.0.2 </span>Creating weights</a></span></li><li><span><a href="#Weighted-mean" data-toc-modified-id="Weighted-mean-1.0.3"><span class="toc-item-num">1.0.3 </span>Weighted mean</a></span></li><li><span><a href="#Plot:-comparison-with-unweighted-mean" data-toc-modified-id="Plot:-comparison-with-unweighted-mean-1.0.4"><span class="toc-item-num">1.0.4 </span>Plot: comparison with unweighted mean</a></span></li></ul></li></ul></li></ul></div>
Compare weighted and unweighted mean temperature
Author: Mathias Hauser
We use the air_temperature example dataset to calculate the area-weighted temperature over its domain. This dataset has a regular latitude/ longitude grid, thus the gridcell area decreases towards the pole. For this grid we can use the cosine of the latitude as proxy for the grid cell area.
End of explanation
ds = xr.tutorial.load_dataset("air_temperature")
# to celsius
air = ds.air - 273.15
# resample from 6-hourly to daily values
air = air.resample(time="D").mean()
air
Explanation: Data
Load the data, convert to celsius, and resample to daily values
End of explanation
projection = ccrs.LambertConformal(central_longitude=-95, central_latitude=45)
f, ax = plt.subplots(subplot_kw=dict(projection=projection))
air.isel(time=0).plot(transform=ccrs.PlateCarree(), cbar_kwargs=dict(shrink=0.7))
ax.coastlines()
Explanation: Plot the first timestep:
End of explanation
weights = np.cos(np.deg2rad(air.lat))
weights.name = "weights"
weights
Explanation: Creating weights
For a for a rectangular grid the cosine of the latitude is proportional to the grid cell area.
End of explanation
air_weighted = air.weighted(weights)
air_weighted
weighted_mean = air_weighted.mean(("lon", "lat"))
weighted_mean
Explanation: Weighted mean
End of explanation
weighted_mean.plot(label="weighted")
air.mean(("lon", "lat")).plot(label="unweighted")
plt.legend()
Explanation: Plot: comparison with unweighted mean
Note how the weighted mean temperature is higher than the unweighted.
End of explanation |
2,331 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Anna KaRNNa
In this notebook, we'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
Step1: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
Step2: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
Step3: And we can see the characters encoded as integers.
Step4: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
Step5: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this
Step6: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
Step7: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
``
although the exact numbers will be different. Check to make sure the data is shifted over one step fory`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob. This will be a scalar, that is a 0-D tensor. To make a scalar, you create a placeholder without giving it a size.
Exercise
Step8: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. For example,
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow will create different weight matrices for all cell objects. Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.
We also need to create an initial cell state of all zeros. This can be done like so
python
initial_state = cell.zero_state(batch_size, tf.float32)
Exercise
Step9: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a full connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character, so we want this layer to have size $C$, the number of classes/characters we have in our text.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells. We get the LSTM output as a list, lstm_output. First we need to concatenate this whole list into one array with tf.concat. Then, reshape it (with tf.reshape) to size $(M * N) \times L$.
One we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will be default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
Exercise
Step10: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
Exercise
Step11: Optimizer
Here we build the optimizer. Normal RNNs have have issues gradients exploding and disappearing. LSTMs fix the disappearance problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
Step12: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
Exercise
Step13: Hyperparameters
Here are the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If you're network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular
Step14: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
Exercise
Step15: Saved checkpoints
Read up on saving and loading checkpoints here
Step16: Sampling
Now that the network is trained, we'll can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one, to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
Step17: Here, pass in the path to a checkpoint and sample from the network. | Python Code:
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
Explanation: Anna KaRNNa
In this notebook, we'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
Explanation: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
End of explanation
text[:100]
Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
End of explanation
encoded[:100]
Explanation: And we can see the characters encoded as integers.
End of explanation
len(vocab)
Explanation: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
End of explanation
def get_batches(arr, n_seqs, n_steps):
'''Create a generator that returns batches of size
n_seqs x n_steps from arr.
Arguments
---------
arr: Array you want to make batches from
n_seqs: Batch size, the number of sequences per batch
n_steps: Number of sequence steps per batch
'''
# Get the batch size and number of batches we can make
batch_size = n_seqs * n_steps
n_batches = len(arr)//batch_size
# Keep only enough characters to make full batches
arr = arr[:n_batches * batch_size]
# Reshape into n_seqs rows
arr = arr.reshape((n_seqs, -1))
for n in range(0, arr.shape[1], n_steps):
# The features
x = arr[:, n:n+n_steps]
# The targets, shifted by one
y = np.zeros_like(x)
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
yield x, y
Explanation: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:
<img src="assets/[email protected]" width=500px>
<br>
We have our text encoded as integers as one long array in encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator.
The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the batch size. Once you know the number of batches and the batch size, you can get the total number of characters to keep.
After that, we need to split arr into $N$ sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimensions sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$ where $K$ is the number of batches.
Now that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character. You'll usually see the first input character used as the last target character, so something like this:
python
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
where x is the input batch and y is the target batch.
The way I like to do this window is use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide.
End of explanation
batches = get_batches(encoded, 10, 10)
x, y = next(batches)
encoded.shape
x.shape
encoded
print('x\n', x[:10, :])
print('\ny\n', y[:10, :])
Explanation: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
End of explanation
def build_inputs(batch_size, num_steps):
''' Define placeholders for inputs, targets, and dropout
Arguments
---------
batch_size: Batch size, number of sequences per batch
num_steps: Number of sequence steps in a batch
'''
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, (batch_size, num_steps), name='inputs')
targets = tf.placeholder(tf.int32, (batch_size, num_steps), name='targets')
# Keep probability placeholder for drop out layers
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return inputs, targets, keep_prob
Explanation: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
``
although the exact numbers will be different. Check to make sure the data is shifted over one step fory`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob. This will be a scalar, that is a 0-D tensor. To make a scalar, you create a placeholder without giving it a size.
Exercise: Create the input placeholders in the function below.
End of explanation
def lstm_cell(lstm_size, keep_prob):
cell = tf.contrib.rnn.BasicLSTMCell(lstm_size, reuse=tf.get_variable_scope().reuse)
return tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
''' Build LSTM cell.
Arguments
---------
keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
lstm_size: Size of the hidden layers in the LSTM cells
num_layers: Number of LSTM layers
batch_size: Batch size
'''
### Build the LSTM Cell
# # Use a basic LSTM cell
# lstm = tf.contrib.rnn.BasicLSTMCell(batch_size, reuse=tf.get_variable_scope().reuse)
# # Add dropout to the cell outputs
# drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([lstm_cell(lstm_size, keep_prob) for _ in range(num_layers)], state_is_tuple=True)
initial_state = cell.zero_state(batch_size, tf.float32)
return cell, initial_state
# https://stackoverflow.com/questions/42669578/tensorflow-1-0-valueerror-attempt-to-reuse-rnncell-with-a-different-variable-s
# def lstm_cell():
# cell = tf.contrib.rnn.NASCell(state_size, reuse=tf.get_variable_scope().reuse)
# return tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=0.8)
# rnn_cells = tf.contrib.rnn.MultiRNNCell([lstm_cell() for _ in range(num_layers)], state_is_tuple = True)
# outputs, current_state = tf.nn.dynamic_rnn(rnn_cells, x, initial_state=rnn_tuple_state)
# MultiRNNCell([BasicLSTMCell(...) for _ in range(num_layers)])
Explanation: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. For example,
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow will create different weight matrices for all cell objects. Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.
We also need to create an initial cell state of all zeros. This can be done like so
python
initial_state = cell.zero_state(batch_size, tf.float32)
Exercise: Below, implement the build_lstm function to create these LSTM cells and the initial state.
End of explanation
def build_output(lstm_output, in_size, out_size):
''' Build a softmax layer, return the softmax output and logits.
Arguments
---------
lstm_output: List of output tensors from the LSTM layer
in_size: Size of the input tensor, for example, size of the LSTM cells
out_size: Size of this softmax layer
'''
# Reshape output so it's a bunch of rows, one row for each step for each sequence.
# Concatenate lstm_output over axis 1 (the columns)
seq_output = tf.concat(lstm_output, axis=1)
# Reshape seq_output to a 2D tensor with lstm_size columns
x = tf.reshape(seq_output, [-1, in_size])
# Connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
# Create the weight and bias variables here
softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(out_size))
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and sequence
logits = tf.add(tf.matmul(x, softmax_w), softmax_b)
# Use softmax to get the probabilities for predicted characters
out = tf.nn.softmax(logits, name='prediction')
return out, logits
Explanation: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a full connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character, so we want this layer to have size $C$, the number of classes/characters we have in our text.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells. We get the LSTM output as a list, lstm_output. First we need to concatenate this whole list into one array with tf.concat. Then, reshape it (with tf.reshape) to size $(M * N) \times L$.
One we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will be default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
Exercise: Implement the output layer in the function below.
End of explanation
def build_loss(logits, targets, lstm_size, num_classes):
''' Calculate the loss from the logits and the targets.
Arguments
---------
logits: Logits from final fully connected layer
targets: Targets for supervised learning
lstm_size: Number of LSTM hidden units
num_classes: Number of classes in targets
'''
# One-hot encode targets and reshape to match logits, one row per sequence per step
y_one_hot = tf.one_hot(targets, num_classes)
y_reshaped = tf.reshape(y_one_hot, logits.get_shape())
# Softmax cross entropy loss
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
loss = tf.reduce_mean(loss)
return loss
Explanation: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
Exercise: Implement the loss calculation in the function below.
End of explanation
def build_optimizer(loss, learning_rate, grad_clip):
''' Build optmizer for training, using gradient clipping.
Arguments:
loss: Network loss
learning_rate: Learning rate for optimizer
'''
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
return optimizer
Explanation: Optimizer
Here we build the optimizer. Normal RNNs have have issues gradients exploding and disappearing. LSTMs fix the disappearance problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
End of explanation
class CharRNN:
def __init__(self, num_classes, batch_size=64, num_steps=50,
lstm_size=128, num_layers=2, learning_rate=0.001,
grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
if sampling == True:
batch_size, num_steps = 1, 1
else:
batch_size, num_steps = batch_size, num_steps
tf.reset_default_graph()
# Build the input placeholder tensors
self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)
# Build the LSTM cell
cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, keep_prob)
### Run the data through the RNN layers
# First, one-hot encode the input tokens
x_one_hot = tf.one_hot(self.inputs, num_classes)
# Run each sequence step through the RNN with tf.nn.dynamic_rnn
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state, scope='layer')
self.final_state = state
# Get softmax predictions and logits
self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)
# Loss and optimizer (with gradient clipping)
self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)
self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)
Explanation: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
Exercise: Use the functions you've implemented previously and tf.nn.dynamic_rnn to build the network.
End of explanation
batch_size = 100 # Sequences per batch
num_steps = 100 # Number of sequence steps per batch
lstm_size = 128 # Size of hidden layers in LSTMs
num_layers = 2 # Number of LSTM layers
learning_rate = 0.001 # Learning rate
keep_prob = 0.5 # Dropout keep probability
Explanation: Hyperparameters
Here are the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If you're network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
If your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)
Approximate number of parameters
The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use num_layers of either 2/3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:
The number of parameters in your model. This is printed when you start training.
The size of your dataset. 1MB file is approximately 1 million characters.
These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
Best models strategy
The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0,1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.
It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
End of explanation
epochs = 20
# Save every N iterations
save_every_n = 200
model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
lstm_size=lstm_size, num_layers=num_layers,
learning_rate=learning_rate)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
counter = 0
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for x, y in get_batches(encoded, batch_size, num_steps):
counter += 1
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.loss,
model.final_state,
model.optimizer],
feed_dict=feed)
end = time.time()
print('Epoch: {}/{}... '.format(e+1, epochs),
'Training Step: {}... '.format(counter),
'Training loss: {:.4f}... '.format(batch_loss),
'{:.4f} sec/batch'.format((end-start)))
if (counter % save_every_n == 0):
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
Explanation: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
Exercise: Set the hyperparameters above to train the network. Watch the training loss, it should be consistently dropping. Also, I highly advise running this on a GPU.
End of explanation
tf.train.get_checkpoint_state('checkpoints')
Explanation: Saved checkpoints
Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
End of explanation
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
Explanation: Sampling
Now that the network is trained, we'll can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one, to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation
tf.train.latest_checkpoint('checkpoints')
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
Explanation: Here, pass in the path to a checkpoint and sample from the network.
End of explanation |
2,332 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook is adapted from
Step1: Init SparkContext
Step2: A simple parameter server can be implemented as a Python class in a few lines of code.
EXERCISE
Step3: A worker can be implemented as a simple Python function that repeatedly gets the latest parameters, computes an update to the parameters, and sends the update to the parameter server.
Step4: As the worker tasks are executing, you can query the parameter server from the driver and see the parameters changing in the background.
Step5: Sharding a Parameter Server
As the number of workers increases, the volume of updates being sent to the parameter server will increase. At some point, the network bandwidth into the parameter server machine or the computation down by the parameter server may be a bottleneck.
Suppose you have $N$ workers and $1$ parameter server, and suppose each of these is an actor that lives on its own machine. Furthermore, suppose the model size is $M$ bytes. Then sending all of the parameters from the workers to the parameter server will mean that $N * M$ bytes in total are sent to the parameter server. If $N = 100$ and $M = 10^8$, then the parameter server must receive ten gigabytes, which, assuming a network bandwidth of 10 gigabits per second, would take 8 seconds. This would be prohibitive.
On the other hand, if the parameters are sharded (that is, split) across K parameter servers, K is 100, and each parameter server lives on a separate machine, then each parameter server needs to receive only 100 megabytes, which can be done in 80 milliseconds. This is much better.
EXERCISE
Step6: The code below implements a worker that does the following.
1. Gets the latest parameters from all of the parameter server shards.
2. Concatenates the parameters together to form the full parameter vector.
3. Computes an update to the parameters.
4. Partitions the update into one piece for each parameter server.
5. Applies the right update to each parameter server shard.
Step7: EXERCISE | Python Code:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import ray
import time
Explanation: This notebook is adapted from:
https://github.com/ray-project/tutorial/tree/master/examples/sharded_parameter_server.ipynb
Sharded Parameter Servers
GOAL: The goal of this exercise is to use actor handles to implement a sharded parameter server example for distributed asynchronous stochastic gradient descent.
Before doing this exercise, make sure you understand the concepts from the exercise on Actor Handles.
Parameter Servers
A parameter server is simply an object that stores the parameters (or "weights") of a machine learning model (this could be a neural network, a linear model, or something else). It exposes two methods: one for getting the parameters and one for updating the parameters.
In a typical machine learning training application, worker processes will run in an infinite loop that does the following:
1. Get the latest parameters from the parameter server.
2. Compute an update to the parameters (using the current parameters and some data).
3. Send the update to the parameter server.
The workers can operate synchronously (that is, in lock step), in which case distributed training with multiple workers is algorithmically equivalent to serial training with a larger batch of data. Alternatively, workers can operate independently and apply their updates asynchronously. The main benefit of asynchronous training is that a single slow worker will not slow down the other workers. The benefit of synchronous training is that the algorithm behavior is more predictable and reproducible.
End of explanation
from zoo.common.nncontext import init_spark_on_local, init_spark_on_yarn
import numpy as np
import os
hadoop_conf_dir = os.environ.get('HADOOP_CONF_DIR')
if hadoop_conf_dir:
sc = init_spark_on_yarn(
hadoop_conf=hadoop_conf_dir,
conda_name=os.environ.get("ZOO_CONDA_NAME", "zoo"), # The name of the created conda-env
num_executors=2,
executor_cores=4,
executor_memory="2g",
driver_memory="2g",
driver_cores=1,
extra_executor_memory_for_ray="3g")
else:
sc = init_spark_on_local(cores = 8, conf = {"spark.driver.memory": "2g"})
# It may take a while to ditribute the local environment including python and java to cluster
import ray
from zoo.ray import RayContext
ray_ctx = RayContext(sc=sc, object_store_memory="4g")
ray_ctx.init()
#ray.init(num_cpus=30, include_webui=False, ignore_reinit_error=True)
Explanation: Init SparkContext
End of explanation
dim = 10
@ray.remote
class ParameterServer(object):
def __init__(self, dim):
self.parameters = np.zeros(dim)
def get_parameters(self):
return self.parameters
def update_parameters(self, update):
self.parameters += update
ps = ParameterServer.remote(dim)
Explanation: A simple parameter server can be implemented as a Python class in a few lines of code.
EXERCISE: Make the ParameterServer class an actor.
End of explanation
@ray.remote
def worker(ps, dim, num_iters):
for _ in range(num_iters):
# Get the latest parameters.
parameters = ray.get(ps.get_parameters.remote())
# Compute an update.
update = 1e-3 * parameters + np.ones(dim)
# Update the parameters.
ps.update_parameters.remote(update)
# Sleep a little to simulate a real workload.
time.sleep(0.5)
# Test that worker is implemented correctly. You do not need to change this line.
ray.get(worker.remote(ps, dim, 1))
# Start two workers.
worker_results = [worker.remote(ps, dim, 100) for _ in range(2)]
Explanation: A worker can be implemented as a simple Python function that repeatedly gets the latest parameters, computes an update to the parameters, and sends the update to the parameter server.
End of explanation
print(ray.get(ps.get_parameters.remote()))
Explanation: As the worker tasks are executing, you can query the parameter server from the driver and see the parameters changing in the background.
End of explanation
@ray.remote
class ParameterServerShard(object):
def __init__(self, sharded_dim):
self.parameters = np.zeros(sharded_dim)
def get_parameters(self):
return self.parameters
def update_parameters(self, update):
self.parameters += update
total_dim = (10 ** 8) // 8 # This works out to 100MB (we have 25 million
# float64 values, which are each 8 bytes).
num_shards = 2 # The number of parameter server shards.
assert total_dim % num_shards == 0, ('In this exercise, the number of shards must '
'perfectly divide the total dimension.')
# Start some parameter servers.
ps_shards = [ParameterServerShard.remote(total_dim // num_shards) for _ in range(num_shards)]
assert hasattr(ParameterServerShard, 'remote'), ('You need to turn ParameterServerShard into an '
'actor (by using the ray.remote keyword).')
Explanation: Sharding a Parameter Server
As the number of workers increases, the volume of updates being sent to the parameter server will increase. At some point, the network bandwidth into the parameter server machine or the computation down by the parameter server may be a bottleneck.
Suppose you have $N$ workers and $1$ parameter server, and suppose each of these is an actor that lives on its own machine. Furthermore, suppose the model size is $M$ bytes. Then sending all of the parameters from the workers to the parameter server will mean that $N * M$ bytes in total are sent to the parameter server. If $N = 100$ and $M = 10^8$, then the parameter server must receive ten gigabytes, which, assuming a network bandwidth of 10 gigabits per second, would take 8 seconds. This would be prohibitive.
On the other hand, if the parameters are sharded (that is, split) across K parameter servers, K is 100, and each parameter server lives on a separate machine, then each parameter server needs to receive only 100 megabytes, which can be done in 80 milliseconds. This is much better.
EXERCISE: The code below defines a parameter server shard class. Modify this class to make ParameterServerShard an actor. We will need to revisit this code soon and increase num_shards.
End of explanation
@ray.remote
def worker_task(total_dim, num_iters, *ps_shards):
# Note that ps_shards are passed in using Python's variable number
# of arguments feature. We do this because currently actor handles
# cannot be passed to tasks inside of lists or other objects.
for _ in range(num_iters):
# Get the current parameters from each parameter server.
parameter_shards = [ray.get(ps.get_parameters.remote()) for ps in ps_shards]
assert all([isinstance(shard, np.ndarray) for shard in parameter_shards]), (
'The parameter shards must be numpy arrays. Did you forget to call ray.get?')
# Concatenate them to form the full parameter vector.
parameters = np.concatenate(parameter_shards)
assert parameters.shape == (total_dim,)
# Compute an update.
update = np.ones(total_dim)
# Shard the update.
update_shards = np.split(update, len(ps_shards))
# Apply the updates to the relevant parameter server shards.
for ps, update_shard in zip(ps_shards, update_shards):
ps.update_parameters.remote(update_shard)
# Test that worker_task is implemented correctly. You do not need to change this line.
ray.get(worker_task.remote(total_dim, 1, *ps_shards))
Explanation: The code below implements a worker that does the following.
1. Gets the latest parameters from all of the parameter server shards.
2. Concatenates the parameters together to form the full parameter vector.
3. Computes an update to the parameters.
4. Partitions the update into one piece for each parameter server.
5. Applies the right update to each parameter server shard.
End of explanation
num_workers = 4
# Start some workers. Try changing various quantities and see how the
# duration changes.
start = time.time()
ray.get([worker_task.remote(total_dim, 5, *ps_shards) for _ in range(num_workers)])
print('This took {} seconds.'.format(time.time() - start))
Explanation: EXERCISE: Experiment by changing the number of parameter server shards, the number of workers, and the size of the data.
NOTE: Because these processes are all running on the same machine, network bandwidth will not be a limitation and sharding the parameter server will not help. To see the difference, you would need to run the application on multiple machines. There are still regimes where sharding a parameter server can help speed up computation on the same machine (by parallelizing the computation that the parameter server processes have to do). If you want to see this effect, you should implement a synchronous training application. In the asynchronous setting, the computation is staggered and so speeding up the parameter server usually does not matter.
End of explanation |
2,333 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lending Club
Step1: Abstract
Lending club offers an exciting alternative to the stock market by providing loans that others can invest in. They claim a 4% overall default rate and give a grade to loans that represents the chance of a loan ending up in default. In this project we found that the default rate is 18% instead of 4% and that only the grades A, B and C are profitable on average of which A and B give on average the highest return-on-investment (around 4.5%). Furthermore, we found that adding more features than only grade to predict loans ending in default is only marginal beneficial and logistic regression with all selected features performed the best (AUC 0.71). Features that were important for this prediction were found to be interest rate, annual income, term and debt-to-income. Of which a higher annual income gave a lower chance on default, while with he others the relationship was reversed. We further predicted grade to find the features that are important but that are already incorporated in grade. In this case Random Forest performed the best, but it predicted mostly grade A, therefore only the precision on the other grades was good (around 0.8). Features that were found to be important were either based on the amount that was borrowed or the amount of debt the borrower already had
Step2: Preprocessing
We want to give advice to investors which loans they should invest in. Therefore we selected for the prediction only the features that are known before the investors pick the loans they want to invest in. Also we deleted features that are not useful for prediction like 'id' and features that have all the same values. There was only one 'joint' loan application, while all others where individual loans. Hence we deleted this one loan also. If a feature had more than 10% missing features we deleted this feature from the features used for prediction. Moreover rows that had a missing value in one of the remaining features were deleted. The features 'earliest credit line' and 'issue date' were transformed to one feature, namely the number of days between the earliest credit line and the issue date of the loan. This was previously done by O’Rourke (2016). The values in the feature annual income were divided by 1000 and rounded-up in order to get more similar values and outliers (above 200,000) were transformed to 200,000. After these transformation we are left with 252,771 loans and 23 features and the percentage of 'charged off' loans is still 18%. We kept our self-created 'roi' feature for data exploration purposes, but this is not a feature we will use for prediction and it will be excluded later.
Step3: The selected features
Step4: In sklearn the features have to be numerical that we input in this algorithm, so we need to convert the categorical features to numeric. To do this ordered categorical features will have adjacent numbers and unordered features will get an order as best as possible during conversion to numeric, for instance geographical. These transformations could be done with a labelencoder from sklearn for instance, but we want to keep any order that is in the data, which might help with the prediction. With more features this would not be manageable but with this amount of features it is still doable. Also there cannot be nan/inf/-inf values, hence these will be made 0's. With this algorithm we will also have to scale and normalize the features.
Non-numeric features were converted as follows
Step5: Classification
We selected two algorithms to use for this project based on that they preformed the best on Lending Club datasets
Step6: Performance metrics
We will use a few metrics to test the performance of our classifier on the test set. First we will use confusion matrices and their statistics. A confusion matrix shows how many true negatives (TN), false positives (FP), false negatives (FN) and true positives (TP) there are. Secondly, we will use the F1-score. This is implemented as 'f1_weighted' in sklearn. This score can be interpreted as a weighted average of the precision and recall. Precision is defined as TP / (TP + FP), while recall is defined as TP / (TP + FN). The F1-score is supposed to deal better with classes of unequal size, as is the case in this project, than the standard accuracy metric, which could become really high if the algorithm only predicts the dominant class. Thirdly, we will show Receiver Operating Characteristic (ROC) curves, which deal very well with unequal sized classes. The Area Under the Curve (AUC)-score of the ROC-plot is always 0.5 for random result and above 0.5 for a better than random result with 1.0 as maximum score. And lastly, we will use the definition of Chang et al. (2015) for return-of-investment on a loan to represent how profitable a loan is. Their definition is ROI = (Total payment received by investors / Total amount committed by investors) − 1.
Results
Exploration
Lending Club claims that the default rate of their loans 4% is. We checked this out in the complete set we used for this project. And we found a 'charged off' rate (their default loan status) to be 5%. Hence a little higher than that they claimed, but not too much. But in this set there are a lot of loans that are still ongoing. For these loans you do not know whether they will end up in 'fully paid' or 'charged off'. Therefore we focus on loans that are closed. Of these loans a much higher percentage ends up in 'charged off', namely 18%.
Step7: Lending Club gives grades (A-G) to their loans so potential investors can see which of these loans are not so risky (A) and which are the riskier loans (G). To make it still worthwhile to invest in the riskier loans, investors supposedly get more interest on these loans. From the figure below we see that indeed the interest grade is higher for riskier loans, but that there are a few exceptions.
Step8: Apart from grade and interest rate there are of course other characteristics of the loans. A few of them are shown below. We see that loans are in the range of almost 0 until 35,000. Hence Lending Club loans seem to be an alternative for personal loans and credit cards and not mortgages. The loans are either 36 months (3 years) or 60 months (5 years) and mostly 3 years. The purpose of the loan is mostly debt consolidation and credit card. Therefore it seems to be mostly people that already have debts. The annual income was cut-off at 200,000 but lies mostly between 25,000 and 100,000.
Step9: We can speculate that some of the characteristics of loans have an influence on the loan ending up in 'charged off'. A first logical check is of course whether the Lending Club 'grade' is already visually a factor in loans ending up in this status. As we can see from the figure below, the 'grade' is very well correlated to the 'charged off proportion' of the loans. Only between F and G the difference is smaller. Hence Lending Club has built a pretty good algorithm to predict the 'charged off' status. Also higher interest loans seem to end up in 'charged off' more often as expected. Furthermore, with purpose the influence is not clearly visible. But with dti (debt-to-income) the difference is significant. This means the more debt a person has compared to their income, the more chance of the loan ending in 'charged off'. Lastly, with home ownership status the difference is visually present and also in numbers 'rent' has the highest charged off proportion, then 'own' and then 'mortgage'.
Step10: Another interesting question is whether it is profitable to invest in loans from Lending Club and whether the 'grade' is has influence on profitability. For this purpose we show the return-of-investment (ROI) overall and per grade. As is seen below the loans have an average of only 1.4% profit. And if we look per grade, only A-C results in profit on average. Loans that end up in 'charged off' are on average very bad for the profits since you will likely loose part of the principal as well. In the A-C categories the loans end up in 'charged off' less times and are therefore on average more profitable even though the loans in the riskier categories deliver more interest returns. The higher interest (more than 20% in the riskiest grades) does not compensate enough for the high 'charged off' ratio, which is around 40% in the riskiest grades as we saw before.
Step11: Prediction
Predicting status 'charged off'
As we saw in the exploration part, the 'grade' is already a pretty good characteristic to predict 'charged off' rate. Therefore we will first see whether adding any additional features is actually useful. Subsequently we will see if we can recreate 'grade' from the features to see which features are still useful but incorporated in 'grade'. In the methods is described which features were selected. We excluded all features not known at the start of the loan and features that are not predictive like the id of the loan or have a lot of missing values. Twenty-three of the features remain in this way. Logistic Regression and Random Forest, two algorithms that have performed well in the past on this dataset, will be used for the prediction. For optimal performance of the Logistic Regression algorithm, the C-parameter can be tuned (the inverse of the regularization strength) on the training set. This is only necessary in the case of using multiple features, because regularization is not useful in the case of one feature (grade in this case). The found optimal value for the C-parameter on the training set is 10.
Step12: We trained both our classifiers on both only 'grade' and all features. And with Logistic Regression we also trained one with top-5 features as selected by SelectKBest from sklearn. This is because Logistic Regression sometimes performs better with less features. We see that all F1-scores are around 0.75. Using all features instead of only grade gives only a very marginal increase of around 1% and using 5 features gives not increase. The best performing algorithm based on the F1-score is Logistic regression with all features. But the differences were very small. When looking at the confusion matrices it is clear that all algorithms mostly predict 'Fully Paid', since this is the dominant class (82%) accuracy scores will look pretty well, while the algorithm is actually not that great as can be seen from the confusion matrices. The F1-score metric was chosen based on the fact that it can deal better with unequal classes, but even that reports an score of 0.74 when Random Forest predicts all loans to be 'Fully Paid'. AUC is in this case a better metric, since also with uneven classes random remains 0.5. The algorithms with only grade give an AUC of 0.66. While the Logistic Regression with all features gives a score of 0.71 and Random Forest of 0.7. The top-5 features algorithm is in between those with 0.68. Hence again adding all features gives a little better performance (0.4-0.5) and Logistic Regression with all features performs the best. In the ROC-plot this is also displayed.
Step13: So adding features does lead to a little better performance. Therefore it is interesting to see which features are mostly used for this increase. The important features for logistic regression can be found by seeing which coefficients are used for the features. The bigger the coefficient the more the model uses this feature for prediction. For our best performing model, Logistic Regression with all features, the top-5 features with the biggest coefficients are
Step14: Re-creating grade
We saw from the section before that we only slightly outperform an algorithm with only grade by adding more features. Therefore we will see which features are predictive of grade and are in that way important. First a Logistic Regression algorithm is trained to predict the grades. We see that it mostly predicts everything as grade A. And the other grades are also not that well predicted except for G, but there are only 2 loans predicted as G. Random Forest on the other hand performs a little better. It also predicts most things as A, but we see some promising coloring on the diagonal of the confusion matrix plot (predicting the right grade) and the precision for these grades is around 0.8. The feature importance in Random Forest as implemented by sklearn as total decrease in node impurity (weighted by the probability of reaching that node (which is approximated by the proportion of samples reaching that node)) averaged over all trees of the ensemble. The most important features are found to be | Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
import seaborn as sns
from sklearn import preprocessing
from sklearn.model_selection import train_test_split, cross_val_score, GridSearchCV
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.metrics import roc_curve, auc, accuracy_score, f1_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import LabelBinarizer
from sklearn.multiclass import OneVsRestClassifier
from scipy.stats import ttest_ind
import matplotlib.dates as mdates
from pandas_confusion import ConfusionMatrix
import statsmodels.api as sm
sns.set_style('white')
Explanation: Lending Club
End of explanation
loans = pd.read_csv('../data/loan.csv')
loans['roi'] = ((loans['total_rec_int'] + loans['total_rec_prncp']
+ loans['total_rec_late_fee'] + loans['recoveries']) / loans['funded_amnt']) - 1
print('loans:',loans.shape)
print(loans['loan_status'].unique())
print('percentage charged off in all loans:',
round(sum(loans['loan_status']=='Charged Off')/len(loans['loan_status'])*100), '\n')
# selecting loans that went to full term
closed_loans = loans[loans['loan_status'].isin(['Fully Paid', 'Charged Off'])]
print('closed_loans:',closed_loans.shape)
print('precentage closed loans of total loans:', round(closed_loans.shape[0] / loans.shape[0] * 100, 1))
print('percentage charged off in closed loans:',
round(sum(closed_loans['loan_status']=='Charged Off')/len(closed_loans['loan_status'])*100))
Explanation: Abstract
Lending club offers an exciting alternative to the stock market by providing loans that others can invest in. They claim a 4% overall default rate and give a grade to loans that represents the chance of a loan ending up in default. In this project we found that the default rate is 18% instead of 4% and that only the grades A, B and C are profitable on average of which A and B give on average the highest return-on-investment (around 4.5%). Furthermore, we found that adding more features than only grade to predict loans ending in default is only marginal beneficial and logistic regression with all selected features performed the best (AUC 0.71). Features that were important for this prediction were found to be interest rate, annual income, term and debt-to-income. Of which a higher annual income gave a lower chance on default, while with he others the relationship was reversed. We further predicted grade to find the features that are important but that are already incorporated in grade. In this case Random Forest performed the best, but it predicted mostly grade A, therefore only the precision on the other grades was good (around 0.8). Features that were found to be important were either based on the amount that was borrowed or the amount of debt the borrower already had: revolving line utilization rate (the amount of credit used compared to all credit), installment (monthly payment), revolving balance (all credit), loan amount and debt-to-income. We recommend investors to invest in loans with grades A and B and to on top of that look for loans with short terms, loans of lower amounts, borrowers with little other debts and high incomes.
Introduction
Crowd funding has become a new and exciting way to get capital and to invest. Lending club has jumped into the trend by offering loans with fixed interest rates and terms that the public can choose to invest in. Lending club screens the loans that are applied for and only 10% gets approved and is subsequently offered to the public. By investing a small proportion in many different loans investors can diversify their portfolio and in this way keep the default risk to a minimum (which is estimated by lending club to be 4%). For their services lending club asks a fee of 1%. This is an interesting way for investors to get profit on their investment since it supposedly gives more stable returns than the stock market and higher interest rates than a savings account. The profits depend on the interest rate and the default rate. Therefore it is interesting to see whether certain characteristics of the loan or the buyer give a bigger chance of default. Hence this might help investors to upgrade their profits.
Lending Club has provided the public with their records via their website. A previous dataset was released that holds the records from 2007-2011 and there has also been a Kaggle contest with a preprocessed Lending Club dataset in the past. In April 2016 Lending Club has provided their 2007-2015 dataset through Kaggle as dataset, not as contest. This is the dataset we will be working on in this project. Nevertheless, previous work has usually been done on one of the earlier releases of their dataset. While most of the earlier work has been focused on predicting good loans from bad loans, which we also will be focusing on, most have incorporated also the current loans. This holds a problem, since loans with a 'late' status could still recover and end in 'fully paid'. And 'current' loans could still end in the status 'charged off' (Lending Club’s default status). This is why we will focus only on loans that are closed and are therefore have either status 'fully paid' or 'charged off'. The consequence is that previous work that has incorporated these current loans is not very comparable.
To predict whether a loan will end in 'charged off' we will use machine learning algorithms. According to previous work (Pandey and Srinivasan, 2014; Tsai et al.), both Logistic Regression and Random Forest have been found to work best. Although work that incorporates no external datasets usually ends up with an Area Under the Curve (AUC)-score of around 0.7. Which is not really great, but better than chance. The most important feature is usually found to be 'grade'. This is a measure for risk assessment of the loans given by Lending Club itself. The categories are A-G including subcategories like A1 etc. The idea is that the closer to G the higher the chance on default. Usually the interest rate is also higher for the riskier loans in order to make these loans still attractive for investors.
In this project, we will first focus on exploring the data. We will see whether Lending Club is right about their claimed 4% default rate. Subsequently, we will look into whether loans with higher grades have indeed higher interest rates and higher default rates. And we will close the exploration part with how profitable the loans with the different grade categories actually are on average. Hereafter we will move on to the prediction part. Where we will use Random Forest and Logistic Regression to predict the 'charged off' from the 'fully paid' loans. We will see if an algorithm with just grade performs just as good as an algorithm with all features. Hence that adding features gives no benefit from the metric Lending Club already provides. Furthermore, we will try to recreate grade from the features, to see whether Lending Club provides the features they use for their algorithm and which features are important because they are used to create grade. And finally, we will give some recommendations to the investors of Lending Club.
Methods
Dataset
For this project the Lending Club dataset from Kaggle was used (https://www.kaggle.com/wendykan/lending-club-loan-data). This file contains complete loan data for loans issued between 2007 and 2015. There are 887,379 loans in the file and 74 features. A self-created feature ROI was added (Chang et al., 2015), but this is not part of the dataset. A couple of features have to do with the loan (32) and a couple of them have to do with the borrower (42). The feature we are interested in to predict is 'loan status'. In this case we are only interested in loans that went to full term. Hence we selected the loans that had either status 'fully paid' or 'charged off'. Statuses 'issued', 'current', 'default', 'late (31-120 days)', 'late (16-30 days)' and 'in grace period' are loans that are still ongoing for which you cannot be certain yet how they will end up. 'Does not meet credit policy' loans would not be issued today, so are not useful for future investors. In all the loans 5% has the status 'charged off'. After selecting only the loans that went to full term, we are left with 252,971 loans. This is 28.5% of the number of loans we started with. Of these 18% have the status 'charged off'.
End of explanation
include = ['term', 'int_rate', 'installment', 'grade', 'sub_grade', 'emp_length', 'home_ownership',
'annual_inc', 'purpose', 'zip_code', 'addr_state', 'delinq_2yrs', 'earliest_cr_line', 'inq_last_6mths',
'mths_since_last_delinq', 'mths_since_last_record', 'open_acc', 'pub_rec', 'revol_bal', 'revol_util', 'total_acc',
'mths_since_last_major_derog', 'acc_now_delinq', 'loan_amnt', 'open_il_6m', 'open_il_12m',
'open_il_24m', 'mths_since_rcnt_il', 'total_bal_il', 'dti', 'open_acc_6m', 'tot_cur_bal',
'il_util', 'open_rv_12m', 'open_rv_24m', 'max_bal_bc', 'all_util', 'total_rev_hi_lim', 'inq_fi', 'total_cu_tl',
'inq_last_12m', 'issue_d', 'loan_status', 'roi']
# exclude the one joint application
closed_loans = closed_loans[closed_loans['application_type'] == 'INDIVIDUAL']
# make id index
closed_loans.index = closed_loans.id
# include only the features above
closed_loans = closed_loans[include]
# exclude features with more than 10% missing values
columns_not_missing = (closed_loans.isnull().apply(sum, 0) / len(closed_loans)) < 0.1
closed_loans = closed_loans.loc[:,columns_not_missing[columns_not_missing].index]
# delete rows with NANs
closed_loans = closed_loans.dropna()
# calculate nr of days between earliest creditline and issue date of the loan
# delete the two original features
closed_loans['earliest_cr_line'] = pd.to_datetime(closed_loans['earliest_cr_line'])
closed_loans['issue_d'] = pd.to_datetime(closed_loans['issue_d'])
closed_loans['days_since_first_credit_line'] = closed_loans['issue_d'] - closed_loans['earliest_cr_line']
closed_loans['days_since_first_credit_line'] = closed_loans['days_since_first_credit_line'] / np.timedelta64(1, 'D')
closed_loans = closed_loans.drop(['earliest_cr_line', 'issue_d'], axis=1)
# round-up annual_inc and cut-off outliers annual_inc at 200.000
closed_loans['annual_inc'] = np.ceil(closed_loans['annual_inc'] / 1000)
closed_loans.loc[closed_loans['annual_inc'] > 200, 'annual_inc'] = 200
print(closed_loans.shape)
print('percentage charged off in closed loans:',
round(sum(closed_loans['loan_status']=='Charged Off') / len(closed_loans['loan_status']) * 100))
Explanation: Preprocessing
We want to give advice to investors which loans they should invest in. Therefore we selected for the prediction only the features that are known before the investors pick the loans they want to invest in. Also we deleted features that are not useful for prediction like 'id' and features that have all the same values. There was only one 'joint' loan application, while all others where individual loans. Hence we deleted this one loan also. If a feature had more than 10% missing features we deleted this feature from the features used for prediction. Moreover rows that had a missing value in one of the remaining features were deleted. The features 'earliest credit line' and 'issue date' were transformed to one feature, namely the number of days between the earliest credit line and the issue date of the loan. This was previously done by O’Rourke (2016). The values in the feature annual income were divided by 1000 and rounded-up in order to get more similar values and outliers (above 200,000) were transformed to 200,000. After these transformation we are left with 252,771 loans and 23 features and the percentage of 'charged off' loans is still 18%. We kept our self-created 'roi' feature for data exploration purposes, but this is not a feature we will use for prediction and it will be excluded later.
End of explanation
closed_loans.columns
Explanation: The selected features:
- term: the number of payments on the loan. Values are in months and can be either
36 or 60
- int_rate: interest rate
- installment: height monthly pay
- grade: A-G, A low risk, G high risk
- sub_grade: A1-G5
- emp_length: 0-10 years (10 stands for >=10)
- home_ownership: 'RENT', 'OWN', 'MORTGAGE', 'OTHER', 'NONE' and 'ANY'
- annual_inc: annual income stated by borrower, divided by 1000 and rounded-up, 200 stand for >=200,000
- purpose: 'credit_card', 'car', 'small_business', 'other', 'wedding', 'debt_consolidation', 'home_improvement', 'major_purchase', 'medical', 'moving', 'vacation', 'house', 'renewable_energy' and 'educational'
- zip_code: first 3 numbers followed by 2 times x
- addr_state: two letters representing the state the borrower lives in
- delinq_2yrs: the number of 30+ days past-due incidences of delinquency in the borrower's credit file for the past 2 years
- inq_last_6mths: the number of inquiries by creditors during the past 6 months
- open_acc: the number of open credit lines in the borrower’s credit file
- pub_rec: number of derogatory public records
- revol_bal: total credit revolving balance
- revol_util: revolving line utilization rate, or the amount of credit the borrower is using
relative to all available revolving credit
- total_acc: the total number of credit lines currently in the borrower’s credit file
- acc_now_delinq: the number of accounts on which the borrower is now delinquent
- loan_amnt: the listed amount of the loan applied for by the borrower
- dti: a ratio calculated using the borrower’s total monthly debt payments on the
total debt obligations, excluding mortgage and the requested LC loan, divided
by the borrower’s self-reported monthly income
- loan_status: the listed amount of the loan applied for by the borrower
- days_since_first_credit_line: self created feature, days between earliest credit line and issue date
End of explanation
# features that are not float or int, so not to be converted:
# ordered:
# sub_grade, emp_length, zip_code, term
# unordered:
# home_ownership, purpose, addr_state (ordered geographically)
closed_loans_predict = closed_loans.copy()
# term
closed_loans_predict['term'] = closed_loans_predict['term'].apply(lambda x: int(x.split(' ')[1]))
# grade
closed_loans_predict['grade'] = closed_loans_predict['grade'].astype('category')
grade_dict = {'A': 1, 'B': 2, 'C': 3, 'D': 4, 'E': 5, 'F': 6, 'G': 7}
closed_loans_predict['grade'] = closed_loans_predict['grade'].apply(lambda x: grade_dict[x])
# emp_length
emp_length_dict = {'n/a':0,
'< 1 year':0,
'1 year':1,
'2 years':2,
'3 years':3,
'4 years':4,
'5 years':5,
'6 years':6,
'7 years':7,
'8 years':8,
'9 years':9,
'10+ years':10}
closed_loans_predict['emp_length'] = closed_loans_predict['emp_length'].apply(lambda x: emp_length_dict[x])
# zipcode
closed_loans_predict['zip_code'] = closed_loans_predict['zip_code'].apply(lambda x: int(x[0:3]))
# subgrade
closed_loans_predict['sub_grade'] = (closed_loans_predict['grade']
+ closed_loans_predict['sub_grade'].apply(lambda x: float(list(x)[1])/10))
# house
house_dict = {'NONE': 0, 'OTHER': 0, 'ANY': 0, 'RENT': 1, 'MORTGAGE': 2, 'OWN': 3}
closed_loans_predict['home_ownership'] = closed_loans_predict['home_ownership'].apply(lambda x: house_dict[x])
# purpose
purpose_dict = {'other': 0, 'small_business': 1, 'renewable_energy': 2, 'home_improvement': 3,
'house': 4, 'educational': 5, 'medical': 6, 'moving': 7, 'car': 8,
'major_purchase': 9, 'wedding': 10, 'vacation': 11, 'credit_card': 12,
'debt_consolidation': 13}
closed_loans_predict['purpose'] = closed_loans_predict['purpose'].apply(lambda x: purpose_dict[x])
# states
state_dict = {'AK': 0, 'WA': 1, 'ID': 2, 'MT': 3, 'ND': 4, 'MN': 5,
'OR': 6, 'WY': 7, 'SD': 8, 'WI': 9, 'MI': 10, 'NY': 11,
'VT': 12, 'NH': 13, 'MA': 14, 'CT': 15, 'RI': 16, 'ME': 17,
'CA': 18, 'NV': 19, 'UT': 20, 'CO': 21, 'NE': 22, 'IA': 23,
'KS': 24, 'MO': 25, 'IL': 26, 'IN': 27, 'OH': 28, 'PA': 29,
'NJ': 30, 'KY': 31, 'WV': 32, 'VA': 33, 'DC': 34, 'MD': 35,
'DE': 36, 'AZ': 37, 'NM': 38, 'OK': 39, 'AR': 40, 'TN': 41,
'NC': 42, 'TX': 43, 'LA': 44, 'MS': 45, 'AL': 46, 'GA': 47,
'SC': 48, 'FL': 49, 'HI': 50}
closed_loans_predict['addr_state'] = closed_loans_predict['addr_state'].apply(lambda x: state_dict[x])
# make NA's, inf and -inf 0
closed_loans_predict = closed_loans_predict.fillna(0)
closed_loans_predict = closed_loans_predict.replace([np.inf, -np.inf], 0)
Explanation: In sklearn the features have to be numerical that we input in this algorithm, so we need to convert the categorical features to numeric. To do this ordered categorical features will have adjacent numbers and unordered features will get an order as best as possible during conversion to numeric, for instance geographical. These transformations could be done with a labelencoder from sklearn for instance, but we want to keep any order that is in the data, which might help with the prediction. With more features this would not be manageable but with this amount of features it is still doable. Also there cannot be nan/inf/-inf values, hence these will be made 0's. With this algorithm we will also have to scale and normalize the features.
Non-numeric features were converted as follows:
- grade/sub_grade: order of the letters was kept
- emp_length: nr of years
- zipcode: numbers kept of zipcode (geographical order)
- term: in months
- home_ownership: from none/any/other to rent to mortgage to owned
- purpose: from purposes that might make money to purposes that only cost money
- addr_state: ordered geographically from west to east, top to bottom (https://theusa.nl/staten/)
End of explanation
# split data in train (70%) and test set (30%)
X_train, X_test, y_train, y_test = train_test_split(closed_loans_predict.drop(['loan_status', 'roi'], axis=1),
closed_loans_predict['loan_status'],
test_size=0.3, random_state=123)
# scaling and normalizing the features
X_train_scaled = preprocessing.scale(X_train)
scaler = preprocessing.StandardScaler().fit(X_train)
X_train_scaled = pd.DataFrame(X_train_scaled, columns=X_train.columns)
# scale test set with scaling used in train set
X_test_scaled = scaler.transform(X_test)
X_test_scaled = pd.DataFrame(X_test_scaled, columns=X_test.columns)
Explanation: Classification
We selected two algorithms to use for this project based on that they preformed the best on Lending Club datasets: Logistic Regression and Random Forest. The Logistic Regression classifier is a simple classifier that uses a sigmoidal curve to predict from the features to which class the sample belongs. It has one parameter to tune namely the C-parameter. This parameter is the inverse of the regularization strength and smaller values specify stronger regularization. We will be using l1/lasso-regularization in the case of multiple features. With this algorithm we will also have to scale and normalize the features. Sometimes this algorithm has been found to perform better with fewer features on a Lending Club dataset.
Random Forest is a more complicated algorithm that scores well in a lot of cases. This algorithm makes various decision trees from subsets of the samples and uses at each split only a fraction of the features to prevent overfitting. The Random Forest algorithm is known to be not very sensitive to the values of its parameters: the number of features used at each split and the number of trees in the forest. Nevertheless, the default of sklearn is so low that we will raise the number of trees to 100. The algorithm has feature selection already built-in (at each split) and scaling/normalization is also not necessary.
For the classification we will split the data in a train (70%) and a test set (30%). The test set is used to evaluate the performance of our classifier.
End of explanation
print('percentage charged off in all loans:',
round(sum(loans['loan_status']=='Charged Off')/len(loans['loan_status'])*100))
print('percentage charged off in closed loans:',
round(sum(closed_loans['loan_status']=='Charged Off') / len(closed_loans['loan_status']) * 100))
Explanation: Performance metrics
We will use a few metrics to test the performance of our classifier on the test set. First we will use confusion matrices and their statistics. A confusion matrix shows how many true negatives (TN), false positives (FP), false negatives (FN) and true positives (TP) there are. Secondly, we will use the F1-score. This is implemented as 'f1_weighted' in sklearn. This score can be interpreted as a weighted average of the precision and recall. Precision is defined as TP / (TP + FP), while recall is defined as TP / (TP + FN). The F1-score is supposed to deal better with classes of unequal size, as is the case in this project, than the standard accuracy metric, which could become really high if the algorithm only predicts the dominant class. Thirdly, we will show Receiver Operating Characteristic (ROC) curves, which deal very well with unequal sized classes. The Area Under the Curve (AUC)-score of the ROC-plot is always 0.5 for random result and above 0.5 for a better than random result with 1.0 as maximum score. And lastly, we will use the definition of Chang et al. (2015) for return-of-investment on a loan to represent how profitable a loan is. Their definition is ROI = (Total payment received by investors / Total amount committed by investors) − 1.
Results
Exploration
Lending Club claims that the default rate of their loans 4% is. We checked this out in the complete set we used for this project. And we found a 'charged off' rate (their default loan status) to be 5%. Hence a little higher than that they claimed, but not too much. But in this set there are a lot of loans that are still ongoing. For these loans you do not know whether they will end up in 'fully paid' or 'charged off'. Therefore we focus on loans that are closed. Of these loans a much higher percentage ends up in 'charged off', namely 18%.
End of explanation
closed_loans['grade'] = closed_loans['grade'].astype('category', ordered=True)
sns.boxplot(data=closed_loans, x='grade', y='int_rate', color='turquoise')
Explanation: Lending Club gives grades (A-G) to their loans so potential investors can see which of these loans are not so risky (A) and which are the riskier loans (G). To make it still worthwhile to invest in the riskier loans, investors supposedly get more interest on these loans. From the figure below we see that indeed the interest grade is higher for riskier loans, but that there are a few exceptions.
End of explanation
sns.distplot(closed_loans['loan_amnt'], kde=False, bins=50)
plt.show()
sns.countplot(closed_loans['term'], color='turquoise')
plt.show()
sns.countplot(closed_loans['purpose'], color='turquoise')
plt.xticks(rotation=90)
plt.show()
ax = sns.distplot(closed_loans['annual_inc'], bins=100, kde=False)
plt.xlim([0,200])
ax.set(xlabel='annual income (x 1000)')
plt.show()
Explanation: Apart from grade and interest rate there are of course other characteristics of the loans. A few of them are shown below. We see that loans are in the range of almost 0 until 35,000. Hence Lending Club loans seem to be an alternative for personal loans and credit cards and not mortgages. The loans are either 36 months (3 years) or 60 months (5 years) and mostly 3 years. The purpose of the loan is mostly debt consolidation and credit card. Therefore it seems to be mostly people that already have debts. The annual income was cut-off at 200,000 but lies mostly between 25,000 and 100,000.
End of explanation
grade_status = closed_loans.reset_index().groupby(['grade', 'loan_status'])['id'].count()
risk_grades = dict.fromkeys(closed_loans['grade'].unique())
for g in risk_grades.keys():
risk_grades[g] = grade_status.loc[(g, 'Charged Off')] / (grade_status.loc[(g, 'Charged Off')] + grade_status.loc[(g, 'Fully Paid')])
risk_grades = pd.DataFrame(risk_grades, index=['proportion_unpaid_loans'])
sns.stripplot(data=risk_grades, color='darkgray', size=15)
plt.show()
sns.distplot(closed_loans[closed_loans['loan_status']=='Charged Off']['int_rate'])
sns.distplot(closed_loans[closed_loans['loan_status']=='Fully Paid']['int_rate'])
plt.show()
purpose_paid = closed_loans.reset_index().groupby(['purpose', 'loan_status'])['id'].count()
sns.barplot(data=pd.DataFrame(purpose_paid).reset_index(), x='purpose', y='id', hue='loan_status')
plt.xticks(rotation=90)
plt.show()
sns.boxplot(data=closed_loans, x='loan_status', y='dti')
plt.show()
print(ttest_ind(closed_loans[closed_loans['loan_status']=='Fully Paid']['dti'],
closed_loans[closed_loans['loan_status']=='Charged Off']['dti']))
print((closed_loans[closed_loans['loan_status']=='Fully Paid']['dti']).mean())
print((closed_loans[closed_loans['loan_status']=='Charged Off']['dti']).mean())
home_paid = closed_loans.reset_index().groupby(['home_ownership', 'loan_status'])['id'].count()
sns.barplot(data=pd.DataFrame(home_paid).reset_index(), x='home_ownership', y='id', hue='loan_status')
plt.xticks(rotation=90)
plt.show()
print(home_paid)
print('mortgage:', home_paid['MORTGAGE'][0] / (home_paid['MORTGAGE'][0] + home_paid['MORTGAGE'][1]))
print('own:', home_paid['OWN'][0] / (home_paid['OWN'][0] + home_paid['OWN'][1]))
print('rent:', home_paid['RENT'][0] / (home_paid['RENT'][0] + home_paid['RENT'][1]))
Explanation: We can speculate that some of the characteristics of loans have an influence on the loan ending up in 'charged off'. A first logical check is of course whether the Lending Club 'grade' is already visually a factor in loans ending up in this status. As we can see from the figure below, the 'grade' is very well correlated to the 'charged off proportion' of the loans. Only between F and G the difference is smaller. Hence Lending Club has built a pretty good algorithm to predict the 'charged off' status. Also higher interest loans seem to end up in 'charged off' more often as expected. Furthermore, with purpose the influence is not clearly visible. But with dti (debt-to-income) the difference is significant. This means the more debt a person has compared to their income, the more chance of the loan ending in 'charged off'. Lastly, with home ownership status the difference is visually present and also in numbers 'rent' has the highest charged off proportion, then 'own' and then 'mortgage'.
End of explanation
roi = closed_loans.groupby('grade')['roi'].mean()
print(roi)
print(closed_loans['roi'].mean())
sns.barplot(data=roi.reset_index(), x='grade', y='roi', color='gray')
plt.show()
roi = closed_loans.groupby(['grade', 'loan_status'])['roi'].mean()
sns.barplot(data=roi.reset_index(), x='roi', y='grade', hue='loan_status', orient='h')
plt.show()
sns.countplot(data=closed_loans, x='grade', hue='loan_status')
plt.show()
Explanation: Another interesting question is whether it is profitable to invest in loans from Lending Club and whether the 'grade' is has influence on profitability. For this purpose we show the return-of-investment (ROI) overall and per grade. As is seen below the loans have an average of only 1.4% profit. And if we look per grade, only A-C results in profit on average. Loans that end up in 'charged off' are on average very bad for the profits since you will likely loose part of the principal as well. In the A-C categories the loans end up in 'charged off' less times and are therefore on average more profitable even though the loans in the riskier categories deliver more interest returns. The higher interest (more than 20% in the riskiest grades) does not compensate enough for the high 'charged off' ratio, which is around 40% in the riskiest grades as we saw before.
End of explanation
# parameter tuning Logistic Regression
dict_Cs = {'C': [0.001, 0.1, 1, 10, 100]}
clf = GridSearchCV(LogisticRegression(penalty='l1'), dict_Cs, 'f1_weighted', cv=10)
clf.fit(X_train_scaled, y_train)
print(clf.best_params_)
print(clf.best_score_)
Explanation: Prediction
Predicting status 'charged off'
As we saw in the exploration part, the 'grade' is already a pretty good characteristic to predict 'charged off' rate. Therefore we will first see whether adding any additional features is actually useful. Subsequently we will see if we can recreate 'grade' from the features to see which features are still useful but incorporated in 'grade'. In the methods is described which features were selected. We excluded all features not known at the start of the loan and features that are not predictive like the id of the loan or have a lot of missing values. Twenty-three of the features remain in this way. Logistic Regression and Random Forest, two algorithms that have performed well in the past on this dataset, will be used for the prediction. For optimal performance of the Logistic Regression algorithm, the C-parameter can be tuned (the inverse of the regularization strength) on the training set. This is only necessary in the case of using multiple features, because regularization is not useful in the case of one feature (grade in this case). The found optimal value for the C-parameter on the training set is 10.
End of explanation
# Logistic Regression only grade
clf = LogisticRegression(penalty='l1', C=10)
clf.fit(X_train_scaled.loc[:,['grade']], y_train)
prediction = clf.predict(X_test_scaled.loc[:,['grade']])
# F1-score
print('f1_score:', f1_score(y_test, prediction, average='weighted'))
# AUC
y_score = clf.predict_proba(X_test_scaled.loc[:,['grade']])
fpr1, tpr1, thresholds = roc_curve(np.array(y_test), y_score[:,0], pos_label='Charged Off')
auc1 = round(auc(fpr1, tpr1), 2)
print('auc:', auc1)
# Confusion matrix
confusion_matrix = ConfusionMatrix(np.array(y_test), prediction)
confusion_matrix.print_stats()
confusion_matrix.plot()
# Logistic Regression all features
clf = LogisticRegression(penalty='l1', C=10)
clf.fit(X_train_scaled, y_train)
prediction = clf.predict(X_test_scaled)
# F1-score
print(f1_score(y_test, prediction, average='weighted'))
# AUC
y_score = clf.predict_proba(X_test_scaled)
fpr2, tpr2, thresholds = roc_curve(np.array(y_test), y_score[:,0], pos_label='Charged Off')
auc2 = round(auc(fpr2, tpr2), 2)
print('auc:', auc2)
# Confusion matrix
confusion_matrix = ConfusionMatrix(np.array(y_test), prediction)
confusion_matrix.print_stats()
confusion_matrix.plot()
# Logistic Regression top-5 features selected with Select-K-Best
new_X = (SelectKBest(mutual_info_classif, k=5)
.fit_transform(X_train_scaled, y_train))
clf = LogisticRegression(penalty='l1', C=10)
clf.fit(new_X, y_train)
prediction = clf.predict(X_test_scaled.loc[:, ['term', 'int_rate', 'installment', 'grade', 'sub_grade']])
# F1-score
print(f1_score(y_test, prediction, average='weighted'))
# AUC
y_score = clf.predict_proba(X_test_scaled.loc[:, ['term', 'int_rate', 'installment', 'grade', 'sub_grade']])
fpr3, tpr3, thresholds = roc_curve(np.array(y_test), y_score[:,0], pos_label='Charged Off')
auc3 = round(auc(fpr3, tpr3), 2)
print('auc:', auc3)
# Confusion matrix
confusion_matrix = ConfusionMatrix(np.array(y_test), prediction)
confusion_matrix.print_stats()
confusion_matrix.plot()
# Random Forest only grade
clf = RandomForestClassifier(n_estimators=100)
clf.fit(X_train.loc[:,['grade']], y_train)
prediction = clf.predict(X_test.loc[:,['grade']])
# F1-score
print(f1_score(y_test, prediction, average='weighted'))
# AUC
y_score = clf.predict_proba(X_test.loc[:,['grade']])
fpr4, tpr4, thresholds = roc_curve(np.array(y_test), y_score[:,0], pos_label='Charged Off')
auc4 = round(auc(fpr4, tpr4), 2)
print('auc:', auc4)
# Confusion matrix
confusion_matrix = ConfusionMatrix(np.array(y_test), prediction)
print(confusion_matrix)
confusion_matrix.plot()
# Random Forest all features
clf = RandomForestClassifier(n_estimators=100)
clf.fit(X_train, y_train)
prediction = clf.predict(X_test)
# F1-score
print(f1_score(y_test, prediction, average='weighted'))
# AUC
y_score = clf.predict_proba(X_test)
fpr5, tpr5, thresholds = roc_curve(np.array(y_test), y_score[:,0], pos_label='Charged Off')
auc5 = round(auc(fpr5, tpr5), 2)
print('auc:', auc5)
# Confusion matrix
confusion_matrix = ConfusionMatrix(np.array(y_test), prediction)
print(confusion_matrix)
confusion_matrix.plot()
# ROC-plot with AUC scores.
plt.plot(fpr1, tpr1, label='Logreg grade (auc = %0.2f)' % auc1, linewidth=4)
plt.plot(fpr2, tpr2, label='Logreg all (auc = %0.2f)' % auc2, linewidth=4)
plt.plot(fpr3, tpr3, label='Logreg top-5 (auc = %0.2f)' % auc3, linewidth=4)
plt.plot(fpr4, tpr4, label='RF grade (auc = %0.2f)' % auc4, linewidth=4)
plt.plot(fpr5, tpr5, label='RF all (auc = %0.2f)' % auc5, linewidth=4)
plt.legend(loc="lower right")
plt.show()
Explanation: We trained both our classifiers on both only 'grade' and all features. And with Logistic Regression we also trained one with top-5 features as selected by SelectKBest from sklearn. This is because Logistic Regression sometimes performs better with less features. We see that all F1-scores are around 0.75. Using all features instead of only grade gives only a very marginal increase of around 1% and using 5 features gives not increase. The best performing algorithm based on the F1-score is Logistic regression with all features. But the differences were very small. When looking at the confusion matrices it is clear that all algorithms mostly predict 'Fully Paid', since this is the dominant class (82%) accuracy scores will look pretty well, while the algorithm is actually not that great as can be seen from the confusion matrices. The F1-score metric was chosen based on the fact that it can deal better with unequal classes, but even that reports an score of 0.74 when Random Forest predicts all loans to be 'Fully Paid'. AUC is in this case a better metric, since also with uneven classes random remains 0.5. The algorithms with only grade give an AUC of 0.66. While the Logistic Regression with all features gives a score of 0.71 and Random Forest of 0.7. The top-5 features algorithm is in between those with 0.68. Hence again adding all features gives a little better performance (0.4-0.5) and Logistic Regression with all features performs the best. In the ROC-plot this is also displayed.
End of explanation
clf = LogisticRegression(penalty='l1', C=10)
clf.fit(X_train_scaled, y_train)
coefs = clf.coef_
# find index of top 5 highest coefficients, aka most used features for prediction
positions = abs(coefs[0]).argsort()[-5:][::-1]
features = list(X_train_scaled.columns[positions])
print(features)
print(coefs[0][positions])
print(clf.classes_)
# use statsmodels logistic regression to get p-values for the top-5 most used features
logit = sm.Logit(y_train == 'Charged Off', np.array(X_train_scaled.loc[:, features]))
result = logit.fit()
print(result.summary())
Explanation: So adding features does lead to a little better performance. Therefore it is interesting to see which features are mostly used for this increase. The important features for logistic regression can be found by seeing which coefficients are used for the features. The bigger the coefficient the more the model uses this feature for prediction. For our best performing model, Logistic Regression with all features, the top-5 features with the biggest coefficients are: interest rate, annual income, subgrade, term and dti. The first and last two have a negative coefficient and the other two a positive one. It seems that the algorithm choose 'fully paid' as the positive class. Therefore a negative coefficient for interest rate means that the higher the interest rate the smaller the chance on 'fully paid'. This makes sense since grade is related to interest rate and the higher the grade, the higher the chance on 'charged off'. A shorter time period (term) gives less chance on 'charged off'. This also seems logical. And the less debt-to-income the less chance a loans ends up in ‘charged off’, which also makes sense. Grade is not in the top-5 features but the redundant feature subgrade is, only the strange thing is that the algorithm gave it a positive coefficient, hence the higher the subgrade, the more chance on 'fully paid'. This makes no sense. Annual income on the other hand is logical, since more annual income giving a bigger chance on 'fully paid' seems plausible. Subsequently these features were put in a logistic regression model of the package statsmodels to get p-values for the features. Here the signs of the coefficients are exactly reversed, so it seems to have chosen 'charged off' as the positive class. All the features for which the sign logically makes sense are significant, only subgrade is not. Hence this fits with what seems logical that the sign of subgrade is a mistake.
End of explanation
# split data in train (70%) and test set (30%) stratify by loan_status
X_train, X_test, y_train, y_test = train_test_split(closed_loans_predict.drop(['grade', 'sub_grade', 'int_rate', 'roi', 'loan_status']
, axis=1),
closed_loans['grade'], test_size=0.3,
random_state=123, stratify=closed_loans['loan_status'])
# scaling and normalizing the features
X_train_scaled = preprocessing.scale(X_train)
scaler = preprocessing.StandardScaler().fit(X_train)
X_train_scaled = pd.DataFrame(X_train_scaled, columns=X_train.columns)
# scale test set with scaling used in train set
X_test_scaled = scaler.transform(X_test)
X_test_scaled = pd.DataFrame(X_test_scaled, columns=X_test.columns)
# binarize the labels for multiclass onevsall prediction
lb = LabelBinarizer()
grades = ['A', 'B', 'C', 'D', 'E', 'F', 'G']
lb.fit(grades)
y_train_2 = lb.transform(y_train)
# Logistic Regression predicting grade from the other features (excluding interest rate and subgrade)
clf = OneVsRestClassifier(LogisticRegression(penalty='l1'))
predict_y = clf.fit(X_train_scaled, y_train_2).predict(X_test_scaled)
predict_y = lb.inverse_transform(predict_y)
# confusion matrix
confusion_matrix = ConfusionMatrix(np.array(y_test, dtype='<U1'), predict_y)
confusion_matrix.plot()
confusion_matrix.print_stats()
# Random Forest predicting grade from the other features (excluding interest rate and subgrade)
clf = OneVsRestClassifier(RandomForestClassifier(n_estimators=100))
predict_y = clf.fit(X_train, y_train_2).predict(X_test)
predict_y = lb.inverse_transform(predict_y)
# confusion matrix
confusion_matrix = ConfusionMatrix(np.array(y_test, dtype='<U1'), predict_y)
confusion_matrix.plot()
confusion_matrix.print_stats()
# important features
features = []
for i,j in enumerate(grades):
print('\n',j)
feat_imp = clf.estimators_[i].feature_importances_
positions = abs(feat_imp).argsort()[-5:][::-1]
features.extend(list(X_train.columns[positions]))
print(X_train.columns[positions])
print(feat_imp[positions])
print(pd.Series(features).value_counts())
# Excluding loans with grade A
# split data in train (70%) and test set (30%) stratify by loan_status
no_A_loans = closed_loans_predict[closed_loans['grade']!='A']
X_train, X_test, y_train, y_test = train_test_split(no_A_loans.drop(['grade', 'sub_grade', 'int_rate', 'roi', 'loan_status']
, axis=1),
closed_loans[closed_loans['grade']!='A']['grade'], test_size=0.3,
random_state=123, stratify=closed_loans[closed_loans['grade']!='A']['loan_status'])
# scaling and normalizing the features
X_train_scaled = preprocessing.scale(X_train)
scaler = preprocessing.StandardScaler().fit(X_train)
X_train_scaled = pd.DataFrame(X_train_scaled, columns=X_train.columns)
# scale test set with scaling used in train set
X_test_scaled = scaler.transform(X_test)
X_test_scaled = pd.DataFrame(X_test_scaled, columns=X_test.columns)
# binarize the labels for multiclass onevsall prediction
lb = LabelBinarizer()
grades = ['B', 'C', 'D', 'E', 'F', 'G']
lb.fit(grades)
y_train_2 = lb.transform(y_train)
# Excluding loans with grade A
# Random Forest predicting grade from the other features (excluding interest rate and subgrade)
clf = OneVsRestClassifier(RandomForestClassifier(n_estimators=100))
predict_y = clf.fit(X_train, y_train_2).predict(X_test)
predict_y = lb.inverse_transform(predict_y)
# confusion matrix
confusion_matrix = ConfusionMatrix(np.array(y_test, dtype='<U1'), predict_y)
confusion_matrix.plot()
confusion_matrix.print_stats()
# important features
features = []
for i,j in enumerate(grades):
print('\n',j)
feat_imp = clf.estimators_[i].feature_importances_
positions = abs(feat_imp).argsort()[-5:][::-1]
features.extend(list(X_train.columns[positions]))
print(X_train.columns[positions])
print(feat_imp[positions])
print(pd.Series(features).value_counts())
Explanation: Re-creating grade
We saw from the section before that we only slightly outperform an algorithm with only grade by adding more features. Therefore we will see which features are predictive of grade and are in that way important. First a Logistic Regression algorithm is trained to predict the grades. We see that it mostly predicts everything as grade A. And the other grades are also not that well predicted except for G, but there are only 2 loans predicted as G. Random Forest on the other hand performs a little better. It also predicts most things as A, but we see some promising coloring on the diagonal of the confusion matrix plot (predicting the right grade) and the precision for these grades is around 0.8. The feature importance in Random Forest as implemented by sklearn as total decrease in node impurity (weighted by the probability of reaching that node (which is approximated by the proportion of samples reaching that node)) averaged over all trees of the ensemble. The most important features are found to be: revolving line utilization rate (the amount of credit used compared to all credit), installment (monthly payment), revolving balance (all credit), loan amount and debt-to-income. So the grade seems to be mostly based on the amount borrowed (loan amount and installment) and the debt the borrower already has (revolving_util, revolving_bal, dti). It makes sense that these things might be important. Nevertheless, the recreated algorithm is by far not as good as the one of Lending Club. If we leave out loans with grade A than our algorithm just predicts most things as grade B, while the rest of the grades are a lot more accurate. Either Lending Club trained a way better algorithm and/or Lending Club does not make all characteristics of the loans they use for their algorithm public knowledge.
End of explanation |
2,334 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<div class="jumbotron text-left"><b>
This tutorial describes how to use the SMT toolbox to do some Bayesian Optimization (EGO method) to solve unconstrained optimization problem
<div>
Rémy Priem and Nathalie BARTOLI ONERA/DTIS/M2CI - April 2020
<p class="alert alert-success" style="padding
Step1: Here, the training data are the points xdata=[0,7,25].
Step2: Build the GP model with a square exponential kernel with SMT toolbox knowing $(x_{data}, y_{data})$.
Step3: Bayesian optimization is defined by Jonas Mockus in (Mockus, 1975) as an optimization technique based upon the minimization of the expected deviation from the extremum of the studied function.
The objective function is treated as a black-box function. A Bayesian strategy sees the objective as a random function and places a prior over it. The prior captures our beliefs about the behavior of the function. After gathering the function evaluations, which are treated as data, the prior is updated to form the posterior distribution over the objective function. The posterior distribution, in turn, is used to construct an acquisition function (often also referred to as infill sampling criterion) that determines what the next query point should be.
One of the earliest bodies of work on Bayesian optimisation that we are aware of is (Kushner, 1962 ; Kushner, 1964). Kushner used Wiener processes for one-dimensional problems. Kushner’s decision model was based on maximizing the probability of improvement, and included a parameter that controlled the trade-off between ‘more global’ and ‘more local’ optimization, in the same spirit as the Exploration/Exploitation trade-off.
Meanwhile, in the former Soviet Union, Mockus and colleagues developed a multidimensional Bayesian optimization method using linear combinations of Wiener fields, some of which was published in English in (Mockus, 1975). This paper also describes an acquisition function that is based on myopic expected improvement of the posterior, which has been widely adopted in Bayesian optimization as the Expected Improvement function.
In 1998, Jones used Gaussian processes together with the expected improvement function to successfully perform derivative-free optimization and experimental design through an algorithm called Efficient Global Optimization, or EGO (Jones, 1998).
Efficient Global Optimization
In what follows, we describe the Efficient Global Optimization (EGO) algorithm, as published in (Jones, 1998).
Let $F$ be an expensive black-box function to be minimized. We sample $F$ at the different locations $X = {x_1, x_2,\ldots,x_n}$ yielding the responses $Y = {y_1, y_2,\ldots,y_n}$. We build a Kriging model (also called Gaussian process) with a mean function $\mu$ and a variance function $\sigma^{2}$.
The next step is to compute the criterion EI. To do this, let us denote
Step4: Now we compute the EGO method and compare it to other infill criteria
- SBO (surrogate based optimization)
Step5: ## Use the EGO from SMT
Step7: Choose your criterion to perform the optimization
Step8: We can now compare the results by using only the mean information provided by surrogate model approximation | Python Code:
import numpy as np
%matplotlib notebook
import matplotlib.pyplot as plt
plt.ion()
def fun(point):
return np.atleast_2d((point-3.5)*np.sin((point-3.5)/(np.pi)))
X_plot = np.atleast_2d(np.linspace(0, 25, 10000)).T
Y_plot = fun(X_plot)
lines = []
fig = plt.figure(figsize=[5,5])
ax = fig.add_subplot(111)
true_fun, = ax.plot(X_plot,Y_plot)
lines.append(true_fun)
ax.set_title('$x \sin{x}$ function')
ax.set_xlabel('x')
ax.set_ylabel('y')
plt.show()
#dimension of the problem
ndim = 1
Explanation: <div class="jumbotron text-left"><b>
This tutorial describes how to use the SMT toolbox to do some Bayesian Optimization (EGO method) to solve unconstrained optimization problem
<div>
Rémy Priem and Nathalie BARTOLI ONERA/DTIS/M2CI - April 2020
<p class="alert alert-success" style="padding:1em">
To use SMT models, please follow this link : https://github.com/SMTorg/SMT/blob/master/README.md. The documentation is available here: http://smt.readthedocs.io/en/latest/
</p>
The reference paper is available
here https://www.sciencedirect.com/science/article/pii/S0965997818309360?via%3Dihub
or as a preprint: http://mdolab.engin.umich.edu/content/python-surrogate-modeling-framework-derivatives
<div class="alert alert-info fade in" id="d110">
<p>In this notebook, two examples are presented to illustrate Bayesian Optimization</p>
<ol> - a 1D-example (xsinx function) where the algorithm is explicitely given and the use of different criteria is presented </ol>
<ol> - a 2D-exemple (Rosenbrock function) where the EGO algorithm from SMT is used </ol>
</div>
# Bayesian Optimization
End of explanation
x_data = np.atleast_2d([0,7,25]).T
y_data = fun(x_data)
Explanation: Here, the training data are the points xdata=[0,7,25].
End of explanation
from smt.surrogate_models import KPLS, KRG, KPLSK
########### The Kriging model
# The variable 'theta0' is a list of length ndim.
t = KRG(theta0=[1e-2]*ndim,print_prediction = False, corr='squar_exp')
#Training
t.set_training_values(x_data,y_data)
t.train()
# Prediction of the points for the plot
Y_GP_plot = t.predict_values(X_plot)
Y_GP_plot_var = t.predict_variances(X_plot)
fig = plt.figure(figsize=[5,5])
ax = fig.add_subplot(111)
true_fun, = ax.plot(X_plot,Y_plot)
data, = ax.plot(x_data,y_data,linestyle='',marker='o')
gp, = ax.plot(X_plot,Y_GP_plot,linestyle='--',color='g')
sig_plus = Y_GP_plot+3*np.sqrt(Y_GP_plot_var)
sig_moins = Y_GP_plot-3*np.sqrt(Y_GP_plot_var)
un_gp = ax.fill_between(X_plot.T[0],sig_plus.T[0],sig_moins.T[0],alpha=0.3,color='g')
lines = [true_fun,data,gp,un_gp]
ax.set_title('$x \sin{x}$ function')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.legend(lines,['True function','Data','GPR prediction','99 % confidence'])
plt.show()
Explanation: Build the GP model with a square exponential kernel with SMT toolbox knowing $(x_{data}, y_{data})$.
End of explanation
from scipy.stats import norm
from scipy.optimize import minimize
def EI(GP,points,f_min):
pred = GP.predict_values(points)
var = GP.predict_variances(points)
args0 = (f_min - pred)/np.sqrt(var)
args1 = (f_min - pred)*norm.cdf(args0)
args2 = np.sqrt(var)*norm.pdf(args0)
if var.size == 1 and var == 0.0: # can be use only if one point is computed
return 0.0
ei = args1 + args2
return ei
Y_GP_plot = t.predict_values(X_plot)
Y_GP_plot_var = t.predict_variances(X_plot)
Y_EI_plot = EI(t,X_plot,np.min(y_data))
fig = plt.figure(figsize=[10,10])
ax = fig.add_subplot(111)
true_fun, = ax.plot(X_plot,Y_plot)
data, = ax.plot(x_data,y_data,linestyle='',marker='o')
gp, = ax.plot(X_plot,Y_GP_plot,linestyle='--',color='g')
sig_plus = Y_GP_plot+3*np.sqrt(Y_GP_plot_var)
sig_moins = Y_GP_plot-3*np.sqrt(Y_GP_plot_var)
un_gp = ax.fill_between(X_plot.T[0],sig_plus.T[0],sig_moins.T[0],alpha=0.3,color='g')
ax1 = ax.twinx()
ei, = ax1.plot(X_plot,Y_EI_plot,color='red')
lines = [true_fun,data,gp,un_gp,ei]
ax.set_title('$x \sin{x}$ function')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax1.set_ylabel('ei')
fig.legend(lines,['True function','Data','GPR prediction','99 % confidence','Expected Improvement'],loc=[0.13,0.64])
plt.show()
Explanation: Bayesian optimization is defined by Jonas Mockus in (Mockus, 1975) as an optimization technique based upon the minimization of the expected deviation from the extremum of the studied function.
The objective function is treated as a black-box function. A Bayesian strategy sees the objective as a random function and places a prior over it. The prior captures our beliefs about the behavior of the function. After gathering the function evaluations, which are treated as data, the prior is updated to form the posterior distribution over the objective function. The posterior distribution, in turn, is used to construct an acquisition function (often also referred to as infill sampling criterion) that determines what the next query point should be.
One of the earliest bodies of work on Bayesian optimisation that we are aware of is (Kushner, 1962 ; Kushner, 1964). Kushner used Wiener processes for one-dimensional problems. Kushner’s decision model was based on maximizing the probability of improvement, and included a parameter that controlled the trade-off between ‘more global’ and ‘more local’ optimization, in the same spirit as the Exploration/Exploitation trade-off.
Meanwhile, in the former Soviet Union, Mockus and colleagues developed a multidimensional Bayesian optimization method using linear combinations of Wiener fields, some of which was published in English in (Mockus, 1975). This paper also describes an acquisition function that is based on myopic expected improvement of the posterior, which has been widely adopted in Bayesian optimization as the Expected Improvement function.
In 1998, Jones used Gaussian processes together with the expected improvement function to successfully perform derivative-free optimization and experimental design through an algorithm called Efficient Global Optimization, or EGO (Jones, 1998).
Efficient Global Optimization
In what follows, we describe the Efficient Global Optimization (EGO) algorithm, as published in (Jones, 1998).
Let $F$ be an expensive black-box function to be minimized. We sample $F$ at the different locations $X = {x_1, x_2,\ldots,x_n}$ yielding the responses $Y = {y_1, y_2,\ldots,y_n}$. We build a Kriging model (also called Gaussian process) with a mean function $\mu$ and a variance function $\sigma^{2}$.
The next step is to compute the criterion EI. To do this, let us denote:
$$f_{min} = \min {y_1, y_2,\ldots,y_n}.$$
The Expected Improvement funtion (EI) can be expressed:
$$E[I(x)] = E[\max(f_{min}-Y, 0)],$$
where $Y$ is the random variable following the distribution $\mathcal{N}(\mu(x), \sigma^{2}(x))$.
By expressing the right-hand side of EI expression as an integral, and applying some tedious integration by parts, one can express the expected improvement in closed form:
$$
E[I(x)] = (f_{min} - \mu(x))\Phi\left(\frac{f_{min} - \mu(x)}{\sigma(x)}\right) + \sigma(x) \phi\left(\frac{f_{min} - \mu(x)}{\sigma(x)}\right)
$$
where $\Phi(\cdot)$ and $\phi(\cdot)$ are respectively the cumulative and probability density functions of $\mathcal{N}(0,1)$.
Next, we determine our next sampling point as :
\begin{align}
x_{n+1} = \arg \max_{x} \left(E[I(x)]\right)
\end{align}
We then test the response $y_{n+1}$ of our black-box function $F$ at $x_{n+1}$, rebuild the model taking into account the new information gained, and research the point of maximum expected improvement again.
We summarize here the EGO algorithm:
EGO(F, $n_{iter}$) # Find the best minimum of $\operatorname{F}$ in $n_{iter}$ iterations
For ($i=0:n_{iter}$)
$mod = {model}(X, Y)$ # surrogate model based on sample vectors $X$ and $Y$
$f_{min} = \min Y$
$x_{i+1} = \arg \max {EI}(mod, f_{min})$ # choose $x$ that maximizes EI
$y_{i+1} = {F}(x_{i+1})$ # Probe the function at most promising point $x_{i+1}$
$X = [X,x_{i+1}]$
$Y = [Y,y_{i+1}]$
$i = i+1$
$f_{min} = \min Y$
Return : $f_{min}$ # This is the best known solution after $n_{iter}$ iterations
Now we want to optimize this function by using Bayesian Optimization and comparing
- Surrogate Based optimization (SBO)
- Expected Improvement criterion (EI)
In a first step we compute the EI criterion
End of explanation
#surrogate Based optimization: min the Surrogate model by using the mean mu
def SBO(GP,point):
res = GP.predict_values(point)
return res
#lower confidence bound optimization: minimize by using mu - 3*sigma
def LCB(GP,point):
pred = GP.predict_values(point)
var = GP.predict_variances(point)
res = pred-3.*np.sqrt(var)
return res
IC = 'EI'
import matplotlib.image as mpimg
import matplotlib.animation as animation
from IPython.display import HTML
plt.ioff()
x_data = np.atleast_2d([0,7,25]).T
y_data = fun(x_data)
n_iter = 15
gpr = KRG(theta0=[1e-2]*ndim,print_global = False)
for k in range(n_iter):
x_start = np.atleast_2d(np.random.rand(20)*25).T
f_min_k = np.min(y_data)
gpr.set_training_values(x_data,y_data)
gpr.train()
if IC == 'EI':
obj_k = lambda x: -EI(gpr,np.atleast_2d(x),f_min_k)[:,0]
elif IC =='SBO':
obj_k = lambda x: SBO(gpr,np.atleast_2d(x))
elif IC == 'LCB':
obj_k = lambda x: LCB(gpr,np.atleast_2d(x))
opt_all = np.array([minimize(lambda x: float(obj_k(x)), x_st, method='SLSQP', bounds=[(0,25)]) for x_st in x_start])
opt_success = opt_all[[opt_i['success'] for opt_i in opt_all]]
obj_success = np.array([opt_i['fun'] for opt_i in opt_success])
ind_min = np.argmin(obj_success)
opt = opt_success[ind_min]
x_et_k = opt['x']
y_et_k = fun(x_et_k)
y_data = np.atleast_2d(np.append(y_data,y_et_k)).T
x_data = np.atleast_2d(np.append(x_data,x_et_k)).T
Y_GP_plot = gpr.predict_values(X_plot)
Y_GP_plot_var = gpr.predict_variances(X_plot)
Y_EI_plot = -EI(gpr,X_plot,f_min_k)
fig = plt.figure(figsize=[10,10])
ax = fig.add_subplot(111)
if IC == 'LCB' or IC == 'SBO':
ei, = ax.plot(X_plot,Y_EI_plot,color='red')
else:
ax1 = ax.twinx()
ei, = ax1.plot(X_plot,Y_EI_plot,color='red')
true_fun, = ax.plot(X_plot,Y_plot)
data, = ax.plot(x_data[0:k+3],y_data[0:k+3],linestyle='',marker='o',color='orange')
opt, = ax.plot(x_data[k+3],y_data[k+3],linestyle='',marker='*',color='r')
gp, = ax.plot(X_plot,Y_GP_plot,linestyle='--',color='g')
sig_plus = Y_GP_plot+3*np.sqrt(Y_GP_plot_var)
sig_moins = Y_GP_plot-3*np.sqrt(Y_GP_plot_var)
un_gp = ax.fill_between(X_plot.T[0],sig_plus.T[0],sig_moins.T[0],alpha=0.3,color='g')
lines = [true_fun,data,gp,un_gp,opt,ei]
ax.set_title('$x \sin{x}$ function')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.legend(lines,['True function','Data','GPR prediction','99 % confidence','Next point to Evaluate','Infill Criteria'])
plt.savefig('Optimisation %d' %k)
plt.close(fig)
ind_best = np.argmin(y_data)
x_opt = x_data[ind_best]
y_opt = y_data[ind_best]
print('Results : X = %s, Y = %s' %(x_opt,y_opt))
fig = plt.figure(figsize=[10,10])
ax = plt.gca()
ax.axes.get_xaxis().set_visible(False)
ax.axes.get_yaxis().set_visible(False)
ims = []
for k in range(n_iter):
image_pt = mpimg.imread('Optimisation %d.png' %k)
im = plt.imshow(image_pt)
ims.append([im])
ani = animation.ArtistAnimation(fig, ims,interval=500)
HTML(ani.to_jshtml())
Explanation: Now we compute the EGO method and compare it to other infill criteria
- SBO (surrogate based optimization): directly using the prediction of the surrogate model ($\mu$)
- LCB (Lower Confidence bound): using the confidence interval : $\mu -3 \times \sigma$
- EI for expected Improvement (EGO)
End of explanation
from smt.applications.ego import EGO
from smt.sampling_methods import LHS
Explanation: ## Use the EGO from SMT
End of explanation
#define the rosenbrock function
def rosenbrock(x):
Evaluate objective and constraints for the Rosenbrock test case:
n,dim = x.shape
#parameters:
Opt =[]
Opt_point_scalar = 1
#construction of O vector
for i in range(0, dim):
Opt.append(Opt_point_scalar)
#Construction of Z vector
Z= np.zeros((n,dim))
for i in range(0,dim):
Z[:,i] = (x[:,i]-Opt[i]+1)
#Sum
sum1 = np.zeros((n,1))
for i in range(0,dim-1):
sum1[:,0] += 100*(((Z[:,i]**2)-Z[:,i+1])**2)+((Z[:,i]-1)**2)
return sum1
xlimits=np.array([[-2,2], [-2,2]])
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
from matplotlib import cm
#To plot the Rosenbrock function
num_plot = 50 #to plot rosenbrock
x = np.linspace(xlimits[0][0],xlimits[0][1],num_plot)
res = []
for x0 in x:
for x1 in x:
res.append(rosenbrock(np.array([[x0,x1]])))
res = np.array(res)
res = res.reshape((50,50)).T
X,Y = np.meshgrid(x,x)
fig = plt.figure(figsize=[10,10])
ax = fig.gca(projection='3d')
surf = ax.plot_surface(X, Y, res, cmap=cm.coolwarm,
linewidth=0, antialiased=False,alpha=0.5)
plt.title(' Rosenbrock function')
plt.xlabel('x1')
plt.ylabel('x2')
plt.show()
criterion='EI' #'EI' or 'SBO' or 'LCB'
#number of points in the initial DOE
ndoe = 10 #(at least ndim+1)
#number of iterations with EGO
n_iter = 50
#Build the initial DOE, add the random_state option to have the reproducibility of the LHS points
sampling = LHS(xlimits=xlimits, random_state=1)
xdoe = sampling(ndoe)
#EGO call
ego = EGO(n_iter=n_iter, criterion=criterion, xdoe=xdoe, xlimits=xlimits)
x_opt, y_opt, ind_best, x_data, y_data = ego.optimize(fun=rosenbrock)
print('Xopt for Rosenbrock ', x_opt,y_opt, ' obtained using EGO criterion = ', criterion )
print('Check if the optimal point is Xopt= (1,1) with the Y value=0')
print('if not you can increase the number of iterations with n_iter but the CPU will increase also.')
print('---------------------------')
#To plot the Rosenbrock function
#3D plot
x = np.linspace(xlimits[0][0],xlimits[0][1],num_plot)
res = []
for x0 in x:
for x1 in x:
res.append(rosenbrock(np.array([[x0,x1]])))
res = np.array(res)
res = res.reshape((50,50)).T
X,Y = np.meshgrid(x,x)
fig = plt.figure(figsize=(10, 10))
ax = fig.gca(projection='3d')
surf = ax.plot_surface(X, Y, res, cmap=cm.coolwarm,
linewidth=0, antialiased=False,alpha=0.5)
#to add the points provided by EGO
ax.scatter(x_data[:ndoe,0],x_data[:ndoe,1],y_data[:ndoe],zdir='z',marker = '.',c='k',s=100, label='Initial DOE')
ax.scatter(x_data[ndoe:,0],x_data[ndoe:,1],y_data[ndoe:],zdir='z',marker = 'x',c='r', s=100, label= 'Added point')
ax.scatter(x_opt[0],x_opt[1],y_opt,zdir='z',marker = '*',c='g', s=100, label= 'EGO optimal point')
plt.title(' Rosenbrock function during EGO algorithm')
plt.xlabel('x1')
plt.ylabel('x2')
plt.legend()
plt.show()
#2D plot
#to add the points provided by EGO
plt.plot(x_data[:ndoe,0],x_data[:ndoe,1],'.', label='Initial DOE')
plt.plot(x_data[ndoe:,0],x_data[ndoe:,1],'x', c='r', label='Added point')
plt.plot(x_opt[:1],x_opt[1:],'*',c='g', label= 'EGO optimal point')
plt.plot([1], [1],'*',c='m', label= 'Optimal point')
plt.title(' Rosenbrock function during EGO algorithm')
plt.xlabel('x1')
plt.ylabel('x2')
plt.legend()
plt.show()
Explanation: Choose your criterion to perform the optimization: EI, SBO or LCB
Choose the size of the initial DOE
Choose the number of EGO iterations
Try with a 2D function : 2D Rosenbrock function
Rosenbrock Function in dimension N
$$
f(\mathbf{x}) = \sum_{i=1}^{N-1} 100 (x_{i+1} - x_i^2 )^2 + (1-x_i)^2 \quad \mbox{where} \quad \mathbf{x} = [x_1, \ldots, x_N] \in \mathbb{R}^N.
$$
$$x_i \in [-2,2]$$
End of explanation
criterion='SBO' #'EI' or 'SBO' or 'LCB'
#number of points in the initial DOE
ndoe = 10 #(at least ndim+1)
#number of iterations with EGO
n_iter = 50
#Build the initial DOE
sampling = LHS(xlimits=xlimits, random_state=1)
xdoe = sampling(ndoe)
#EGO call
ego = EGO(n_iter=n_iter, criterion=criterion, xdoe=xdoe, xlimits=xlimits)
x_opt, y_opt, ind_best, x_data, y_data = ego.optimize(fun=rosenbrock)
print('Xopt for Rosenbrock ', x_opt, y_opt, ' obtained using EGO criterion = ', criterion)
print('Check if the optimal point is Xopt=(1,1) with the Y value=0')
print('---------------------------')
Explanation: We can now compare the results by using only the mean information provided by surrogate model approximation
End of explanation |
2,335 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Self-Driving Car Engineer Nanodegree
Project
Step1: Read in an Image
Step9: Ideas for Lane Detection Pipeline
Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are
Step10: Test Images
Build your pipeline to work on the images in the directory "test_images"
You should make sure your pipeline works well on these images before you try the videos.
Step11: Build a Lane Finding Pipeline
Build the pipeline and run your solution on all test_images. Make copies into the test_images directory, and you can use the images in your writeup report.
Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
Step12: Let's try the one with the solid white lane on the right first ...
Step14: Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
Step16: Improve the draw_lines() function
At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".
Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.
Now for the one with the solid yellow lane on the left. This one's more tricky!
Step18: Writeup and Submission
If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a link to the writeup template file.
Optional Challenge
Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project! | Python Code:
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline
Explanation: Self-Driving Car Engineer Nanodegree
Project: Finding Lane Lines on the Road
In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a write up template that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the rubric points for this project.
Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".
The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Tranform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.
<figure>
<img src="line-segments-example.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p>
</figcaption>
</figure>
<p></p>
<figure>
<img src="laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p>
</figcaption>
</figure>
Run the cell below to import some packages. If you get an import error for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, see this forum post for more troubleshooting tips.
Import Packages
End of explanation
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
Explanation: Read in an Image
End of explanation
import math
def grayscale(img):
Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
(assuming your grayscaled image is called 'gray')
you should call plt.imshow(gray, cmap='gray')
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Or use BGR2GRAY if you read an image with cv2.imread()
# return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
Applies the Canny transform
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
Applies a Gaussian Noise kernel
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def draw_lines(img, lines, roi_top, roi_bottom, min_slope, max_slope, color=[255, 0, 0], thickness=2):
NOTE: this is the function you might want to use as a starting point once you want to
average/extrapolate the line segments you detect to map out the full
extent of the lane (going from the result shown in raw-lines-example.mp4
to that shown in P1_example.mp4).
Think about things like separating line segments by their
slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
line vs. the right line. Then, you can average the position of each of
the lines and extrapolate to the top and bottom of the lane.
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
If you want to make the lines semi-transparent, think about combining
this function with the weighted_img() function below
#Initialize variables
sum_fit_left = 0
sum_fit_right = 0
number_fit_left = 0
number_fit_right = 0
for line in lines:
for x1,y1,x2,y2 in line:
#find the slope and offset of each line found (y=mx+b)
fit = np.polyfit((x1, x2), (y1, y2), 1)
#limit the slope to plausible left lane values and compute the mean slope/offset
if fit[0] >= min_slope and fit[0] <= max_slope:
sum_fit_left = fit + sum_fit_left
number_fit_left = number_fit_left + 1
#limit the slope to plausible right lane values and compute the mean slope/offset
if fit[0] >= -max_slope and fit[0] <= -min_slope:
sum_fit_right = fit + sum_fit_right
number_fit_right = number_fit_right + 1
#avoid division by 0
if number_fit_left > 0:
#Compute the mean of all fitted lines
mean_left_fit = sum_fit_left/number_fit_left
#Given two y points (bottom of image and top of region of interest), compute the x coordinates
x_top_left = int((roi_top - mean_left_fit[1])/mean_left_fit[0])
x_bottom_left = int((roi_bottom - mean_left_fit[1])/mean_left_fit[0])
#Draw the line
cv2.line(img, (x_bottom_left,roi_bottom), (x_top_left,roi_top), [255, 0, 0], 5)
else:
mean_left_fit = (0,0)
if number_fit_right > 0:
#Compute the mean of all fitted lines
mean_right_fit = sum_fit_right/number_fit_right
#Given two y points (bottom of image and top of region of interest), compute the x coordinates
x_top_right = int((roi_top - mean_right_fit[1])/mean_right_fit[0])
x_bottom_right = int((roi_bottom - mean_right_fit[1])/mean_right_fit[0])
#Draw the line
cv2.line(img, (x_bottom_right,roi_bottom), (x_top_right,roi_top), [255, 0, 0], 5)
else:
fit_right_mean = (0,0)
def hough_lines(img, roi_top, roi_bottom, min_slope, max_slope, rho, theta, threshold, min_line_len, max_line_gap):
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
draw_lines(line_img, lines, roi_top, roi_bottom, min_slope, max_slope, color=[255, 0, 0], thickness=4)
return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., λ=0.):
`img` is the output of the hough_lines(), An image with lines drawn on it.
Should be a blank image (all black) with lines drawn on it.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + λ
NOTE: initial_img and img must be the same shape!
return cv2.addWeighted(initial_img, α, img, β, λ)
Explanation: Ideas for Lane Detection Pipeline
Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:
cv2.inRange() for color selection
cv2.fillPoly() for regions selection
cv2.line() to draw lines on an image given endpoints
cv2.addWeighted() to coadd / overlay two images
cv2.cvtColor() to grayscale or change color
cv2.imwrite() to output images to file
cv2.bitwise_and() to apply a mask to an image
Check out the OpenCV documentation to learn about these and discover even more awesome functionality!
Helper Functions
Below are some helper functions to help get you started. They should look familiar from the lesson!
End of explanation
import os
test_images = os.listdir("test_images/")
Explanation: Test Images
Build your pipeline to work on the images in the directory "test_images"
You should make sure your pipeline works well on these images before you try the videos.
End of explanation
# TODO: Build your pipeline that will draw lane lines on the test_images
# then save them to the test_images directory.
def process_image1(img):
#Apply greyscale
gray_img = grayscale(img)
# Define a kernel size and Apply Gaussian blur
kernel_size = 5
blur_img = gaussian_blur(gray_img, kernel_size)
#Apply the Canny transform
low_threshold = 50
high_threshold = 150
canny_img = canny(blur_img, low_threshold, high_threshold)
#Region of interest (roi) horizontal percentages
roi_hor_perc_top_left = 0.4675
roi_hor_perc_top_right = 0.5375
roi_hor_perc_bottom_left = 0.11
roi_hor_perc_bottom_right = 0.95
#Region of interest vertical percentages
roi_vert_perc = 0.5975
#Apply a region of interest mask of the image
vertices = np.array([[(int(roi_hor_perc_bottom_left*img.shape[1]),img.shape[0]), (int(roi_hor_perc_top_left*img.shape[1]), int(roi_vert_perc*img.shape[0])), (int(roi_hor_perc_top_right*img.shape[1]), int(roi_vert_perc*img.shape[0])), (int(roi_hor_perc_bottom_right*img.shape[1]),img.shape[0])]], dtype=np.int32)
croped_img = region_of_interest(canny_img,vertices)
# Define the Hough img parameters
rho = 2 # distance resolution in pixels of the Hough grid
theta = np.pi/180 # angular resolution in radians of the Hough grid
threshold = 15 # minimum number of votes (intersections in Hough grid cell)
min_line_length = 40 # minimum number of pixels making up a line
max_line_gap = 20 # maximum gap in pixels between connectable line segments
min_slope = 0.5 # minimum line slope
max_slope = 0.8 # maximum line slope
# Apply the Hough transform to get an image and the lines
hough_img = hough_lines(croped_img, int(roi_vert_perc*img.shape[0]), img.shape[0], min_slope, max_slope, rho, theta, threshold, min_line_length, max_line_gap)
# Return the image of the lines blended with the original
return weighted_img(img, hough_img, 0.7, 1.0)
#prepare directory to receive processed images
newpath = 'test_images/processed'
if not os.path.exists(newpath):
os.makedirs(newpath)
for file in test_images:
# skip files starting with processed
if file.startswith('processed'):
continue
image = mpimg.imread('test_images/' + file)
processed_img = process_image1(image)
#Extract file name
base = os.path.splitext(file)[0]
#break
mpimg.imsave('test_images/processed/processed-' + base +'.png', processed_img, format = 'png', cmap = plt.cm.gray)
print("Processed ", file)
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(img):
#Apply greyscale
gray_img = grayscale(img)
# Define a kernel size and Apply Gaussian blur
kernel_size = 5
blur_img = gaussian_blur(gray_img, kernel_size)
#Apply the Canny transform
low_threshold = 50
high_threshold = 150
canny_img = canny(blur_img, low_threshold, high_threshold)
#Region of interest (roi) horizontal percentages
roi_hor_perc_top_left = 0.4675
roi_hor_perc_top_right = 0.5375
roi_hor_perc_bottom_left = 0.11
roi_hor_perc_bottom_right = 0.95
#Region of interest vertical percentages
roi_vert_perc = 0.5975
#Apply a region of interest mask of the image
vertices = np.array([[(int(roi_hor_perc_bottom_left*img.shape[1]),img.shape[0]), (int(roi_hor_perc_top_left*img.shape[1]), int(roi_vert_perc*img.shape[0])), (int(roi_hor_perc_top_right*img.shape[1]), int(roi_vert_perc*img.shape[0])), (int(roi_hor_perc_bottom_right*img.shape[1]),img.shape[0])]], dtype=np.int32)
croped_img = region_of_interest(canny_img,vertices)
# Define the Hough img parameters
rho = 2 # distance resolution in pixels of the Hough grid
theta = np.pi/180 # angular resolution in radians of the Hough grid
threshold = 15 # minimum number of votes (intersections in Hough grid cell)
min_line_length = 40 # minimum number of pixels making up a line
max_line_gap = 20 # maximum gap in pixels between connectable line segments
min_slope = 0.5 # minimum line slope
max_slope = 0.8 # maximum line slope
# Apply the Hough transform to get an image and the lines
hough_img = hough_lines(croped_img, int(roi_vert_perc*img.shape[0]), img.shape[0], min_slope, max_slope, rho, theta, threshold, min_line_length, max_line_gap)
# Return the image of the lines blended with the original
return weighted_img(img, hough_img, 0.7, 1.0)
Explanation: Build a Lane Finding Pipeline
Build the pipeline and run your solution on all test_images. Make copies into the test_images directory, and you can use the images in your writeup report.
Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
End of explanation
white_output = 'white.mp4'
clip1 = VideoFileClip("solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
Explanation: Let's try the one with the solid white lane on the right first ...
End of explanation
HTML(
<video width="960" height="540" controls>
<source src="{0}" >
</video>
.format(white_output))
Explanation: Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
End of explanation
yellow_output = 'yellow.mp4'
clip2 = VideoFileClip('solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML(
<video width="960" height="540" controls>
<source src="{0}">
</video>
.format(yellow_output))
Explanation: Improve the draw_lines() function
At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".
Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.
Now for the one with the solid yellow lane on the left. This one's more tricky!
End of explanation
challenge_output = 'extra.mp4'
clip2 = VideoFileClip('challenge.mp4')
challenge_clip = clip2.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML(
<video width="960" height="540" controls>
<source src="{0}">
</video>
.format(challenge_output))
Explanation: Writeup and Submission
If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a link to the writeup template file.
Optional Challenge
Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
End of explanation |
2,336 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compute source power using DICS beamfomer
Compute a Dynamic Imaging of Coherent Sources (DICS) [1]_ filter from
single-trial activity to estimate source power across a frequency band.
References
.. [1] Gross et al. Dynamic imaging of coherent sources
Step1: Reading the raw data
Step2: Computing the cross-spectral density matrix at 4 evenly spaced frequencies
from 6 to 10 Hz. We use a decim value of 20 to speed up the computation in
this example at the loss of accuracy.
<div class="alert alert-danger"><h4>Warning</h4><p>The use of several sensor types with the DICS beamformer is
not heavily tested yet. Here we use verbose='error' to
suppress a warning along these lines.</p></div> | Python Code:
# Author: Marijn van Vliet <[email protected]>
# Roman Goj <[email protected]>
# Denis Engemann <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import mne
from mne.datasets import sample
from mne.time_frequency import csd_morlet
from mne.beamformer import make_dics, apply_dics_csd
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
subjects_dir = data_path + '/subjects'
Explanation: Compute source power using DICS beamfomer
Compute a Dynamic Imaging of Coherent Sources (DICS) [1]_ filter from
single-trial activity to estimate source power across a frequency band.
References
.. [1] Gross et al. Dynamic imaging of coherent sources: Studying neural
interactions in the human brain. PNAS (2001) vol. 98 (2) pp. 694-699
End of explanation
raw = mne.io.read_raw_fif(raw_fname)
raw.info['bads'] = ['MEG 2443'] # 1 bad MEG channel
# Set picks
picks = mne.pick_types(raw.info, meg=True, eeg=False, eog=False,
stim=False, exclude='bads')
# Read epochs
event_id, tmin, tmax = 1, -0.2, 0.5
events = mne.read_events(event_fname)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
picks=picks, baseline=(None, 0), preload=True,
reject=dict(grad=4000e-13, mag=4e-12))
evoked = epochs.average()
# Read forward operator
forward = mne.read_forward_solution(fname_fwd)
Explanation: Reading the raw data:
End of explanation
csd = csd_morlet(epochs, tmin=0, tmax=0.5, decim=20,
frequencies=np.linspace(6, 10, 4),
n_cycles=2.5) # short signals, must live with few cycles
# Compute DICS spatial filter and estimate source power.
filters = make_dics(epochs.info, forward, csd, reg=0.5, verbose='error')
print(filters)
stc, freqs = apply_dics_csd(csd, filters)
message = 'DICS source power in the 8-12 Hz frequency band'
brain = stc.plot(surface='inflated', hemi='rh', subjects_dir=subjects_dir,
time_label=message)
Explanation: Computing the cross-spectral density matrix at 4 evenly spaced frequencies
from 6 to 10 Hz. We use a decim value of 20 to speed up the computation in
this example at the loss of accuracy.
<div class="alert alert-danger"><h4>Warning</h4><p>The use of several sensor types with the DICS beamformer is
not heavily tested yet. Here we use verbose='error' to
suppress a warning along these lines.</p></div>
End of explanation |
2,337 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sqlite3 and MySQL demo
With the excellent ipython-sql jupyter extension installed, it becomes very easy to connect to SQL database backends. This notebook demonstrates how to do this.
Note that this is a Python 2 notebook.
First, we need to activate the extension
Step1: There are warnings, but that's okay - this happens a lot these days due to the whole ipython/jupyter renaming process. You can ignore them.
Get a database
Using the bash shell (not a notebook!), follow the instructions at the SW Carpentry db lessons discussion page to get the survey.db file. This is a sqlite3 database.
I recommend following up with the rest of the instructions on that page to explore sqlite3.
Connecting to a Sqlite3 database
This part is easy, just connect like so (assuming the survey.db file is in the same directory as this notebook)
Step2: You should be able to execute all the standard SQL queries from the lesson here now. Note that you can also do this on the command line.
Note specialized sqlite3 commands like ".schema" might not work.
Connecting to a MySQL database
Now that you've explored the survey.db sample database with sqlite3, let's try working with mysql
Step3: note if you get an error about MySQLdb not being installed here, enter this back in your bash shell
Step4: Now that we've created the database week3demo, we need to tell MySQL that we want to use it
Step5: But there's nothing in it
Step6: Creating a table
From here we need to create a first table. Let's recreate the Person table from the SW Carpentry db lesson, topic 1.
Step7: Inserting data
Okay then, let's insert the sample data
Step8: Selecting data
Okay, now we're cooking. There's data in the Person table, so we can start to SELECT it.
Step9: Accessing data from Python
One of the great things about ipython-sql is it marshalls all the data into Python objects for you. For example, to get the result data into a Python object, grab it from _
Step10: You can even assign it to a Pandas dataframe
Step11: Cleaning up
If you were just doing a little exploring and wish to clean up, it's easy to get rid of tables and databases.
NOTE
Step12: And to get rid of a whole database, use DROP DATABASE | Python Code:
%load_ext sql
Explanation: Sqlite3 and MySQL demo
With the excellent ipython-sql jupyter extension installed, it becomes very easy to connect to SQL database backends. This notebook demonstrates how to do this.
Note that this is a Python 2 notebook.
First, we need to activate the extension:
End of explanation
%sql sqlite:///survey.db
%sql SELECT * FROM Person;
Explanation: There are warnings, but that's okay - this happens a lot these days due to the whole ipython/jupyter renaming process. You can ignore them.
Get a database
Using the bash shell (not a notebook!), follow the instructions at the SW Carpentry db lessons discussion page to get the survey.db file. This is a sqlite3 database.
I recommend following up with the rest of the instructions on that page to explore sqlite3.
Connecting to a Sqlite3 database
This part is easy, just connect like so (assuming the survey.db file is in the same directory as this notebook):
End of explanation
%sql mysql://mysqluser:mysqlpass@localhost/
Explanation: You should be able to execute all the standard SQL queries from the lesson here now. Note that you can also do this on the command line.
Note specialized sqlite3 commands like ".schema" might not work.
Connecting to a MySQL database
Now that you've explored the survey.db sample database with sqlite3, let's try working with mysql:
End of explanation
%sql CREATE DATABASE week3demo;
Explanation: note if you get an error about MySQLdb not being installed here, enter this back in your bash shell:
% sudo pip install mysql-python
If it asks for your password, it's "vagrant".
After doing this, try executing the above cell again. You should see:
u'Connected: mysqluser@'
...if it works.
Creating a database
Now that we're connected, let's create a database.
End of explanation
%sql USE week3demo;
Explanation: Now that we've created the database week3demo, we need to tell MySQL that we want to use it:
End of explanation
%sql SHOW TABLES;
Explanation: But there's nothing in it:
End of explanation
%%sql
CREATE TABLE Person
(ident CHAR(10),
personal CHAR(25),
family CHAR(25));
%sql SHOW TABLES;
%sql DESCRIBE Person;
Explanation: Creating a table
From here we need to create a first table. Let's recreate the Person table from the SW Carpentry db lesson, topic 1.
End of explanation
%%sql
INSERT INTO Person VALUES
("dyer", "William", "Dyer"),
("pb", "Frank", "Pabodie"),
("lake", "Anderson", "Lake"),
("roe", "Valentina", "Roerich"),
("danforth", "Frank", "Danforth")
;
Explanation: Inserting data
Okay then, let's insert the sample data:
End of explanation
%sql SELECT * FROM Person;
%sql SELECT * FROM Person WHERE personal = "Frank";
Explanation: Selecting data
Okay, now we're cooking. There's data in the Person table, so we can start to SELECT it.
End of explanation
result = _
print result
Explanation: Accessing data from Python
One of the great things about ipython-sql is it marshalls all the data into Python objects for you. For example, to get the result data into a Python object, grab it from _:
End of explanation
df = result.DataFrame()
df
Explanation: You can even assign it to a Pandas dataframe:
End of explanation
%sql DROP TABLE Person;
%sql SHOW TABLES;
Explanation: Cleaning up
If you were just doing a little exploring and wish to clean up, it's easy to get rid of tables and databases.
NOTE: these are permanent actions. Only do them if you know you don't need them any longer.
To get rid of a table, use DROP TABLE:
End of explanation
%sql DROP DATABASE week3demo;
%sql SHOW DATABASES;
Explanation: And to get rid of a whole database, use DROP DATABASE:
End of explanation |
2,338 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: What is Biomedical Data Commons?
Data Commons is an open knowledge graph of structured data. It contains statements about real world objects such as
* The genome assembly hg38 is a reference genome for the species Homo sapiens.
* For the hg38 genome assembly chr17 has 83,257,441 base pairs.
* BRCA1 genomic coordinates are chr17
Step2: Example
Step3: Example
Step4: Congratulations! You've found the basic information on all the genome assemblies and species currently supported by Biomedical Data Commons. As you can see we currently support data from 8 model organisms across 14 different genome assemblies.
Example
Step5: Identify RUNX1 Genomic Coordinates
Great, now we see the type of information known about RUNX1. To identify the genetic variants within the gene region we need to know what the chromosome and ther genomic coordinates of RUNX1. Let's grab that information using get_property_values.
Step6: Find All The Genetic Variants Within RUNX1
We found the coordinates of RUNX1 in the hg38 genome
Step7: Using SPARQL and get_property_values to find genetic variants in RUNX1
Like genes, genetic variants also point to the chromosome on which they reside. Their positions on the chromosome is specified by hg38GenomicPosition. Using query we will identify the dcids of all genetic variants on chr21. query accepts the following parameter
- query [query_string[, select]] – Returns the results of executing a SPARQL query on the Data Commons graph.
query parameter is a SPARQL query that quickly searches and returns the data that matches the query on multiple parameters. There is no limit to the number of values that can be returned by a SPARQL query. In our query here we will be specifying that we want all genetic variants on chr21. Then we will format all the returned genetic variant dcids into a list and use get_property_values again to filter for genetic variants whose hg38GenomicPosition is within RUNX1.
Step8: Identify Which Genetic Variants Are In Coding Regions
We've identified 360 genetic varaints within RUNX1. However, these can be in introns or exons. Let's further restrict genetic variant list to ones in the coding region of RUNX1. To do this we need to identify the positions of the exons of RUNX1. We know that RUNX1 has a property called rnaTranscript. Let's use get_property_values to find out more information on the RUNX1 transcript.
Step9: Identify the properties of RNATranscripts
There are several dcids associated with this property which are pointing to nodes of class RNATranscript. Let's verify that this is indeed the case and then check the properties of RNATrancript using get_property_values.
Step10: Explore the difference between codingCoordinates and exonCoordinates
There are two properties that may be useful for us in identifying which genetic variants are in the coding region of RUNX1. Let's figure out which one that we'd like to use moving forward by grabbing the values associated with both these properties using get_property_values.
Step11: Find the exon coordinates reported for all RNA transcripts of RUNX1
From the values of codingCoordinates and exonCoordinates we observe that coding coordinates contains the range of base pairs spanning the entire coding region of RUNX1 including introns. Whereas exonCoordinates reports the genomic coordinates of all exons of RUNX1. We want to find all genetic variants in exons of RUNX1, so going forward we want to grab the exonCoordinates of transcripts. There are multiple RNA transcripts reported for RUNX1. Let's make a unique list of all exonCoordinates recorded for RUNX1 using get_property_values.
Step12: Identify the genetic variants in the exon coding regions
Now that we know all the possible reported exon coordinates for RUNX1 we can identify which genetic variants are in exons. Note that RUNX1 has 4 - 9 variants depending on the isoform. Many of these exon coordinates from transcripts of different isoforms are overlapping with each other, but not exact resulting in 41 unique coordinate ranges to be observed. We are interested in genetic variants within any reported exon coordinates in this example and will therefore use them all for our next filtering step. For filtering by overlap in position please remember that the range of the coordinates is [) in which the first number, but not the last number is considered as inside of the range.
Step13: Filter genetic variants for ones that have been clinically studied
Great! We've identified 345 genetic variants in exons that are worth further consideration. Let's narrow our candidate list further by identifying genetic variants with clinical data associated with them (reported in ClinVar). We can do this by checking if the genetic variant has the property clinVarID using get_property_values. For ones that have been clinically reported we are going to grab the following additional clinical information for the genetic variant
Step14: Filter For Pathogenic Genetic Variants
There are 345 genetic variants in the exons of RUNX1 that have recorded clinical information on them. Filter out the ones that are benign to establish our final candidate list of genetic variants that effect the function of RUNX1. We'll do this by filtering our pandas dataframe with the clinical information on the genetic variants. | Python Code:
# Install datacommons
!pip install --upgrade --quiet datacommons
Explanation: <a href="https://colab.research.google.com/github/datacommonsorg/api-python/blob/master/notebooks/analyzing_genomic_data.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2022 Google LLC.
SPDX-License-Identifier: Apache-2.0
Analyzing Genomic Data with Biomedical Data Commons
Datacommons is intended for various data science tasks. This tutorial introduces the datacommons knowledge graph and discusses two tools to help integrate its data into your data science projects: (1) the datacommons browser and (2) the Python API. Before getting started, we will need to install the Python API package.
End of explanation
# Import Data Commons
import datacommons as dc
# Import other required libraries
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import pandas as pd
import requests
import json
Explanation: What is Biomedical Data Commons?
Data Commons is an open knowledge graph of structured data. It contains statements about real world objects such as
* The genome assembly hg38 is a reference genome for the species Homo sapiens.
* For the hg38 genome assembly chr17 has 83,257,441 base pairs.
* BRCA1 genomic coordinates are chr17:43,044,294-43,125,483 for genome assembly hg38.
In the graph, entities like the genome assembly hg38 are represented by nodes. Every node has a type corresponding to what the node represents. For example, Homo sapiens is a Species. Relations between entities are represented by edges between these nodes. For example, the statement "The genome assembly hg38 is a reference genome for the species Homo sapiens." is represented in the graph as two nodes: "hg38" and "HomoSapiens" with an edge labeled "is_species" pointing from "hg38" to "HomoSapiens". Data Commons follows the Schema.org data model and leverages schema.org schema to provide a common set of types and properties. To accomodate biological data this schema has been expanded to reflect the schema that has been used across biological databases created by the scientific community.
Data Commons Browser
The Data Commons browser provides a way to explore the data in a human-readable format. It is the best way to explore what is in Data Commons. Searching in the browser for an entity like BRACA1, takes you to a page about the entity, including properties like refSeqID and typeOfGene.
An important property for all entities is the dcid. The dcid (DataCommons identifier) is a unique identifier assigned to each entity in the knowledge graph. With this identifier, you will be able to search for and query information on the given entity in ways that we will discuss later. The dcid is listed at the top of the page next to "About: " and also in the list of properties.
Python API
The Python API provides functions for users to extract structured information from Data Commons programmatically and view them in different formats such as Python dict's and Pandas DataFrames. DataFrames allow access to all the data processing, analytical and visualization tools provided by packages such as Pandas, NumPy, SciPy, and Matplotlib. For more information check out the documentation on the Python API's modules.
Every notebook begins by loading the datacommons library as follows:
End of explanation
# Call get_property_values. The return value is a dict keyed by 'GenomeAssembly'.
genomeAssembly_dcids = dc.get_property_values(['GenomeAssembly'], 'typeOf', out=False)['GenomeAssembly']
# Display the frame
print(genomeAssembly_dcids)
Explanation: Example: Identify the Genome Assemblies Supported by Biomedical Data Commons
For this exercise we will identify the genome assemblies and their related species that are currently supported by Biomedical Data Commons. We will start by looking up the dcid for 'GenomeAssembly'.
Using get_property_value to Access Node Properties
Our first task for this tutorial will be to extract the genome assemblies from Data Commons using the Python API and view it in a Pandas DataFrame. For all properties, one can use get_property_values to get the associated values. Let's look up the dcid of GenomeAssembly. We would then like to initialize our Pandas dataframe for the dcid bio/GenomeAssembly.
For all properties, one can use get_property_values to get the associated values. We would like to know the instances of "GenomeAssembly" by getting the the typeOf instances that are oriented towards the "GenomeAssembly" identified by "bio/GenomeAssembly". get_property_values accepts the following parameters:
dcids - A list of dcids to get property values for.
prop - The property to get property values for.
out[=True] - An optional flag that indicates the property is oriented away from the given nodes if true.
value_type[=None] - An optional parameter which filters property values by the given type.
limit[=100] - An optional parameter which limits the total number of property values returned aggregated over all given nodes.
When the dcids are given as a Pandas Series, the returned list of property values is a Pandas Series where the i-th entry corresponds to property values associated with the i-th given dcid. Some properties, like containedInPlace, may have many property values. Consequently, the cells of the returned series will always contain a list of property values. Let's take a look:
End of explanation
# Intialize the Data Frame
df_genomeAssemblies = pd.DataFrame()
# Add genome assemblies name and dcid
df_genomeAssemblies['name'] = pd.Series(dc.get_property_values(genomeAssembly_dcids, 'name'))
df_genomeAssemblies.reset_index(level=0, inplace=True)
df_genomeAssemblies = df_genomeAssemblies.rename(columns={"index": "dcid"}).explode('name')
# Add Species dcid
df_genomeAssemblies['species_dcid'] = df_genomeAssemblies['dcid'].map(
dc.get_property_values(df_genomeAssemblies['dcid'], 'ofSpecies'))
df_genomeAssemblies = df_genomeAssemblies.explode('species_dcid')
# Add Species name
df_genomeAssemblies['species_name'] = df_genomeAssemblies['species_dcid'].map(
dc.get_property_values(df_genomeAssemblies['species_dcid'], 'name'))
df_genomeAssemblies = df_genomeAssemblies.explode('species_name')
print(df_genomeAssemblies)
Explanation: Example: List All Genome Assemblies in Human Readable Format
Let's continue learning about the genome assemblies supported by Biomedical Data Commons. We are next going to find the names and species of the genome assemblies that are associated with the list of dcids of genome assemblies that we found. We are going to display the information in a human readable table.
End of explanation
# Call get_property_labels
dc.get_property_labels(['bio/hg38_RUNX1'])
Explanation: Congratulations! You've found the basic information on all the genome assemblies and species currently supported by Biomedical Data Commons. As you can see we currently support data from 8 model organisms across 14 different genome assemblies.
Example: Analyze Genetic Variants within RUNX1
For this exercise, we will be analyzing genetic variants within the gene RUNX1. We will start by identifying the genetic variants within the gene region and then limit our list to those within the coding region of BRCA1 and then to those with known clinical significance. First, let's start by looking up the dcid for 'RUNX1'.
Note that 'Gene' defines the Data Commons type. Let's start by using get_property_labels to identify all the properties associated with RUNX1. get_property_labels accepts the following parameters:
- dcids [list of str] – A list of nodes identified by their dcids.
- out [bool, optional] – Whether or not the property points away from the given list of nodes.
The output of get_property_labels is a dict mapping dcids to lists of property labels. If out is True, then property labels correspond to edges directed away from given nodes. Otherwise, they correspond to edges directed towards the given nodes.
End of explanation
# Initialize the Data Frame
df_RUNX1 = pd.DataFrame({'gene': ['bio/hg38_RUNX1']})
# Grab the chromosome and genomic coordinates using get_property_values
df_RUNX1['chromosome'] = df_RUNX1['gene'].map(dc.get_property_values(df_RUNX1['gene'], 'inChromosome'))
df_RUNX1['genomicCoordinates'] = df_RUNX1['gene'].map(dc.get_property_values(df_RUNX1['gene'], 'genomicCoordinates'))
# display the genomic coordinates
print(df_RUNX1)
# define start and stop of RUNX1
start, stop = df_RUNX1['genomicCoordinates'][0][0].strip('Position').split('To')
start = int(start)
stop = int(stop)
Explanation: Identify RUNX1 Genomic Coordinates
Great, now we see the type of information known about RUNX1. To identify the genetic variants within the gene region we need to know what the chromosome and ther genomic coordinates of RUNX1. Let's grab that information using get_property_values.
End of explanation
# Identify all the properties of genetic variants
print(dc.get_property_values(['GeneticVariant'], 'domainIncludes', value_type='Property', out=False))
Explanation: Find All The Genetic Variants Within RUNX1
We found the coordinates of RUNX1 in the hg38 genome: chr21:34787800-35049344. Now that we know this let's find all the genetic variants of class GeneticVariant that occur in the region. But first let's establish the information that we know on genetic variants using get_property_values.
End of explanation
# Query for genetic variants associated with RUNX1
query = '''
SELECT ?gv ?p
WHERE {
?chr dcid "bio/hg38_chr21" .
?gv inChromosome ?chr .
?gv typeOf GeneticVariant .
?gv hg38GenomicPosition ?p
}
'''
print(query)
rows = dc.query(query)
dcids = set()
for row in rows:
dcids.add(row['?gv'])
dcids = list(dcids)
print(len(dcids))
# filter all genetic variants for the ones within RUNX1
RUNX1_geneticVariants = []
gen_positions = dc.get_property_values(dcids, 'hg38GenomicPosition')
data = pd.DataFrame(gen_positions).transpose()
data.reset_index(level=0, inplace=True)
data = data.rename(columns={"index": "dcid", 0: "position"})
data['position'] = pd.to_numeric(data['position'])
data = data[data['position'] >= start]
data = data[data['position'] < stop]
RUNX1_geneticVariants = list(set(data['dcid']))
# print the first few genetic variants
print(RUNX1_geneticVariants[:5])
# check how many genetic variants are in RUNX1
print(len(RUNX1_geneticVariants))
Explanation: Using SPARQL and get_property_values to find genetic variants in RUNX1
Like genes, genetic variants also point to the chromosome on which they reside. Their positions on the chromosome is specified by hg38GenomicPosition. Using query we will identify the dcids of all genetic variants on chr21. query accepts the following parameter
- query [query_string[, select]] – Returns the results of executing a SPARQL query on the Data Commons graph.
query parameter is a SPARQL query that quickly searches and returns the data that matches the query on multiple parameters. There is no limit to the number of values that can be returned by a SPARQL query. In our query here we will be specifying that we want all genetic variants on chr21. Then we will format all the returned genetic variant dcids into a list and use get_property_values again to filter for genetic variants whose hg38GenomicPosition is within RUNX1.
End of explanation
# Get property values for rnaTranscript of RUNX1
df_RUNX1['rnaTranscript'] = df_RUNX1['gene'].map(dc.get_property_values(df_RUNX1['gene'], 'hasRNATranscript'))
print(df_RUNX1)
Explanation: Identify Which Genetic Variants Are In Coding Regions
We've identified 360 genetic varaints within RUNX1. However, these can be in introns or exons. Let's further restrict genetic variant list to ones in the coding region of RUNX1. To do this we need to identify the positions of the exons of RUNX1. We know that RUNX1 has a property called rnaTranscript. Let's use get_property_values to find out more information on the RUNX1 transcript.
End of explanation
# Specify an rnaTranscript associated with RUNX1
RUNX1_transcript = df_RUNX1.iloc[0]['rnaTranscript'][0]
# Check what type the rnaTranscript is
print(dc.get_property_values([RUNX1_transcript], 'typeOf'))
# Identify properties of RNATranscript
dict_temp = dc.get_property_values(['RNATranscript'], 'domainIncludes', value_type='Property', out=False)
for prop in dict_temp['RNATranscript']:
print(prop)
Explanation: Identify the properties of RNATranscripts
There are several dcids associated with this property which are pointing to nodes of class RNATranscript. Let's verify that this is indeed the case and then check the properties of RNATrancript using get_property_values.
End of explanation
# Get the values for codingCoordinates and exonCoordiantes
temp_dict = dc.get_property_values([RUNX1_transcript], 'codingCoordinates')
print('Coding Coordinate Values:')
for value in temp_dict[RUNX1_transcript]:
print(value)
print('\n')
temp_dict = dc.get_property_values([RUNX1_transcript], 'exonCoordinates')
print('Exon Coordinate Values:')
for value in temp_dict[RUNX1_transcript]:
print(value)
Explanation: Explore the difference between codingCoordinates and exonCoordinates
There are two properties that may be useful for us in identifying which genetic variants are in the coding region of RUNX1. Let's figure out which one that we'd like to use moving forward by grabbing the values associated with both these properties using get_property_values.
End of explanation
# Initiate an empty set
RUNX1_exonCoordinates = set()
# Using get_property_values get all the exon coordinates for all rnaTranscripts
for rnaTranscript_dcid in df_RUNX1['rnaTranscript'][0]:
temp_dict = dc.get_property_values([rnaTranscript_dcid], 'exonCoordinates')
for item in temp_dict[rnaTranscript_dcid]:
RUNX1_exonCoordinates.add(item)
# check the firs few exons
RUNX1_exonCoordinates = list(RUNX1_exonCoordinates)
print(RUNX1_exonCoordinates[:5])
# check how many unique exon coordinates have been reported for RUNX1
print(len(RUNX1_exonCoordinates))
Explanation: Find the exon coordinates reported for all RNA transcripts of RUNX1
From the values of codingCoordinates and exonCoordinates we observe that coding coordinates contains the range of base pairs spanning the entire coding region of RUNX1 including introns. Whereas exonCoordinates reports the genomic coordinates of all exons of RUNX1. We want to find all genetic variants in exons of RUNX1, so going forward we want to grab the exonCoordinates of transcripts. There are multiple RNA transcripts reported for RUNX1. Let's make a unique list of all exonCoordinates recorded for RUNX1 using get_property_values.
End of explanation
# initialize empty list for storing genetic variants in RUNX1 exons
RUNX1_exon_geneticVariants = []
# for each genetic variant identify their hg38GenomicPosition and check if it's in an exon
for geneticVariant_dcid in RUNX1_geneticVariants:
position = int(dc.get_property_values([geneticVariant_dcid], 'hg38GenomicPosition')[geneticVariant_dcid][0])
for exonCoordinates in RUNX1_exonCoordinates:
start, stop = exonCoordinates.strip('Position').split('To')
start, stop = int(start), int(stop)
# filter for variants within RUNX1 exons
if position >= start and position < stop:
RUNX1_exon_geneticVariants.append(geneticVariant_dcid)
break
RUNX1_exon_geneticVariants = list(set(RUNX1_exon_geneticVariants))
# check how many of the genetic variants are in exons
print(len(RUNX1_exon_geneticVariants))
Explanation: Identify the genetic variants in the exon coding regions
Now that we know all the possible reported exon coordinates for RUNX1 we can identify which genetic variants are in exons. Note that RUNX1 has 4 - 9 variants depending on the isoform. Many of these exon coordinates from transcripts of different isoforms are overlapping with each other, but not exact resulting in 41 unique coordinate ranges to be observed. We are interested in genetic variants within any reported exon coordinates in this example and will therefore use them all for our next filtering step. For filtering by overlap in position please remember that the range of the coordinates is [) in which the first number, but not the last number is considered as inside of the range.
End of explanation
# initialize Empty Data Frame
column_names = ['name', 'clinVarAlleleID', 'diseaseName', 'clinicalSignificance', 'clinVarReviewStatus', 'dcid']
df_RUNX1_genVar_clinical = pd.DataFrame(columns=column_names)
for geneticVariant_dcid in RUNX1_exon_geneticVariants:
clinVarID = dc.get_property_values([geneticVariant_dcid], 'clinVarAlleleID')[geneticVariant_dcid][0]
if clinVarID.isdigit():
name = geneticVariant_dcid.strip('bio/')
diseaseName = dc.get_property_values( [geneticVariant_dcid], 'diseaseName')[geneticVariant_dcid][0]
clinicalSignificance = dc.get_property_values( [geneticVariant_dcid], 'clinicalSignificance')[geneticVariant_dcid][0]
clinVarReviewStatus = dc.get_property_values( [geneticVariant_dcid], 'clinVarReviewStatus')[geneticVariant_dcid][0]
df_RUNX1_genVar_clinical = df_RUNX1_genVar_clinical.append({'name': name, 'clinVarAlleleID': clinVarID, 'diseaseName': diseaseName, \
'clinicalSignificance': clinicalSignificance, 'clinVarReviewStatus': clinVarReviewStatus, \
'dcid': geneticVariant_dcid}, ignore_index=True)
# visualize the head of the dataframe containing the clinical info
print(df_RUNX1_genVar_clinical.head())
# see how many clinical genetic variants in RUNX1
print(df_RUNX1_genVar_clinical.shape[0])
Explanation: Filter genetic variants for ones that have been clinically studied
Great! We've identified 345 genetic variants in exons that are worth further consideration. Let's narrow our candidate list further by identifying genetic variants with clinical data associated with them (reported in ClinVar). We can do this by checking if the genetic variant has the property clinVarID using get_property_values. For ones that have been clinically reported we are going to grab the following additional clinical information for the genetic variant: diseaseName, clinicalSignificance, and clinVarReviewStatus.
End of explanation
# identify the clinical significance types for the genetic variants
print(df_RUNX1_genVar_clinical.clinicalSignificance.unique())
# filter genetic variants for those with pathogenicity
clinSig = ['ClinSigPathogenic']
df_final_geneticVariants = df_RUNX1_genVar_clinical[df_RUNX1_genVar_clinical.clinicalSignificance.isin(clinSig)]
# check how many genetic variants made the final cut
print(df_final_geneticVariants.shape[0])
# identify the diseases associated with these pathogenic variants
print(df_final_geneticVariants.diseaseName.unique())
# print the final genetic variant dataframe
print(df_final_geneticVariants)
Explanation: Filter For Pathogenic Genetic Variants
There are 345 genetic variants in the exons of RUNX1 that have recorded clinical information on them. Filter out the ones that are benign to establish our final candidate list of genetic variants that effect the function of RUNX1. We'll do this by filtering our pandas dataframe with the clinical information on the genetic variants.
End of explanation |
2,339 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
HIV Methylation Age Advancement
Step1: Run Age Predictions on HIV Dataset
Step2: Hannum Model
Step3: Preforming a linear adjustment on the control data.
Step4: Horvath Model
Step5: Quality Control
Look at detection p-value and concordance of the models
Step6: Dropping 6 patients with Low Concordance
2/6 have bad probes on the sites used in the models
Step7: Merging Hannum and Horvath
Without filter.
Step8: With QC filter
Step9: Comparing Age Advancement vs. Age Acceleration
Step10: Comparing short vs. long term infected HIV+ subjects
Step11: Association of age residuals with HIV status.
Step12: Mean age resisuals broken down by HIV status. Not that HIV- is ~0 by construction as the full dataset is normalized by the controls.
Step13: Looking at Predicted Time of Onset
The idea of age acceleration, only really makes sense in this context as a person should age normally until the onset of the disease
Step14: All HIV Combined
Step15: This is just the same linear model as the plot above.
Step16: Short and Long Duration Split | Python Code:
import NotebookImport
from Setup.Imports import *
from Setup.MethylationAgeModels import *
from Setup.Read_HIV_Data import *
hiv = (duration=='Control').map({False: 'HIV+', True: 'HIV-'})
hiv.name = 'HIV Status'
hiv.value_counts()
Explanation: HIV Methylation Age Advancement
End of explanation
def model_plot(prediction):
fig, axs = subplots(1,2, figsize=(10,4), sharex=True, sharey=True)
plot_regression(age, prediction.ix[ti(hiv=='HIV-')], ax=axs[0])
plot_regression(age, prediction.ix[ti(hiv=='HIV+')], ax=axs[1])
axs[0].set_title('HIV-')
axs[1].set_title('HIV+')
Explanation: Run Age Predictions on HIV Dataset
End of explanation
pred = run_hannum_model(df_hiv)
#Do not import
model_plot(pred)
get_error(age, pred, denominator='x', groups=hiv)
Explanation: Hannum Model
End of explanation
reg = linear_regression(age, pred.ix[ti(duration=='Control')])
pred_adj = (pred - reg['intercept']) / reg['slope']
#Do not import
model_plot(pred_adj)
get_error(age, pred_adj, denominator='x', groups=hiv)
Explanation: Preforming a linear adjustment on the control data.
End of explanation
pred_horvath = run_horvath_model(df_hiv_n)
#Do not import
model_plot(pred_horvath)
reg = linear_regression(age, pred_horvath.ix[ti(duration=='Control')])
pred_horvath_adj = (pred_horvath - reg['intercept']) / reg['slope']
#Do not import
model_plot(pred_horvath_adj)
get_error(age, pred_horvath_adj, denominator='x', groups=hiv)
Explanation: Horvath Model
End of explanation
#Do not import
fig, axs = subplots(1,2, figsize=(9,4), sharex=True, sharey=True)
plot_regression(pred_adj.ix[ti(hiv=='HIV-')], pred_horvath_adj, ax=axs[0])
plot_regression(pred_adj.ix[ti(hiv=='HIV+')], pred_horvath_adj, ax=axs[1])
fig.tight_layout()
get_error(pred_adj, pred_horvath_adj, groups=hiv)
Explanation: Quality Control
Look at detection p-value and concordance of the models
End of explanation
diff = ((pred_adj - pred_horvath_adj) / ((pred_adj + pred_horvath_adj) * .5)).abs()
diff = diff.groupby(level=0).first()
diff.name = 'Absolute difference in models'
#Do not import
fig, axs = subplots(1,2, figsize=(10,4))
diff.hist(ax=axs[0])
axs[0].set_xlabel(diff.name)
violin_plot_pandas(duration, diff, ax=axs[1])
fig.tight_layout()
in_model = detection_p[detection_p.level_0.isin(hannum_model.index.union(horvath_model.index))]
(diff).ix[in_model.Sample_Name.unique()].dropna().order()
pt = ti(diff < .2)
o = ti(diff > .2)
o
#Do not import
fig, axs = subplots(1,2, figsize=(9,4), sharex=True, sharey=True)
plot_regression(pred_adj.ix[ti(hiv=='HIV-')].ix[pt], pred_horvath_adj, ax=axs[0])
plot_regression(pred_adj.ix[ti(hiv=='HIV+')].ix[pt], pred_horvath_adj, ax=axs[1])
series_scatter(pred_adj.ix[ti(hiv=='HIV-')].ix[o], pred_horvath_adj, ax=axs[1],
color=colors[0], ann=None)
series_scatter(pred_adj.ix[ti(hiv=='HIV+')].ix[o], pred_horvath_adj, ax=axs[0],
color=colors[0], ann=None)
fig.tight_layout()
fig.savefig(FIGDIR + 'hiv_model_agreement_filter.png', dpi=300)
#Do not import
v = detection_p.groupby('Sample_Name').size()
v.name = '# probes with poor detection'
series_scatter(diff, v)
detection_p[detection_p.Sample_Name.isin(o)].groupby('Sample_Name').size().order()
#pt = ti(((pred_adj - pred_horvath_adj)).abs().dropna() < 10)
Explanation: Dropping 6 patients with Low Concordance
2/6 have bad probes on the sites used in the models
End of explanation
pred_c = (pred_horvath_adj + pred_adj) / 2
pred_c = pred_c.ix[duration.index]
pred_c.name = 'Predicted Age (Combined)'
reg = linear_regression(age, pred_c.ix[ti(duration=='Control')])
pred_c = (pred_c - reg['intercept']) / reg['slope']
#Do not import
model_plot(pred_c)
get_error(age, pred_c, denominator='x', groups=hiv)
Explanation: Merging Hannum and Horvath
Without filter.
End of explanation
pred_c = (pred_horvath_adj + pred_adj) / 2
pred_c = pred_c.ix[duration.index].ix[pt]
pred_c.name = 'Predicted Age (Combined)'
reg = linear_regression(age, pred_c.ix[ti(duration=='Control')])
pred_c = (pred_c - reg['intercept']) / reg['slope']
#Do not import
model_plot(pred_c)
get_error(age, pred_c, denominator='x', groups=hiv)
#Do not import
fig, axs = subplots(1,3, figsize=(15,4), sharex=True, sharey=True)
plot_regression(age, pred_c.ix[ti(duration=='Control')], ax=axs[0])
plot_regression(age, pred_c.ix[ti(duration=='HIV Short')], ax=axs[1])
plot_regression(age, pred_c.ix[ti(duration=='HIV Long')], ax=axs[2])
axs[2].set_xlim(20,75)
axs[2].set_ylim(20,75);
get_error(age, pred_c, denominator='x', groups=duration)
Explanation: With QC filter
End of explanation
#Do not import
fig, axs = subplots(2,1, figsize=(5,7))
violin_plot_pandas(duration, pred_c / clinical.age,
order=['Control','HIV Short','HIV Long'],
ax=axs[0])
axs[0].set_ylabel('Age Acceleration (AMAR)')
violin_plot_pandas(duration, pred_c - clinical.age,
order=['Control','HIV Short','HIV Long'],
ax=axs[1])
axs[1].set_ylabel('Age Advance (Predicted - Actual)')
for ax in axs:
prettify_ax(ax)
fig.tight_layout()
Explanation: Comparing Age Advancement vs. Age Acceleration
End of explanation
kruskal_pandas(duration.ix[ti(hiv == 'HIV+')], (pred_c - age))
Explanation: Comparing short vs. long term infected HIV+ subjects
End of explanation
kruskal_pandas(hiv, (pred_c - age))
Explanation: Association of age residuals with HIV status.
End of explanation
(pred_c - age).dropna().groupby(hiv).mean()
Explanation: Mean age resisuals broken down by HIV status. Not that HIV- is ~0 by construction as the full dataset is normalized by the controls.
End of explanation
p2 = (pred_c - (clinical.age - (clinical['estimated duration hiv (months)'] / 12.)))
p2.name = 'Biological Time Since Onset'
a2 = (clinical['estimated duration hiv (months)'] / 12.)
a2.name = 'Actual Time Since Onset'
a2 = a2.ix[pt].dropna()
p2 = p2.ix[pt].dropna()
Explanation: Looking at Predicted Time of Onset
The idea of age acceleration, only really makes sense in this context as a person should age normally until the onset of the disease
End of explanation
#Do not import
fig, ax = subplots(figsize=(5,4))
plot_regression(a2, p2, ax=ax)
fig.tight_layout()
Explanation: All HIV Combined
End of explanation
#Do not import
p4 = p2
a2.name = 'chron_age'
p4.name = 'bio_age'
df = process_factors([a2, p4], standardize=False)
fmla = robjects.Formula('bio_age ~ chron_age')
m = robjects.r.lm(fmla, df)
s = robjects.r.summary(m)
print '\n\n'.join(str(s).split('\n\n')[-3:])
#Do not import
print robjects.r.confint(m)
Explanation: This is just the same linear model as the plot above.
End of explanation
#Do not import
fig, axs = subplots(1,2, figsize=(10,4))
plot_regression(a2.ix[ti(duration=='HIV Short')], p2, ax=axs[0])
axs[0].set_xbound(0,5)
plot_regression(a2.ix[ti(duration=='HIV Long')], p2, ax=axs[1])
Explanation: Short and Long Duration Split
End of explanation |
2,340 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Applying Deterministic Methods
Getting Started
This tutorial focuses on using deterministic methods to square a triangle.
Note that a lot of the examples shown here might not be applicable in a real world scenario, and is only meant to demonstrate some of the functionalities included in the package. The user should always exercise their best actuarial judgement, and follow any applicable laws, the Code of Professional Conduct, and applicable Actuarial Standards of Practice.
Be sure to make sure your packages are updated. For more info on how to update your pakages, visit Keeping Packages Updated.
Step1: The Chainladder Method
The basic chainladder method is entirely specified by its development pattern selections. For this reason, the Chainladder estimator takes no additional assumptions, i.e. no additional arguments. Let's start by loading an example dataset and creating an Triangle with Development patterns and a TailCurve. Recall, we can bundle these two estimators into a single Pipeline if we wish.
Step2: We can now use the basic Chainladder estimator to estimate ultimate_ values of our Triangle.
Step3: We can also view the ibnr_. Techincally the term IBNR is reserved for Incurred but not Reported, but the chainladder models use it to describe the difference between the ultimate and the latest evaluation period.
Step4: It is often useful to see the completed Triangle and this can be accomplished by inspecting the full_triangle_. As with most other estimator properties, the full_triangle_ is itself a Triangle and can be manipulated as such.
Step5: Notice the calendar year of our ultimates. While ultimates will generally be realized before this date, the chainladder package picks the highest allowable date available for its ultimate_ valuation.
Step6: We can further manipulate the "triangle", such as applying cum_to_incr().
Step7: Another useful property is full_expectation_. Similar to the full_triangle, it "squares" the Triangle, but replaces the known data with expected values implied by the model and development pattern.
Step8: With some clever arithmetic, we can use these objects to give us other useful information. For example, we can retrospectively review the actual Triangle against its modeled expectation.
Step9: We can also filter out the lower right part of the triangle with [genins_model.full_triangle_.valuation <= genins.valuation_date].
Step10: Getting comfortable with manipulating Triangles will greatly improve our ability to extract value out of the chainladder package. Here is another way of getting the same answer.
Step11: We can also filter out the lower right part of the triangle with [genins_model.full_triangle_.valuation <= genins.valuation_date] before applying the heatmap().
Step12: Can you figure out how to get the expected IBNR runoff in the upcoming year?
Step13: The Bornhuetter-Ferguson Method
The BornhuetterFerguson estimator is another deterministic method having many of the same attributes as the Chainladder estimator. It comes with one input assumption, the a priori (apriori). This is a scalar multiplier that will be applied to an exposure vector, which will produce an a priori ultimate estimate vector that we can use for the model.
The BornhuetterFerguson method
The BornhuetterFerguson estimator is another deterministic method having many of the same attributes as the Chainladder estimator. It comes with one assumption, the apriori. This is a scalar multiplier that is to be applied to an exposure vector to determine an apriori ultimate estimate of our model.
Since the CAS Loss Reserve Database has premium, we will use it as an example. Let's grab the paid loss and net earned premium for the commercial auto line of business.
Remember that apriori is a scaler, which we need to apply it to a vector of exposures. Let's assume that the a priori is 0.75, for 75% loss ratio.
Let's set an apriori Loss Ratio estimate of 75%
The BornhuetterFerguson method along with all other expected loss methods like CapeCod and Benktander (discussed later), need to take in an exposure vector. The exposure vector has to be a Triangle itself. Remember that the Triangle class supports single exposure vectors.
Step14: Having an apriori that takes on only a constant for all origins can be limiting; however, we can apply a varying vector on the exposure vector to get a varying apriori.
Step15: Having an apriori that takes on only a constant for all origins can be limiting. This shouldn't stop the practitioner from exploiting the fact that the apriori can be embedded directly in the exposure vector itself allowing full cusomization of the apriori.
Step16: If we need to create a new colume, such as AdjEarnedPrmNet with varying implied loss ratios. It is recommend that we perform any data modification in pandas instead of Triangle forms.
Let's perform the estimate using Chainladder and compare the results.
Step17: The Benktander Method
The Benktander method is similar to the BornhuetterFerguson method, but allows for the specification of one additional assumption, n_iters, the number of iterations to recalculate the ultimates. The Benktander method generalizes both the BornhuetterFerguson and the Chainladder estimator through this assumption.
When n_iters = 1, the result is equivalent to the BornhuetterFerguson estimator.
When n_iters is sufficiently large, the result converges to the Chainladder estimator.
Step18: Fitting the Benktander method looks identical to the other methods.
Step19: The Cape Cod Method
The CapeCod method is similar to the BornhuetterFerguson method, except its apriori is computed from the Triangle itself. Instead of specifying an apriori, decay and trend need to be specified.
decay is the rate that gives weights to earlier origin periods, this parameter is required by the Generalized Cape Cod Method, as discussed in Using Best Practices to Determine a Best Reserve Estimate by Struzzieri and Hussian. As the decay factor approaches 1 (the default value), the result approaches the traditional Cape Cod method. As the decay factor approaches 0, the result approaches the Chainladder method.
trend is the trend rate along the origin axis to reflect systematic inflationary impacts on the a priori.
Step20: When we fit a CapeCod method, we can see the apriori it computes with the given decay and trend assumptions. Since it is an array of estimated parameters, this CapeCod attribute is called the apriori_, with a trailing underscore.
Step21: With decay=1, each origin period gets the same apriori_ (this is the traditional Cape Cod). The apriori_ is calculated using the latest diagonal over the used-up exposure, where the used-up exposure is the exposure vector / CDF. Let's validate the calculation of the a priori.
Step22: With decay=0, the apriori_ for each origin period stands on its own.
Step23: Doing the same on our manually calculated apriori_ yields the same result.
Step24: Let's verify the result of this Cape Cod model's result with the Chainladder's.
Step25: We can examine the apriori_s to see whether there exhibit any trends over time.
Step26: Looks like there is a small positive trend, let's judgementally select the trend as 1%.
Step27: We can of course utilize both the trend and the decay parameters together. Adding trend to the CapeCod method is intended to adjust the apriori_s to a common level. Once at a common level, the apriori_ can be estimated from multiple origin periods using the decay factor.
Step28: Once estimated, it is necessary to detrend our apriori_s back to their untrended levels and these are contained in detrended_apriori_. It is the detrended_apriori_ that gets used in the calculation of ultimate_ losses.
Step29: The detrended_apriori_ is a much smoother estimate of the initial expected ultimate_. With the detrended_apriori_ in hand, the CapeCod method estimator behaves exactly like our the BornhuetterFerguson model.
Step30: Recap
All the deterministic estimators have ultimate_, ibnr_, full_expecation_ and full_triangle_ attributes that are themselves Triangles. These can be manipulated in a variety of ways to gain additional insights from our model. The expected loss methods take in an exposure vector, which itself is a Triangle through the sample_weight argument of the fit method. The CapeCod method has the additional attributes apriori_ and detrended_apriori_ to accommodate the selection of its trend and decay assumptions.
Finally, these estimators work very well with the transformers discussed in previous tutorials. Let's demonstrate the compositional nature of these estimators.
Step31: Let's calculate the age-to-age factors | Python Code:
# Black linter, optional
%load_ext lab_black
import pandas as pd
import numpy as np
import chainladder as cl
import matplotlib.pyplot as plt
import os
%matplotlib inline
print("pandas: " + pd.__version__)
print("numpy: " + np.__version__)
print("chainladder: " + cl.__version__)
Explanation: Applying Deterministic Methods
Getting Started
This tutorial focuses on using deterministic methods to square a triangle.
Note that a lot of the examples shown here might not be applicable in a real world scenario, and is only meant to demonstrate some of the functionalities included in the package. The user should always exercise their best actuarial judgement, and follow any applicable laws, the Code of Professional Conduct, and applicable Actuarial Standards of Practice.
Be sure to make sure your packages are updated. For more info on how to update your pakages, visit Keeping Packages Updated.
End of explanation
genins = cl.load_sample("genins")
genins_dev = cl.Pipeline(
[("dev", cl.Development()), ("tail", cl.TailCurve())]
).fit_transform(genins)
Explanation: The Chainladder Method
The basic chainladder method is entirely specified by its development pattern selections. For this reason, the Chainladder estimator takes no additional assumptions, i.e. no additional arguments. Let's start by loading an example dataset and creating an Triangle with Development patterns and a TailCurve. Recall, we can bundle these two estimators into a single Pipeline if we wish.
End of explanation
genins_model = cl.Chainladder().fit(genins_dev)
genins_model.ultimate_
Explanation: We can now use the basic Chainladder estimator to estimate ultimate_ values of our Triangle.
End of explanation
genins_model.ibnr_
Explanation: We can also view the ibnr_. Techincally the term IBNR is reserved for Incurred but not Reported, but the chainladder models use it to describe the difference between the ultimate and the latest evaluation period.
End of explanation
genins
genins_model.full_triangle_
genins_model.full_triangle_.dev_to_val()
Explanation: It is often useful to see the completed Triangle and this can be accomplished by inspecting the full_triangle_. As with most other estimator properties, the full_triangle_ is itself a Triangle and can be manipulated as such.
End of explanation
genins_model.full_triangle_.valuation_date
Explanation: Notice the calendar year of our ultimates. While ultimates will generally be realized before this date, the chainladder package picks the highest allowable date available for its ultimate_ valuation.
End of explanation
genins_model.full_triangle_.dev_to_val().cum_to_incr()
Explanation: We can further manipulate the "triangle", such as applying cum_to_incr().
End of explanation
genins_model.full_expectation_
Explanation: Another useful property is full_expectation_. Similar to the full_triangle, it "squares" the Triangle, but replaces the known data with expected values implied by the model and development pattern.
End of explanation
genins_model.full_triangle_ - genins_model.full_expectation_
Explanation: With some clever arithmetic, we can use these objects to give us other useful information. For example, we can retrospectively review the actual Triangle against its modeled expectation.
End of explanation
genins_model.full_triangle_[
genins_model.full_triangle_.valuation <= genins.valuation_date
] - genins_model.full_expectation_[
genins_model.full_triangle_.valuation <= genins.valuation_date
]
Explanation: We can also filter out the lower right part of the triangle with [genins_model.full_triangle_.valuation <= genins.valuation_date].
End of explanation
genins_AvE = genins - genins_model.full_expectation_
genins_AvE[genins_AvE.valuation <= genins.valuation_date]
Explanation: Getting comfortable with manipulating Triangles will greatly improve our ability to extract value out of the chainladder package. Here is another way of getting the same answer.
End of explanation
genins_AvE[genins_AvE.valuation <= genins.valuation_date].heatmap()
Explanation: We can also filter out the lower right part of the triangle with [genins_model.full_triangle_.valuation <= genins.valuation_date] before applying the heatmap().
End of explanation
cal_yr_ibnr = genins_model.full_triangle_.dev_to_val().cum_to_incr()
cal_yr_ibnr[cal_yr_ibnr.valuation.year == 2011]
Explanation: Can you figure out how to get the expected IBNR runoff in the upcoming year?
End of explanation
bf_model.fit(
comauto["CumPaidLoss"], sample_weight=comauto["EarnedPremNet"].latest_diagonal
)
bf_model.ultimate_
Explanation: The Bornhuetter-Ferguson Method
The BornhuetterFerguson estimator is another deterministic method having many of the same attributes as the Chainladder estimator. It comes with one input assumption, the a priori (apriori). This is a scalar multiplier that will be applied to an exposure vector, which will produce an a priori ultimate estimate vector that we can use for the model.
The BornhuetterFerguson method
The BornhuetterFerguson estimator is another deterministic method having many of the same attributes as the Chainladder estimator. It comes with one assumption, the apriori. This is a scalar multiplier that is to be applied to an exposure vector to determine an apriori ultimate estimate of our model.
Since the CAS Loss Reserve Database has premium, we will use it as an example. Let's grab the paid loss and net earned premium for the commercial auto line of business.
Remember that apriori is a scaler, which we need to apply it to a vector of exposures. Let's assume that the a priori is 0.75, for 75% loss ratio.
Let's set an apriori Loss Ratio estimate of 75%
The BornhuetterFerguson method along with all other expected loss methods like CapeCod and Benktander (discussed later), need to take in an exposure vector. The exposure vector has to be a Triangle itself. Remember that the Triangle class supports single exposure vectors.
End of explanation
bf_model.fit(
comauto["CumPaidLoss"], sample_weight=comauto["EarnedPremNet"].latest_diagonal
)
Explanation: Having an apriori that takes on only a constant for all origins can be limiting; however, we can apply a varying vector on the exposure vector to get a varying apriori.
End of explanation
b1 = cl.BornhuetterFerguson(apriori=0.75).fit(
comauto["CumPaidLoss"], sample_weight=comauto["EarnedPremNet"].latest_diagonal
)
b2 = cl.BornhuetterFerguson(apriori=1.00).fit(
comauto["CumPaidLoss"],
sample_weight=0.75 * comauto["EarnedPremNet"].latest_diagonal,
)
b1.ultimate_ == b2.ultimate_
Explanation: Having an apriori that takes on only a constant for all origins can be limiting. This shouldn't stop the practitioner from exploiting the fact that the apriori can be embedded directly in the exposure vector itself allowing full cusomization of the apriori.
End of explanation
cl_model = cl.Chainladder().fit(comauto["CumPaidLoss"])
plt.plot(
bf_model.ultimate_.to_frame().index.year, bf_model.ultimate_.to_frame(), label="BF",
)
plt.plot(
cl_model.ultimate_.to_frame().index.year, cl_model.ultimate_.to_frame(), label="CL",
)
plt.legend(loc="upper left")
Explanation: If we need to create a new colume, such as AdjEarnedPrmNet with varying implied loss ratios. It is recommend that we perform any data modification in pandas instead of Triangle forms.
Let's perform the estimate using Chainladder and compare the results.
End of explanation
bk_model = cl.Benktander(apriori=0.75, n_iters=2)
Explanation: The Benktander Method
The Benktander method is similar to the BornhuetterFerguson method, but allows for the specification of one additional assumption, n_iters, the number of iterations to recalculate the ultimates. The Benktander method generalizes both the BornhuetterFerguson and the Chainladder estimator through this assumption.
When n_iters = 1, the result is equivalent to the BornhuetterFerguson estimator.
When n_iters is sufficiently large, the result converges to the Chainladder estimator.
End of explanation
bk_model.fit(
X=comauto["CumPaidLoss"], sample_weight=comauto["EarnedPremNet"].latest_diagonal
)
plt.plot(
bf_model.ultimate_.to_frame().index.year, bf_model.ultimate_.to_frame(), label="BF"
)
plt.plot(
cl_model.ultimate_.to_frame().index.year, cl_model.ultimate_.to_frame(), label="CL"
)
plt.plot(
bk_model.ultimate_.to_frame().index.year, bk_model.ultimate_.to_frame(), label="BK"
)
plt.legend(loc="upper left")
Explanation: Fitting the Benktander method looks identical to the other methods.
End of explanation
cc_model = cl.CapeCod(decay=1, trend=0).fit(
X=comauto["CumPaidLoss"], sample_weight=comauto["EarnedPremNet"].latest_diagonal
)
Explanation: The Cape Cod Method
The CapeCod method is similar to the BornhuetterFerguson method, except its apriori is computed from the Triangle itself. Instead of specifying an apriori, decay and trend need to be specified.
decay is the rate that gives weights to earlier origin periods, this parameter is required by the Generalized Cape Cod Method, as discussed in Using Best Practices to Determine a Best Reserve Estimate by Struzzieri and Hussian. As the decay factor approaches 1 (the default value), the result approaches the traditional Cape Cod method. As the decay factor approaches 0, the result approaches the Chainladder method.
trend is the trend rate along the origin axis to reflect systematic inflationary impacts on the a priori.
End of explanation
cc_model.apriori_
Explanation: When we fit a CapeCod method, we can see the apriori it computes with the given decay and trend assumptions. Since it is an array of estimated parameters, this CapeCod attribute is called the apriori_, with a trailing underscore.
End of explanation
latest_diagonal = comauto["CumPaidLoss"].latest_diagonal
cdf_as_origin_vector = (
cl.Chainladder().fit(comauto["CumPaidLoss"]).ultimate_
/ comauto["CumPaidLoss"].latest_diagonal
)
latest_diagonal.sum() / (
comauto["EarnedPremNet"].latest_diagonal / cdf_as_origin_vector
).sum()
Explanation: With decay=1, each origin period gets the same apriori_ (this is the traditional Cape Cod). The apriori_ is calculated using the latest diagonal over the used-up exposure, where the used-up exposure is the exposure vector / CDF. Let's validate the calculation of the a priori.
End of explanation
cc_model = cl.CapeCod(decay=0, trend=0).fit(
X=comauto["CumPaidLoss"], sample_weight=comauto["EarnedPremNet"].latest_diagonal
)
cc_model.apriori_
Explanation: With decay=0, the apriori_ for each origin period stands on its own.
End of explanation
latest_diagonal / (comauto["EarnedPremNet"].latest_diagonal / cdf_as_origin_vector)
Explanation: Doing the same on our manually calculated apriori_ yields the same result.
End of explanation
cc_model.ultimate_ - cl_model.ultimate_
Explanation: Let's verify the result of this Cape Cod model's result with the Chainladder's.
End of explanation
plt.plot(cc_model.apriori_.to_frame().index.year, cc_model.apriori_.to_frame())
Explanation: We can examine the apriori_s to see whether there exhibit any trends over time.
End of explanation
trended_cc_model = cl.CapeCod(decay=0, trend=0.01).fit(
X=comauto["CumPaidLoss"], sample_weight=comauto["EarnedPremNet"].latest_diagonal
)
plt.plot(
cc_model.apriori_.to_frame().index.year,
cc_model.apriori_.to_frame(),
label="Untrended",
)
plt.plot(
trended_cc_model.apriori_.to_frame().index.year,
trended_cc_model.apriori_.to_frame(),
label="Trended",
)
plt.legend(loc="lower right")
Explanation: Looks like there is a small positive trend, let's judgementally select the trend as 1%.
End of explanation
trended_cc_model = cl.CapeCod(decay=0, trend=0.01).fit(
X=comauto["CumPaidLoss"], sample_weight=comauto["EarnedPremNet"].latest_diagonal
)
trended_decayed_cc_model = cl.CapeCod(decay=0.75, trend=0.01).fit(
X=comauto["CumPaidLoss"], sample_weight=comauto["EarnedPremNet"].latest_diagonal
)
plt.plot(
cc_model.apriori_.to_frame().index.year,
cc_model.apriori_.to_frame(),
label="Untrended",
)
plt.plot(
trended_cc_model.apriori_.to_frame().index.year,
trended_cc_model.apriori_.to_frame(),
label="Trended",
)
plt.plot(
trended_decayed_cc_model.apriori_.to_frame().index.year,
trended_decayed_cc_model.apriori_.to_frame(),
label="Trended and Decayed",
)
plt.legend(loc="lower right")
Explanation: We can of course utilize both the trend and the decay parameters together. Adding trend to the CapeCod method is intended to adjust the apriori_s to a common level. Once at a common level, the apriori_ can be estimated from multiple origin periods using the decay factor.
End of explanation
plt.plot(
trended_cc_model.apriori_.to_frame().index.year,
trended_cc_model.apriori_.to_frame(),
label="Trended",
)
plt.plot(
trended_cc_model.detrended_apriori_.to_frame().index.year,
trended_cc_model.detrended_apriori_.to_frame(),
label="Detended to Original",
)
plt.legend(loc="lower right")
Explanation: Once estimated, it is necessary to detrend our apriori_s back to their untrended levels and these are contained in detrended_apriori_. It is the detrended_apriori_ that gets used in the calculation of ultimate_ losses.
End of explanation
bf_model = cl.BornhuetterFerguson().fit(
X=comauto["CumPaidLoss"],
sample_weight=trended_cc_model.detrended_apriori_
* comauto["EarnedPremNet"].latest_diagonal,
)
bf_model.ultimate_.sum() - trended_cc_model.ultimate_.sum()
Explanation: The detrended_apriori_ is a much smoother estimate of the initial expected ultimate_. With the detrended_apriori_ in hand, the CapeCod method estimator behaves exactly like our the BornhuetterFerguson model.
End of explanation
wkcomp = (
cl.load_sample("clrd")
.groupby("LOB")
.sum()
.loc["wkcomp"][["CumPaidLoss", "EarnedPremNet"]]
)
wkcomp
Explanation: Recap
All the deterministic estimators have ultimate_, ibnr_, full_expecation_ and full_triangle_ attributes that are themselves Triangles. These can be manipulated in a variety of ways to gain additional insights from our model. The expected loss methods take in an exposure vector, which itself is a Triangle through the sample_weight argument of the fit method. The CapeCod method has the additional attributes apriori_ and detrended_apriori_ to accommodate the selection of its trend and decay assumptions.
Finally, these estimators work very well with the transformers discussed in previous tutorials. Let's demonstrate the compositional nature of these estimators.
End of explanation
patterns = cl.Pipeline(
[
(
"dev",
cl.Development(
average=["volume"] * 5 + ["simple"] * 4,
n_periods=7,
drop_valuation="1995",
),
),
("tail", cl.TailCurve(curve="inverse_power", extrap_periods=80)),
]
)
cc = cl.CapeCod(decay=0.8, trend=0.02).fit(
X=patterns.fit_transform(wkcomp["CumPaidLoss"]),
sample_weight=wkcomp["EarnedPremNet"].latest_diagonal,
)
cc.ultimate_
plt.bar(cc.ultimate_.to_frame().index.year, cc.ultimate_.to_frame()["2261"])
Explanation: Let's calculate the age-to-age factors:
- Without the the 1995 valuation period
- Using volume weighted for the first 5 factors, and simple average for the next 4 factors (for a total of 9 age-to-age factors)
- Using no more than 7 periods (with n_periods)
End of explanation |
2,341 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Kittens
Modeling and Simulation in Python
Copyright 2021 Allen Downey
License
Step1: If you have used the Internet, you have probably seen videos of kittens unrolling toilet paper.
And you might have wondered how long it would take a standard kitten to unroll 47 m of paper, the length of a standard roll.
The interactions of the kitten and the paper rolls are complex. To keep things simple, let's assume that the kitten pulls down on the free end of the roll with constant force. And let's neglect the friction between the roll and the axle.
This diagram shows the paper roll with the force applied by the kitten, $F$, the lever arm of the force around the axis of rotation, $r$, and the resulting torque, $\tau$.
Assuming that the force applied by the kitten is 0.002 N, how long would it take to unroll a standard roll of toilet paper?
We'll use the same parameters as in Chapter 24
Step2: Rmin and Rmax are the minimum and maximum radius of the roll, respectively.
Mcore is the weight of the core (the cardboard tube at the center) and Mroll is the total weight of the paper.
L is the unrolled length of the paper.
tension is the force the kitten applies by pulling on the loose end of the roll (I chose this value because it yields reasonable results).
In Chapter 24 we defined $k$ to be the constant that relates a change in the radius of the roll to a change in the rotation of the roll
Step4: Moment of Inertia
To compute angular acceleration, we'll need the moment of inertia for the roll.
At http
Step5: Icore is the moment of inertia of the core; Iroll is the moment of inertia of the paper.
rho_h is the density of the paper in terms of mass per unit of area.
To compute rho_h, we compute the area of the complete roll like this
Step6: And divide the mass of the roll by that area.
Step7: As an example, here's the moment of inertia for the complete roll.
Step8: As r decreases, so does I. Here's the moment of inertia when the roll is empty.
Step9: The way $I$ changes over time might be more of a problem than I have made it seem. In the same way that $F = m a$ only applies when $m$ is constant, $\tau = I \alpha$ only applies when $I$ is constant. When $I$ varies, we usually have to use a more general version of Newton's law. However, I believe that in this example, mass and moment of inertia vary together in a way that makes the simple approach work out.
A friend of mine who is a physicist is not convinced; nevertheless, let's proceed on the assumption that I am right.
Simulation
The state variables we'll use are
theta, the total rotation of the roll in radians,
omega, angular velocity in rad / s,
r, the radius of the roll, and
y, the length of the unrolled paper.
Here's a State object with the initial conditions.
Step10: And here's a System object with the starting conditions and t_end.
Step11: You can take it from here.
Exercise
Step12: Exercise
Step13: Now run the simulation.
Step14: And check the results.
Step15: The final value of theta should be about 200 rotations, the same as in Chapter 24.
The final value of omega should be about 63 rad/s, which is about 10 revolutions per second. That's pretty fast, but it might be plausible.
The final value of y should be L, which is 47 m.
The final value of r should be Rmin, which is 0.02 m.
And the total unrolling time should be about 76 seconds, which seems plausible.
The following cells plot the results.
theta increases slowly at first, then accelerates.
Step16: Angular velocity, omega, increases almost linearly at first, as constant force yields almost constant torque. Then, as the radius decreases, the lever arm decreases, yielding lower torque, but moment of inertia decreases even more, yielding higher angular acceleration.
Step17: y increases slowly and then accelerates.
Step18: r decreases slowly, then accelerates. | Python Code:
# install Pint if necessary
try:
import pint
except ImportError:
!pip install pint
# download modsim.py if necessary
from os.path import exists
filename = 'modsim.py'
if not exists(filename):
from urllib.request import urlretrieve
url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'
local, _ = urlretrieve(url+filename, filename)
print('Downloaded ' + local)
# import functions from modsim
from modsim import *
Explanation: Kittens
Modeling and Simulation in Python
Copyright 2021 Allen Downey
License: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International
End of explanation
Rmin = 0.02 # m
Rmax = 0.055 # m
Mcore = 15e-3 # kg
Mroll = 215e-3 # kg
L = 47 # m
tension = 0.002 # N
Explanation: If you have used the Internet, you have probably seen videos of kittens unrolling toilet paper.
And you might have wondered how long it would take a standard kitten to unroll 47 m of paper, the length of a standard roll.
The interactions of the kitten and the paper rolls are complex. To keep things simple, let's assume that the kitten pulls down on the free end of the roll with constant force. And let's neglect the friction between the roll and the axle.
This diagram shows the paper roll with the force applied by the kitten, $F$, the lever arm of the force around the axis of rotation, $r$, and the resulting torque, $\tau$.
Assuming that the force applied by the kitten is 0.002 N, how long would it take to unroll a standard roll of toilet paper?
We'll use the same parameters as in Chapter 24:
End of explanation
k = (Rmax**2 - Rmin**2) / 2 / L
k
Explanation: Rmin and Rmax are the minimum and maximum radius of the roll, respectively.
Mcore is the weight of the core (the cardboard tube at the center) and Mroll is the total weight of the paper.
L is the unrolled length of the paper.
tension is the force the kitten applies by pulling on the loose end of the roll (I chose this value because it yields reasonable results).
In Chapter 24 we defined $k$ to be the constant that relates a change in the radius of the roll to a change in the rotation of the roll:
$$dr = k~d\theta$$
And we derived the equation for $k$ in terms of $R_{min}$, $R_{max}$, and $L$.
$$k = \frac{1}{2L} (R_{max}^2 - R_{min}^2)$$
So we can compute k like this:
End of explanation
def moment_of_inertia(r, system):
Moment of inertia for a roll of toilet paper.
r: current radius of roll in meters
system: System object
returns: moment of inertia in kg m**2
Icore = Mcore * Rmin**2
Iroll = np.pi * rho_h / 2 * (r**4 - Rmin**4)
return Icore + Iroll
Explanation: Moment of Inertia
To compute angular acceleration, we'll need the moment of inertia for the roll.
At http://modsimpy.com/moment you can find moments of inertia for
simple geometric shapes. I'll model the core as a "thin cylindrical shell", and the paper roll as a "thick-walled cylindrical tube with open ends".
The moment of inertia for a thin shell is just $m r^2$, where $m$ is the mass and $r$ is the radius of the shell.
For a thick-walled tube the moment of inertia is
$$I = \frac{\pi \rho h}{2} (r_2^4 - r_1^4)$$
where $\rho$ is the density of the material, $h$ is the height of the tube (if we think of the roll oriented vertically), $r_2$ is the outer diameter, and $r_1$ is the inner diameter.
Since the outer diameter changes as the kitten unrolls the paper, we
have to compute the moment of inertia, at each point in time, as a
function of the current radius, r, like this:
End of explanation
area = np.pi * (Rmax**2 - Rmin**2)
area
Explanation: Icore is the moment of inertia of the core; Iroll is the moment of inertia of the paper.
rho_h is the density of the paper in terms of mass per unit of area.
To compute rho_h, we compute the area of the complete roll like this:
End of explanation
rho_h = Mroll / area
rho_h
Explanation: And divide the mass of the roll by that area.
End of explanation
moment_of_inertia(Rmax, system)
Explanation: As an example, here's the moment of inertia for the complete roll.
End of explanation
moment_of_inertia(Rmin, system)
Explanation: As r decreases, so does I. Here's the moment of inertia when the roll is empty.
End of explanation
init = State(theta=0, omega=0, y=0, r=Rmax)
init
Explanation: The way $I$ changes over time might be more of a problem than I have made it seem. In the same way that $F = m a$ only applies when $m$ is constant, $\tau = I \alpha$ only applies when $I$ is constant. When $I$ varies, we usually have to use a more general version of Newton's law. However, I believe that in this example, mass and moment of inertia vary together in a way that makes the simple approach work out.
A friend of mine who is a physicist is not convinced; nevertheless, let's proceed on the assumption that I am right.
Simulation
The state variables we'll use are
theta, the total rotation of the roll in radians,
omega, angular velocity in rad / s,
r, the radius of the roll, and
y, the length of the unrolled paper.
Here's a State object with the initial conditions.
End of explanation
system = System(init=init, t_end=120)
Explanation: And here's a System object with the starting conditions and t_end.
End of explanation
# Solution
def slope_func(t, state, system):
theta, omega, y, r = state
tau = r * tension
I = moment_of_inertia(r, system)
alpha = tau / I
dydt = r * omega
drdt = -k * omega
return omega, alpha, dydt, drdt
# Solution
slope_func(0, system.init, system)
Explanation: You can take it from here.
Exercise:
Write a slope function we can use to simulate this system. Test it with the initial conditions. The results should be approximately
0.0, 0.294, 0.0, 0.0
End of explanation
# Solution
def event_func(t, state, system):
theta, omega, y, r = state
return L-y
# Solution
event_func(0, system.init, system)
Explanation: Exercise: Write an event function that stops the simulation when y equals L, that is, when the entire roll is unrolled. Test your function with the initial conditions.
End of explanation
# Solution
results, details = run_solve_ivp(system, slope_func,
events=event_func)
details.message
Explanation: Now run the simulation.
End of explanation
results.tail()
Explanation: And check the results.
End of explanation
results.theta.plot(color='C0', label='theta')
decorate(xlabel='Time (s)',
ylabel='Angle (rad)')
Explanation: The final value of theta should be about 200 rotations, the same as in Chapter 24.
The final value of omega should be about 63 rad/s, which is about 10 revolutions per second. That's pretty fast, but it might be plausible.
The final value of y should be L, which is 47 m.
The final value of r should be Rmin, which is 0.02 m.
And the total unrolling time should be about 76 seconds, which seems plausible.
The following cells plot the results.
theta increases slowly at first, then accelerates.
End of explanation
results.omega.plot(color='C2', label='omega')
decorate(xlabel='Time (s)',
ylabel='Angular velocity (rad/s)')
Explanation: Angular velocity, omega, increases almost linearly at first, as constant force yields almost constant torque. Then, as the radius decreases, the lever arm decreases, yielding lower torque, but moment of inertia decreases even more, yielding higher angular acceleration.
End of explanation
results.y.plot(color='C1', label='y')
decorate(xlabel='Time (s)',
ylabel='Length (m)')
Explanation: y increases slowly and then accelerates.
End of explanation
results.r.plot(color='C4', label='r')
decorate(xlabel='Time (s)',
ylabel='Radius (m)')
Explanation: r decreases slowly, then accelerates.
End of explanation |
2,342 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out
Step1: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise
Step2: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement
Step3: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise
Step4: Hyperparameters
Step5: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise
Step6: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will by sigmoid cross-entropys, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants to discriminator to output ones for fake images.
Exercise
Step7: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want to generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables to start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to var_list in the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise
Step8: Training
Step9: Training loss
Here we'll check out the training losses for the generator and discriminator.
Step10: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
Step11: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
Step12: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
Step13: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples! | Python Code:
%matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
Explanation: Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:
Pix2Pix
CycleGAN
A whole list
The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator, it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistiguishable from real data to the discriminator.
The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to contruct it's fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can foold the discriminator.
The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates an real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.
End of explanation
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32,(None,real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32,(None, z_dim), name='input_z')
return inputs_real, inputs_z
Explanation: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively.
End of explanation
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
''' Build the generator network.
Arguments
---------
z : Input tensor for the generator
out_dim : Shape of the generator output
n_units : Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope('generator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(z, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
# Logits and tanh output
logits = tf.layers.dense(h1, out_dim, activation=None)
out = tf.tanh(logits)
return out, logits
Explanation: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement:
python
with tf.variable_scope('scope_name', reuse=False):
# code here
Here's more from the TensorFlow documentation to get another look at using tf.variable_scope.
Leaky ReLU
TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one . For this you can use take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:
$$
f(x) = max(\alpha * x, x)
$$
Tanh Output
The generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1. Along with the $tanh$ output, we also need to return the logits for use in calculating the loss with tf.nn.sigmoid_cross_entropy_with_logits.
Exercise: Implement the generator network in the function below. You'll need to return both the logits and the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope.
End of explanation
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
''' Build the discriminator network.
Arguments
---------
x : Input tensor for the discriminator
n_units: Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope('discriminator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(x, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
logits = tf.layers.dense(h1, 1, activation=None)
out = tf.sigmoid(logits)
return out, logits
Explanation: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope.
End of explanation
# Size of input image to discriminator
input_size = 784 # 28x28 MNIST images flattened
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Label smoothing
smooth = 0.1
Explanation: Hyperparameters
End of explanation
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
# Generator network here
g_model, g_logits = generator(input_z, input_size)
# g_model is the generator output
# Disriminator network here
d_model_real, d_logits_real = discriminator(input_real)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True)
Explanation: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise: Build the network from the functions you defined earlier.
End of explanation
# Calculate losses
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real,
labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.zeros_like(d_logits_real)))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.ones_like(d_logits_fake)))
Explanation: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will by sigmoid cross-entropys, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants to discriminator to output ones for fake images.
Exercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.
End of explanation
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
Explanation: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want to generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables to start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to var_list in the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using AdamOptimizer, create an optimizer for each network that update the network variables separately.
End of explanation
batch_size = 100
epochs = 100
samples = []
losses = []
saver = tf.train.Saver(var_list = g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples, _ = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
Explanation: Training
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
Explanation: Training loss
Here we'll check out the training losses for the generator and discriminator.
End of explanation
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
Explanation: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
End of explanation
_ = view_samples(-1, samples)
Explanation: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
End of explanation
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
Explanation: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
End of explanation
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples, _ = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
view_samples(0, [gen_samples])
Explanation: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!
End of explanation |
2,343 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced
Step1: Units
Each FloatParameter or FloatArrayParameter has an associated unit. Let's look at the 'sma' Parameter for the binary orbit.
Step2: From the representation above, we can already see that the units are in solar radii. We can access the units directly via get_default_unit.
Step3: Calling get_value returns only the float of the value in these units.
Step4: Alternatively, you can access an astropy quantity object that contains the value and unit by calling get_quantity.
Step5: Both get_value and get_quantity also accept a unit argument which will return the value or quantity in the requested units (if able to convert). This unit argument takes either a unit object (we imported a forked version of astropy units from within PHOEBE) or a string representation that can be parsed.
Step6: Similarly when setting the value, you can provide either a Quantity object or a value and unit. These will still be stored within PHOEBE according to the default_unit of the Parameter object.
Step7: If for some reason you want to change the default units, you can, but just be careful that this could cause some float-point precision issues. | Python Code:
#!pip install -I "phoebe>=2.3,<2.4"
import phoebe
from phoebe import u,c
logger = phoebe.logger(clevel='WARNING')
b = phoebe.default_binary()
Explanation: Advanced: Parameter Units
In this tutorial we will learn about how units are handled in the frontend and how to translate between different units.
Setup
Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
b.get_parameter(qualifier='sma', component='binary', context='component')
Explanation: Units
Each FloatParameter or FloatArrayParameter has an associated unit. Let's look at the 'sma' Parameter for the binary orbit.
End of explanation
b.get_parameter(qualifier='sma', component='binary', context='component').get_default_unit()
Explanation: From the representation above, we can already see that the units are in solar radii. We can access the units directly via get_default_unit.
End of explanation
b.get_parameter(qualifier='sma', component='binary', context='component').get_value()
Explanation: Calling get_value returns only the float of the value in these units.
End of explanation
b.get_parameter(qualifier='sma', component='binary', context='component').get_quantity()
Explanation: Alternatively, you can access an astropy quantity object that contains the value and unit by calling get_quantity.
End of explanation
b.get_parameter(qualifier='sma', component='binary', context='component').get_value(unit=u.km)
b.get_parameter(qualifier='sma', component='binary', context='component').get_quantity(unit='km')
Explanation: Both get_value and get_quantity also accept a unit argument which will return the value or quantity in the requested units (if able to convert). This unit argument takes either a unit object (we imported a forked version of astropy units from within PHOEBE) or a string representation that can be parsed.
End of explanation
b.get_parameter(qualifier='sma', component='binary', context='component').set_value(3800000*u.km)
b.get_parameter(qualifier='sma', component='binary', context='component').get_quantity()
b.get_parameter(qualifier='sma', component='binary', context='component').set_value(3900000, unit='km')
b.get_parameter(qualifier='sma', component='binary', context='component').get_quantity()
Explanation: Similarly when setting the value, you can provide either a Quantity object or a value and unit. These will still be stored within PHOEBE according to the default_unit of the Parameter object.
End of explanation
b.get_parameter(qualifier='sma', component='binary', context='component').set_default_unit('mm')
b.get_parameter(qualifier='sma', component='binary', context='component').get_quantity()
b.get_parameter(qualifier='sma', component='binary', context='component').get_quantity(unit='solRad')
Explanation: If for some reason you want to change the default units, you can, but just be careful that this could cause some float-point precision issues.
End of explanation |
2,344 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Stochastic Differential Equations
Step1: This background for these exercises is article of D Higham, An Algorithmic Introduction to Numerical Simulation of Stochastic Differential Equations, SIAM Review 43
Step2: Further Stochastic integrals
Quick recap | Python Code:
from IPython.core.display import HTML
css_file = 'https://raw.githubusercontent.com/ngcm/training-public/master/ipython_notebook_styles/ngcmstyle.css'
HTML(url=css_file)
Explanation: Stochastic Differential Equations: Lab 2
End of explanation
%matplotlib inline
import numpy
from matplotlib import pyplot
from matplotlib import rcParams
rcParams['font.family'] = 'serif'
rcParams['font.size'] = 16
rcParams['figure.figsize'] = (12,6)
from scipy.integrate import quad
Explanation: This background for these exercises is article of D Higham, An Algorithmic Introduction to Numerical Simulation of Stochastic Differential Equations, SIAM Review 43:525-546 (2001).
Higham provides Matlab codes illustrating the basic ideas at http://personal.strath.ac.uk/d.j.higham/algfiles.html, which are also given in the paper.
End of explanation
r = 2.0
K = 1.0
beta = 0.25
X0 = 0.5
T = 1.0
Explanation: Further Stochastic integrals
Quick recap: the key feature is the Ito stochastic integral
\begin{equation}
\int_{t_0}^t G(t') \, \text{d}W(t') = \text{mean-square-}\lim_{n\to +\infty} \left{ \sum_{i=1}^n G(t_{i-1}) (W_{t_i} - W_{t_{i-1}} ) \right}
\end{equation}
where the key point for the Ito integral is that the first term in the sum is evaluated at the left end of the interval ($t_{i-1}$).
Now we use this to write down the SDE
\begin{equation}
\text{d}X_t = f(X_t) \, \text{d}t + g(X_t) \, \text{d}W_t
\end{equation}
with formal solution
\begin{equation}
X_t = X_0 + \int_0^t f(X_s) \, \text{d}s + \int_0^t g(X_s) \, \text{d}W_s.
\end{equation}
Using the Ito stochastic integral formula we get the Euler-Maruyama method
\begin{equation}
X_{n+1} = X_n + \delta t \, f(X_n) + \sqrt{\delta t} \xi_n \, g(X_n)
\end{equation}
by applying the integral over the region $[t_n, t_{n+1} = t_n + \delta t]$. Here $\delta t$ is the width of the interval and $\xi_n$ is the normal random variable $\xi_n \sim N(0, 1)$.
Normal chain rule
If
\begin{equation}
\frac{\text{d}X}{\text{d}t} = f(X_t)
\end{equation}
and we want to find the differential equation satisfied by $h(X(t))$ (or $h(X_t)$), then we write
\begin{align}
&&\frac{\text{d}}{\text{d}t} h(X_t) &= h \left( X(t) + \text{d}X(t) \right) - h(X(t)) \
&&&\simeq h(X(t)) + \text{d}X \, h'(X(t)) + \frac{1}{2} (\text{d}X)^2 \, h''(X(t)) + \dots - h(X(t)) \
&&&\simeq f(X) h'(X) \text{d}t + \frac{1}{2} (f(X))^2 h''(X) (\text{d}t)^2 + \dots \
\implies && \frac{\text{d} h(X)}{dt} &= f(X) h'(X).
\end{align}
Stochastic chain rule
Now run through the same steps using the equation
\begin{equation}
\text{d}X = f(X)\, \text{d}t + g(X) \, \text{d}W.
\end{equation}
We find
\begin{align}
&& \text{d}h &\simeq h'(X(t))\, \text{d}X + \frac{1}{2} h''(X(t)) (\text{d}X)^2 + \dots, \
&&&\simeq h'(X) f(X)\, \text{d}t + h'(X) g(X) ', \text{d}W + \frac{1}{2} \left( f(X) \text{d}t^2 + 2 f(x)g(x)\, \text{d}t dW + g^2(x) \text{d}W^2 \right) \
\implies && \text{d}h &= \left( f(X) h'(X) + \frac{1}{2} h''(X)g^2(X) \right) \, \text{d}t + h'(X) g(X) \, \text{d}W.
\end{align}
This additional $g^2$ term makes all the difference when deriving numerical methods, where the chain rule is repeatedly used.
Using this result
Remember that
\begin{equation}
\int_{t_0}^t W_s \, \text{d}W_s = \frac{1}{2} W^2_t - \frac{1}{2} W^2_{t_0} - \frac{1}{2} (t - t_0).
\end{equation}
From this we need to identify the stochastic differential equation, and also the function $h$, that will give us this result just from the chain rule.
The SDE is
\begin{equation}
\text{d}X_t = \text{d}W_t, \quad f(X) = 0, \quad g(X) = 1.
\end{equation}
Writing the chain rule down in the form
\begin{equation}
h(X_t) = h(X_0) + \int_0^t \left( f(X_s) h'(X_s) + \frac{1}{2} h''(X_s) g^2(X_s) \right) \, \text{d}t + \int_0^t h'(X_s) g(X_s) \, \text{d}W_s.
\end{equation}
Matching the final term (the integral over $\text{d}W_s$) we see that we need $h'$ to go like $X$, or
\begin{equation}
h = X^2, \quad \text{d}X_t = \text{d}W_t, \quad f(X) = 0, \quad g(X) = 1.
\end{equation}
With $X_t = W_t$ we therefore have
\begin{align}
W_t^2 &= W_0^2 + \int_{t_0}^t \frac{1}{2} 2 \, \text{d}s + \int_{t_0}^t 2 W_s \, \text{d}W_s
&= W_0^2 + (t - t_0) + \int_{t_0}^t 2 W_s \, \text{d}W_s
\end{align}
as required.
Milstein's method
Using our chain rule we can construct higher order methods for stochastic differential equations. Milstein's method, applied to the SDE
$$
\text{d}X = f(X) \, \text{d}t + g(X) \,\text{d}W,
$$
is
$$
X_{n+1} = X_n + h f_n + g_n \, \text{d}W_{n} + \tfrac{1}{2} g_n g'n \left( \text{d}W{n}^2 - h \right).
$$
Tasks
Implement Milstein's method, applied to the problem in the previous lab:
$$
\begin{equation}
\text{d}X(t) = \lambda X(t) \, \text{d}t + \mu X(t) \text{d}W(t), \qquad X(0) = X_0.
\end{equation}
$$
Choose any reasonable values of the free parameters $\lambda, \mu, X_0$.
The exact solution to this equation is $X(t) = X(0) \exp \left[ \left( \lambda - \tfrac{1}{2} \mu^2 \right) t + \mu W(t) \right]$. Fix the timetstep and compare your solution to the exact solution.
Check the convergence again.
Compare the performance of the Euler-Maruyama and Milstein method using eg timeit. At what point is one method better than the other?
Population problem
Apply the algorithms, convergence and performance tests to the SDE
$$
\begin{equation}
\text{d}X(t) = r X(t) (K - X(t)) \, \text{d}t + \beta X(t) \,\text{d}W(t), \qquad X(0) = X_0.
\end{equation}
$$
Use the parameters $r = 2, K = 1, \beta = 0.25, X_0 = 0.5$.
End of explanation |
2,345 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Persistent Homology of Sliding Windows
Now that we have heuristically explored the geometry of sliding window embeddings of 1D signals, we will apply tools from persistent homology to quantify the geometry. As before, we first need to import all necessary libraries and setup code to compute sliding window embeddings
Step2: Single Sine
Step3: Questions
Examining the effect of window extent on maximal persistence (with no noise)
Step4: Persistent Homology
Now, we will compute the persistent homology of the above signal following a sliding window embedding. Run this code. Then examine the outputs with both harmonic sinusoids and noncommensurate sinusoids, and note the difference in the persistence diagrams. Note that the two points with highest persistence are highlighted in red on the diagram.
Step5: Questions
Describe a key difference in the persistence diagrams between the harmonic and non-commensurate cases. Explain this difference in terms of the 3-D projection of the PCA embedding. (Hint
Step6: Now, we will look at PCA of the sliding window embeddings of the two signals
Step7: Notice how one looks more "twisted" than the other. To finish this off, let's compute TDA | Python Code:
# Do all of the imports and setup inline plotting
import numpy as np
%matplotlib notebook
import matplotlib.pyplot as plt
from matplotlib import gridspec
from mpl_toolkits.mplot3d import Axes3D
from sklearn.decomposition import PCA
from scipy.interpolate import InterpolatedUnivariateSpline
import ipywidgets as widgets
from IPython.display import display
import warnings
warnings.filterwarnings('ignore')
from IPython.display import clear_output
from ripser import ripser
from persim import plot_diagrams
def getSlidingWindow(x, dim, Tau, dT):
Return a sliding window of a time series,
using arbitrary sampling. Use linear interpolation
to fill in values in windows not on the original grid
Parameters
----------
x: ndarray(N)
The original time series
dim: int
Dimension of sliding window (number of lags+1)
Tau: float
Length between lags, in units of time series
dT: float
Length between windows, in units of time series
Returns
-------
X: ndarray(N, dim)
All sliding windows stacked up
N = len(x)
NWindows = int(np.floor((N-dim*Tau)/dT))
if NWindows <= 0:
print("Error: Tau too large for signal extent")
return np.zeros((3, dim))
X = np.zeros((NWindows, dim))
spl = InterpolatedUnivariateSpline(np.arange(N), x)
for i in range(NWindows):
idxx = dT*i + Tau*np.arange(dim)
start = int(np.floor(idxx[0]))
end = int(np.ceil(idxx[-1]))+2
# Only take windows that are within range
if end >= len(x):
X = X[0:i, :]
break
X[i, :] = spl(idxx)
return X
Explanation: Persistent Homology of Sliding Windows
Now that we have heuristically explored the geometry of sliding window embeddings of 1D signals, we will apply tools from persistent homology to quantify the geometry. As before, we first need to import all necessary libraries and setup code to compute sliding window embeddings
End of explanation
def on_value_change(change):
execute_computation1()
dimslider = widgets.IntSlider(min=1,max=100,value=20,description='Dimension:',continuous_update=False)
dimslider.observe(on_value_change, names='value')
Tauslider = widgets.FloatSlider(min=0.1,max=5,step=0.1,value=1,description=r'\(\tau :\)' ,continuous_update=False)
Tauslider.observe(on_value_change, names='value')
noiseampslider = widgets.FloatSlider(min=0,max=2,step=0.1,value=0,description='Noise Amplitude',continuous_update=False)
noiseampslider.observe(on_value_change, names='value')
display(widgets.HBox(( dimslider,Tauslider, noiseampslider)))
noise = np.random.randn(10000)
fig = plt.figure(figsize=(9.5, 4))
def execute_computation1():
plt.clf()
# Step 1: Setup the signal
T = 40 # The period in number of samples
NPeriods = 4 # How many periods to go through
N = T*NPeriods # The total number of samples
t = np.linspace(0, 2*np.pi*NPeriods, N+1)[0:N] # Sampling indices in time
x = np.cos(t) # The final signal
x += noiseampslider.value * noise[:len(x)]
# Step 2: Do a sliding window embedding
dim = dimslider.value
Tau = Tauslider.value
dT = 0.5
X = getSlidingWindow(x, dim, Tau, dT)
extent = Tau*dim
# Step 3: Do Rips Filtration
PDs = ripser(X, maxdim=1)['dgms']
I = PDs[1]
# Step 4: Perform PCA down to 2D for visualization
pca = PCA(n_components = 2)
Y = pca.fit_transform(X)
eigs = pca.explained_variance_
# Step 5: Plot original signal, 2-D projection, and the persistence diagram
gs = gridspec.GridSpec(2, 2)
ax = plt.subplot(gs[0,0])
ax.plot(x)
ax.set_ylim((2*min(x), 2*max(x)))
ax.set_title("Original Signal")
ax.set_xlabel("Sample Number")
yr = np.max(x)-np.min(x)
yr = [np.min(x)-0.1*yr, np.max(x)+0.1*yr]
ax.plot([extent, extent], yr, 'r')
ax.plot([0, 0], yr, 'r')
ax.plot([0, extent], [yr[0]]*2, 'r')
ax.plot([0, extent], [yr[1]]*2, 'r')
ax2 = plt.subplot(gs[1,0])
plot_diagrams(PDs)
plt.title("Max Persistence = %.3g"%np.max(I[:, 1] - I[:, 0]))
ax3 = plt.subplot(gs[:,1])
ax3.scatter(Y[:, 0], Y[:, 1])
plt.axis('equal')
plt.title("2-D PCA, Eigenvalues: %.3g, %.3g "%(eigs[0],eigs[1]))
plt.tight_layout()
execute_computation1()
Explanation: Single Sine: Maximum Persistence vs Window Size
First, let's examine the 1D persistent homology of the sliding window embedding of a single perfect sinusoid. Choose dim and $\tau$ to change the extent (window size) to different values, and examine how the maximum persistence changes. How does this support what you saw in the first module?
End of explanation
# Step 1: Setup the signal
T1 = 10 # The period of the first sine in number of samples
T2 = T1*3 # The period of the second sine in number of samples
NPeriods = 10 # How many periods to go through, relative to the second sinusoid
N = T2*NPeriods # The total number of samples
t = np.arange(N) # Time indices
x = np.cos(2*np.pi*(1.0/T1)*t) # The first sinusoid
x += np.cos(2*np.pi*(1.0/T2)*t) # The second sinusoid
plt.figure();
plt.plot(x);
Explanation: Questions
Examining the effect of window extent on maximal persistence (with no noise):
Describe the effect of dimension and $\tau$ on maximal persistence. What does increasing one of these factors while keeping the other constant to do the window extent? What does it do to the maximal persistence? Explain your observations.
Is the maximal persistence a function of the window extent? Justify your answer and explain it geometrically (you may want to refer to the PCA projection plots).
Describe the relation between the eigenvalues and the maximal persistence. (Hint: How do the eigenvalues affect roundness? How does roundness affect persistence?)
Write code to plot scatter plots of maximal persistence vs dimension for fixed $\tau$ and vs $\tau$ for fixed dimension, and maximal persistence vs. dimension for fixed extent (say, extent = 40). Comment on your results.
<br><br>
Now add some noise to your plots. Notice that the maximal persistence point on the persistence diagram is colored in red.
What do you observe regarding the persistence diagram? Explain your observations in terms of your understading of persistence.
At what noise amplitude does the point with maximal persistence appear to get 'swallowed up' by the noise in the diagram? How does this correspond with the 2-D projection?
Note that the original signal has amplitude 1. As you increase noise, is it clear by looking at the signal that there is a periodic function underlying it? How does persistence allow detection of periodicity? Explain.
For fixed noise amplitude (say 1), increase dimension. What effect does this have on detection of periodicity using your method?
Does varying $\tau$ for the same fixed amplitude have the same effect? Explain.
Two Sines
Now let's examine the persistent homology of a signal consisting of the sum of two sinusoids. First, setup and examine the signal. We will use a slightly coarser sampling rate than we did in the first module to keep the persistent homology code running quickly.
End of explanation
def on_value_change(change):
execute_computation3()
secondfreq = widgets.Dropdown(options=[ 2, 3, np.pi],value=3,description='Second Frequency:',disabled=False)
secondfreq.observe(on_value_change,names='value')
noiseampslider = widgets.FloatSlider(min=0,max=2,step=0.1,value=0,description='Noise Amplitude',continuous_update=False)
noiseampslider.observe(on_value_change, names='value')
dimslider = widgets.IntSlider(min=1,max=100,value=20,description='Dimension:',continuous_update=False)
dimslider.observe(on_value_change, names='value')
Tauslider = widgets.FloatSlider(min=0.1,max=5,step=0.1,value=1,description=r'\(\tau :\)' ,continuous_update=False)
Tauslider.observe(on_value_change, names='value')
display(widgets.HBox(( dimslider,Tauslider)))
display(widgets.HBox(( secondfreq,noiseampslider)))
noise = np.random.randn(10000)
fig = plt.figure(figsize=(9.5, 5))
def execute_computation3():
# Step 1: Setup the signal
T1 = 10 # The period of the first sine in number of samples
T2 = T1*secondfreq.value # The period of the second sine in number of samples
NPeriods = 5 # How many periods to go through, relative to the second sinusoid
N = T2*NPeriods # The total number of samples
t = np.arange(N) # Time indices
x = np.cos(2*np.pi*(1.0/T1)*t) # The first sinusoid
x += np.cos(2*np.pi*(1.0/T2)*t) # The second sinusoid
x += noiseampslider.value * noise[:len(x)]
#Step 2: Do a sliding window embedding
dim = dimslider.value
Tau = Tauslider.value
dT = 0.35
X = getSlidingWindow(x, dim, Tau, dT)
extent = Tau*dim
#Step 3: Do Rips Filtration
PDs = ripser(X, maxdim=1)['dgms']
#Step 4: Perform PCA down to 2D for visualization
pca = PCA()
Y = pca.fit_transform(X)
eigs = pca.explained_variance_
#Step 5: Plot original signal and the persistence diagram
gs = gridspec.GridSpec(3, 2,width_ratios=[1, 2],height_ratios=[2,2,1])
ax = plt.subplot(gs[0,1])
ax.plot(x)
ax.set_ylim((1.25*min(x), 1.25*max(x)))
ax.set_title("Original Signal")
ax.set_xlabel("Sample Number")
yr = np.max(x)-np.min(x)
yr = [np.min(x)-0.1*yr, np.max(x)+0.1*yr]
ax.plot([extent, extent], yr, 'r')
ax.plot([0, 0], yr, 'r')
ax.plot([0, extent], [yr[0]]*2, 'r')
ax.plot([0, extent], [yr[1]]*2, 'r')
ax2 = plt.subplot(gs[0:2,0])
plot_diagrams(PDs)
maxind = np.argpartition(PDs[1][:,1]-PDs[1][:,0], -2)[-2:]
max1 = PDs[1][maxind[1],1] - PDs[1][maxind[1],0]
max2 = PDs[1][maxind[0],1] - PDs[1][maxind[0],0]
ax2.set_title("Persistence Diagram\n Max Pers: %.3g 2nd Pers: %.3g"%(max1,max2) )
ax3 = plt.subplot(gs[2,0])
eigs = eigs[0:min(len(eigs), 10)]
ax3.bar(np.arange(len(eigs)), eigs)
ax3.set_xlabel("Eigenvalue Number")
ax3.set_ylabel("Eigenvalue")
ax3.set_title("PCA Eigenvalues")
c = plt.get_cmap('jet')
C = c(np.array(np.round(np.linspace(0, 255, Y.shape[0])), dtype=np.int32))
C = C[:, 0:3]
ax4 = fig.add_subplot(gs[1:,1], projection = '3d')
ax4.set_title("PCA of Sliding Window Embedding")
ax4.scatter(Y[:, 0], Y[:, 1], Y[:, 2], c=C)
ax4.set_aspect('equal', 'datalim')
plt.tight_layout()
execute_computation3()
Explanation: Persistent Homology
Now, we will compute the persistent homology of the above signal following a sliding window embedding. Run this code. Then examine the outputs with both harmonic sinusoids and noncommensurate sinusoids, and note the difference in the persistence diagrams. Note that the two points with highest persistence are highlighted in red on the diagram.
End of explanation
# Step 1: Setup the signal
T1 = 100 # The period of the first sine in number of samples
T2 = 50
NPeriods = 5 # How many periods to go through, relative to the first sinusoid
N = T1*NPeriods # The total number of samples
t = np.arange(N) # Time indices
coeff1 = 0.6
coeff2 = 0.8
g1 = coeff1*np.cos(2*np.pi*(1.0/T1)*t) # The first sinusoid
g1 += coeff2*np.cos(2*np.pi*(1.0/T2)*t) # The second sinusoid
g2 = coeff2*np.cos(2*np.pi*(1.0/T1)*t) # The first sinusoid
g2 += coeff1*np.cos(2*np.pi*(1.0/T2)*t) # The second sinusoid
fig = plt.figure(figsize=(9.5, 4))
plot1, = plt.plot(g1,label="g1 = %.2gcos(t) + %.2gcos(2t)"%(coeff1, coeff2))
plot2, = plt.plot(g2,color='r',label="g2 = %.2gcos(t) + %.2gcos(2t)"%(coeff2, coeff1));
plt.legend(handles=[plot1,plot2])
plt.legend(bbox_to_anchor=(0., 1.02, 0.69, .102), ncol=2);
Explanation: Questions
Describe a key difference in the persistence diagrams between the harmonic and non-commensurate cases. Explain this difference in terms of the 3-D projection of the PCA embedding. (Hint: consider the shape and the intrinsic dimension of the projection.)
<br><br>
Explain how the persistence diagram allows the detection of non-commensurate sinusoids.
<br><br>
Can the persistence diagram distinguish between a single sinusoid and the sum of two harmonic sinusoids?
<br><br>
Looking back at the 2-D projection of the PCA in the harmonic case from the first lab, explain why the persistence diagram might be surprising if you had only seen that projection. How does looking at the 3-D projection make the persistence diagram less of a surprise?
<h1>Field of Coefficients</h1>
<BR>
Now we will examine a surprising geometric property that is able to tell apart two signals which look quite similar. First, we generate and plot the two signals below:
$$g_1 = 0.6\cos(t) + 0.8\cos(2t)$$
$$g_2 = 0.8\cos(t) + 0.6\cos(2t)$$
End of explanation
####g1
#Step 2: Do a sliding window embedding
dim = 20
Tau = 5
dT = 2
X1 = getSlidingWindow(g1, dim, Tau, dT)
#Step 3: Perform PCA down to 2D for visualization
pca = PCA()
Y = pca.fit_transform(X1)
eigs = pca.explained_variance_
c = plt.get_cmap('jet')
C = c(np.array(np.round(np.linspace(0, 255, Y.shape[0])), dtype=np.int32))
C = C[:, 0:3]
#Step 4: Plot original signal and PCA of the embedding
fig = plt.figure(figsize=(9.5,6))
ax = fig.add_subplot(221)
ax.plot(g1)
ax.set_title("Original Signal")
ax.set_xlabel("Sample Index")
ax2 = fig.add_subplot(222, projection = '3d')
ax2.set_title("g1 = %.2gcos(t) + %.2gcos(2t)"%(coeff1, coeff2))
ax2.scatter(Y[:, 0], Y[:, 1], Y[:, 2], c=C)
ax2.set_aspect('equal', 'datalim')
#####g2
X2 = getSlidingWindow(g2, dim, Tau, dT)
#Perform PCA down to 2D for visualization
pca = PCA()
Y = pca.fit_transform(X2)
eigs = pca.explained_variance_
ax = fig.add_subplot(223)
ax.plot(g2)
ax.set_title("Original Signal")
ax.set_xlabel("Sample Index")
ax2 = fig.add_subplot(224, projection = '3d')
ax2.set_title("g2 = %.2gcos(t) + %.2gcos(2t)"%(coeff2, coeff1))
ax2.scatter(Y[:, 0], Y[:, 1], Y[:, 2], c=C)
ax2.set_aspect('equal', 'datalim')
plt.tight_layout();
Explanation: Now, we will look at PCA of the sliding window embeddings of the two signals
End of explanation
#Step 1: Do rips filtrations with different field coefficients
print("Computing persistence diagrams for g1...")
PDs1_2 = ripser(X1, maxdim=1, coeff=2)['dgms'] #Z2 Coefficients
PDs1_3 = ripser(X1, maxdim=1, coeff=3)['dgms'] #Z3 Coefficients
print("Computing persistence diagrams for g2...")
PDs2_2 = ripser(X2, maxdim=1, coeff=2)['dgms']
PDs2_3 = ripser(X2, maxdim=1, coeff=3)['dgms']
fig = plt.figure(figsize=(8, 6))
plt.subplot(231)
plt.plot(g1)
plt.subplot(232);
plot_diagrams(PDs1_2[1], labels=['H1'])
plt.title("$g_1$ Persistence Diagram $\mathbb{Z}/2\mathbb{Z}$")
plt.subplot(233);
plot_diagrams(PDs1_3[1], labels=['H1'])
plt.title("$g_1$ Persistence Diagram $\mathbb{Z}/3\mathbb{Z}$")
plt.subplot(234)
plt.plot(g2)
plt.subplot(235);
plot_diagrams(PDs2_2[1], labels=['H1'])
plt.title("$g_2$ Persistence Diagram $\mathbb{Z}/2\mathbb{Z}$")
plt.subplot(236);
plot_diagrams(PDs2_3[1])
plt.title("$g_2$ Persistence Diagram $\mathbb{Z}/3\mathbb{Z}$")
plt.tight_layout();
Explanation: Notice how one looks more "twisted" than the other. To finish this off, let's compute TDA
End of explanation |
2,346 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Работа 1.4. Исследование вынужденной прецессии гироскопа
Цель работы
Step1: Параметры установки
$f = 440$ Гц - резонансная частота.
$l = 12,1$ см - расстояние до крайней риски.
$T_{э} = 9 $ c - период эталона.
$M_{э} = 1618.9 \pm 0.5 $ г - масса эталона.
$R_{э} = 4$ см - радиус эталона.
$T_{г} = 7$ с - период гироскопа.
Теоретические формулы
$$\Omega = \frac{mgl}{J_0\omega_0}$$
$$J_э = \frac{M_эR_э^2}{2}$$
$$\frac{J_г}{J_э} = \left(\frac{T_г}{T_ц}\right)^2$$
Построение графика
Step2: Вычисление момента инерции | Python Code:
import numpy as np
import scipy as ps
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Работа 1.4. Исследование вынужденной прецессии гироскопа
Цель работы: исследовать вынужденную прецессию уравновешенного симметричного гироскопа; установить зависимость угловой скорости вынужденной прецессии от величины момента сил, действующих на ось гироскопа; по угловой скорости прецессии определить
угловую скорость вращения ротора гироскопа.
В работе используются: гироскоп в кардановом подвесе, секундомер, набор грузов, отдельный ротор гироскопа, цилиндр известной
массы, крутильный маятник, штангенциркуль, линейка.
End of explanation
data = pd.read_excel('lab-1-4.xlsx', 'table-1')
data.head(len(data))
x = data.values[:, 1]
y = data.values[:, 5]
dx = data.values[:, 2]
dy = data.values[:, 6]
k, b = np.polyfit(x, y, deg=1)
grid = np.linspace(0.0, np.max(x), 300)
plt.figure(figsize=(12, 8))
plt.grid(linestyle='--')
plt.title('Зависимость $\Omega$ от $M$', fontweight='bold', fontsize=20)
plt.xlabel('$M$, $\\frac{кг\\cdot м^2}{с^2}$', fontsize=16)
plt.ylabel('$\Omega$, $\\frac{рад}{с^2}$', fontsize=16)
plt.plot(grid, k * grid + b)
plt.errorbar(x, y, xerr=dx, yerr=dy, fmt='o')
plt.show()
Explanation: Параметры установки
$f = 440$ Гц - резонансная частота.
$l = 12,1$ см - расстояние до крайней риски.
$T_{э} = 9 $ c - период эталона.
$M_{э} = 1618.9 \pm 0.5 $ г - масса эталона.
$R_{э} = 4$ см - радиус эталона.
$T_{г} = 7$ с - период гироскопа.
Теоретические формулы
$$\Omega = \frac{mgl}{J_0\omega_0}$$
$$J_э = \frac{M_эR_э^2}{2}$$
$$\frac{J_г}{J_э} = \left(\frac{T_г}{T_ц}\right)^2$$
Построение графика
End of explanation
J_0 = 1.6189 * 0.04 ** 2.0 / 2.0
T_0 = 9.0
T_1 = 7.0
J_1 = J_0 * (T_1 / T_0) ** 2
print(J_1 * 10 ** 6)
Explanation: Вычисление момента инерции
End of explanation |
2,347 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Logistic regression with pyspark
Import data
Step1: Process categorical columns
The following code does three things with pipeline
Step2: Build StringIndexer stages
Step3: Build OneHotEncoder stages
Step4: Build VectorAssembler stage
Step5: Build pipeline model
Step6: Fit pipeline model
Step7: Transform data
Step8: Split data into training and test datasets
Step9: Build cross-validation model
Estimator
Step10: Parameter grid
Step11: Evaluator
Step12: Cross-validation model
Step13: Fit cross-validation model
Step14: Prediction
Step15: Prediction on training data
Step16: Prediction on test data
Step17: Intercept and coefficients of the regression model
Step18: Parameters from the best model | Python Code:
cuse = spark.read.csv('data/cuse_binary.csv', header=True, inferSchema=True)
cuse.show(5)
Explanation: Logistic regression with pyspark
Import data
End of explanation
from pyspark.ml.feature import StringIndexer, OneHotEncoder, VectorAssembler
from pyspark.ml import Pipeline
# categorical columns
categorical_columns = cuse.columns[0:3]
Explanation: Process categorical columns
The following code does three things with pipeline:
StringIndexer all categorical columns
OneHotEncoder all categorical index columns
VectorAssembler all feature columns into one vector column
Categorical columns
End of explanation
stringindexer_stages = [StringIndexer(inputCol=c, outputCol='strindexed_' + c) for c in categorical_columns]
# encode label column and add it to stringindexer_stages
stringindexer_stages += [StringIndexer(inputCol='y', outputCol='label')]
Explanation: Build StringIndexer stages
End of explanation
onehotencoder_stages = [OneHotEncoder(inputCol='strindexed_' + c, outputCol='onehot_' + c) for c in categorical_columns]
Explanation: Build OneHotEncoder stages
End of explanation
feature_columns = ['onehot_' + c for c in categorical_columns]
vectorassembler_stage = VectorAssembler(inputCols=feature_columns, outputCol='features')
Explanation: Build VectorAssembler stage
End of explanation
# all stages
all_stages = stringindexer_stages + onehotencoder_stages + [vectorassembler_stage]
pipeline = Pipeline(stages=all_stages)
Explanation: Build pipeline model
End of explanation
pipeline_model = pipeline.fit(cuse)
Explanation: Fit pipeline model
End of explanation
final_columns = feature_columns + ['features', 'label']
cuse_df = pipeline_model.transform(cuse).\
select(final_columns)
cuse_df.show(5)
Explanation: Transform data
End of explanation
training, test = cuse_df.randomSplit([0.8, 0.2], seed=1234)
Explanation: Split data into training and test datasets
End of explanation
from pyspark.ml.classification import LogisticRegression
logr = LogisticRegression(featuresCol='features', labelCol='label')
Explanation: Build cross-validation model
Estimator
End of explanation
from pyspark.ml.tuning import ParamGridBuilder
param_grid = ParamGridBuilder().\
addGrid(logr.regParam, [0, 0.5, 1, 2]).\
addGrid(logr.elasticNetParam, [0, 0.5, 1]).\
build()
Explanation: Parameter grid
End of explanation
from pyspark.ml.evaluation import BinaryClassificationEvaluator
evaluator = BinaryClassificationEvaluator(rawPredictionCol="rawPrediction")
Explanation: Evaluator
End of explanation
from pyspark.ml.tuning import CrossValidator
cv = CrossValidator(estimator=logr, estimatorParamMaps=param_grid, evaluator=evaluator, numFolds=4)
Explanation: Cross-validation model
End of explanation
cv_model = cv.fit(cuse_df)
Explanation: Fit cross-validation model
End of explanation
show_columns = ['features', 'label', 'prediction', 'rawPrediction', 'probability']
Explanation: Prediction
End of explanation
pred_training_cv = cv_model.transform(training)
pred_training_cv.select(show_columns).show(5, truncate=False)
Explanation: Prediction on training data
End of explanation
pred_test_cv = cv_model.transform(test)
pred_test_cv.select(show_columns).show(5, truncate=False)
Explanation: Prediction on test data
End of explanation
print('Intercept: ' + str(cv_model.bestModel.intercept) + "\n"
'coefficients: ' + str(cv_model.bestModel.coefficients))
Explanation: Intercept and coefficients of the regression model
End of explanation
print('The best RegParam is: ', cv_model.bestModel._java_obj.getRegParam(), "\n",
'The best ElasticNetParam is: cv_model.bestModel._java_obj.getElasticNetParam()')
Explanation: Parameters from the best model
End of explanation |
2,348 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression Week 2
Step1: Load in house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
Step2: Split data into training and testing.
We use seed=0 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you).
Step3: Learning a multiple regression model
Recall we can use the following code to learn a multiple regression model predicting 'price' based on the following features
Step4: Now that we have fitted the model we can extract the regression weights (coefficients) as an SFrame as follows
Step5: Making Predictions
In the gradient descent notebook we use numpy to do our regression. In this book we will use existing graphlab create functions to analyze multiple regressions.
Recall that once a model is built we can use the .predict() function to find the predicted values for data we pass. For example using the example model above
Step6: Compute RSS
Now that we can make predictions given the model, let's write a function to compute the RSS of the model. Complete the function below to calculate RSS given the model, data, and the outcome.
Step7: Test your function by computing the RSS on TEST data for the example model
Step8: Create some new features
Although we often think of multiple regression as including multiple different features (e.g. # of bedrooms, squarefeet, and # of bathrooms) but we can also consider transformations of existing features e.g. the log of the squarefeet or even "interaction" features such as the product of bedrooms and bathrooms.
You will use the logarithm function to create a new feature. so first you should import it from the math library.
Step9: Next create the following 4 new features as column in both TEST and TRAIN data
Step10: Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4) since 1^2 = 1 but 4^2 = 16. Consequently this feature will mostly affect houses with many bedrooms.
bedrooms times bathrooms gives what's called an "interaction" feature. It is large when both of them are large.
Taking the log of squarefeet has the effect of bringing large values closer together and spreading out small values.
Adding latitude to longitude is totally non-sensical but we will do it anyway (you'll see why)
Quiz Question
Step11: Learning Multiple Models
Now we will learn the weights for three (nested) models for predicting house prices. The first model will have the fewest features the second model will add one more feature and the third will add a few more
Step12: Now that you have the features, learn the weights for the three different models for predicting target = 'price' using graphlab.linear_regression.create() and look at the value of the weights/coefficients
Step13: Quiz Question
Step14: Quiz Question | Python Code:
import graphlab
Explanation: Regression Week 2: Multiple Regression (Interpretation)
The goal of this first notebook is to explore multiple regression and feature engineering with existing graphlab functions.
In this notebook you will use data on house sales in King County to predict prices using multiple regression. You will:
* Use SFrames to do some feature engineering
* Use built-in graphlab functions to compute the regression weights (coefficients/parameters)
* Given the regression weights, predictors and outcome write a function to compute the Residual Sum of Squares
* Look at coefficients and interpret their meanings
* Evaluate multiple models via RSS
Fire up graphlab create
End of explanation
sales = graphlab.SFrame('kc_house_data.gl/')
sales.head()
Explanation: Load in house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
End of explanation
train_data,test_data = sales.random_split(.8,seed=0)
print len(train_data)
print len(test_data)
Explanation: Split data into training and testing.
We use seed=0 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you).
End of explanation
example_features = ['sqft_living', 'bedrooms', 'bathrooms']
example_model = graphlab.linear_regression.create(train_data, target = 'price', features = example_features,
validation_set = None)
Explanation: Learning a multiple regression model
Recall we can use the following code to learn a multiple regression model predicting 'price' based on the following features:
example_features = ['sqft_living', 'bedrooms', 'bathrooms'] on training data with the following code:
(Aside: We set validation_set = None to ensure that the results are always the same)
End of explanation
example_weight_summary = example_model.get("coefficients")
print example_weight_summary
Explanation: Now that we have fitted the model we can extract the regression weights (coefficients) as an SFrame as follows:
End of explanation
example_predictions = example_model.predict(train_data)
print example_predictions[0] # should be 271789.505878
Explanation: Making Predictions
In the gradient descent notebook we use numpy to do our regression. In this book we will use existing graphlab create functions to analyze multiple regressions.
Recall that once a model is built we can use the .predict() function to find the predicted values for data we pass. For example using the example model above:
End of explanation
def get_residual_sum_of_squares(model, data, outcome):
# First get the predictions
predictions = model.predict(data)
# Then compute the residuals/errors
residual = outcome - predictions
# Then square and add them up
residual_squared = residual * residual
RSS = residual_squared.sum()
return(RSS)
Explanation: Compute RSS
Now that we can make predictions given the model, let's write a function to compute the RSS of the model. Complete the function below to calculate RSS given the model, data, and the outcome.
End of explanation
rss_example_train = get_residual_sum_of_squares(example_model, test_data, test_data['price'])
print rss_example_train # should be 2.7376153833e+14
Explanation: Test your function by computing the RSS on TEST data for the example model:
End of explanation
from math import log
Explanation: Create some new features
Although we often think of multiple regression as including multiple different features (e.g. # of bedrooms, squarefeet, and # of bathrooms) but we can also consider transformations of existing features e.g. the log of the squarefeet or even "interaction" features such as the product of bedrooms and bathrooms.
You will use the logarithm function to create a new feature. so first you should import it from the math library.
End of explanation
train_data['bedrooms_squared'] = train_data['bedrooms'].apply(lambda x: x**2)
test_data['bedrooms_squared'] = test_data['bedrooms'].apply(lambda x: x**2)
# create the remaining 3 features in both TEST and TRAIN data
train_data['bed_bath_rooms'] = train_data.apply(lambda x : x['bedrooms'] * x['bathrooms'])
test_data['bed_bath_rooms'] = test_data.apply(lambda x : x['bedrooms'] * x['bathrooms'])
train_data['log_sqft_living'] = train_data['sqft_living'].apply(lambda x : log(x))
test_data['log_sqft_living'] = test_data['sqft_living'].apply(lambda x : log(x))
train_data['lat_plus_long'] = train_data.apply(lambda x : x['lat'] + x['long'])
test_data['lat_plus_long'] = test_data.apply(lambda x : x['lat'] + x['long'])
Explanation: Next create the following 4 new features as column in both TEST and TRAIN data:
* bedrooms_squared = bedrooms*bedrooms
* bed_bath_rooms = bedrooms*bathrooms
* log_sqft_living = log(sqft_living)
* lat_plus_long = lat + long
As an example here's the first one:
End of explanation
print 'Bedrooms Squared: ' + str(round(test_data['bedrooms_squared'].mean(), 2))
print 'Bed Bath Rooms: ' + str(round(test_data['bed_bath_rooms'].mean(), 2))
print 'Log Sqft Living: ' + str(round(test_data['log_sqft_living'].mean(), 2))
print 'Lat Plus Long: ' + str(round(test_data['lat_plus_long'].mean(), 2))
Explanation: Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4) since 1^2 = 1 but 4^2 = 16. Consequently this feature will mostly affect houses with many bedrooms.
bedrooms times bathrooms gives what's called an "interaction" feature. It is large when both of them are large.
Taking the log of squarefeet has the effect of bringing large values closer together and spreading out small values.
Adding latitude to longitude is totally non-sensical but we will do it anyway (you'll see why)
Quiz Question: What is the mean (arithmetic average) value of your 4 new features on TEST data? (round to 2 digits)
End of explanation
model_1_features = ['sqft_living', 'bedrooms', 'bathrooms', 'lat', 'long']
model_2_features = model_1_features + ['bed_bath_rooms']
model_3_features = model_2_features + ['bedrooms_squared', 'log_sqft_living', 'lat_plus_long']
Explanation: Learning Multiple Models
Now we will learn the weights for three (nested) models for predicting house prices. The first model will have the fewest features the second model will add one more feature and the third will add a few more:
* Model 1: squarefeet, # bedrooms, # bathrooms, latitude & longitude
* Model 2: add bedrooms*bathrooms
* Model 3: Add log squarefeet, bedrooms squared, and the (nonsensical) latitude + longitude
End of explanation
# Learn the three models: (don't forget to set validation_set = None)
# model 1
model_1_features_model = graphlab.linear_regression.create(train_data, target = 'price', features = model_1_features,
validation_set = None)
# model 2
model_2_features_model = graphlab.linear_regression.create(train_data, target = 'price', features = model_2_features,
validation_set = None)
# model 3
model_3_features_model = graphlab.linear_regression.create(train_data, target = 'price', features = model_3_features,
validation_set = None)
# Examine/extract each model's coefficients:
model_1_features_weight_summary = model_1_features_model.get("coefficients")
print "Model #1"
print model_1_features_weight_summary
model_2_features_weight_summary = model_2_features_model.get("coefficients")
print "Model #2"
print model_2_features_weight_summary
model_3_features_weight_summary = model_3_features_model.get("coefficients")
print "Model #3"
print model_3_features_weight_summary
Explanation: Now that you have the features, learn the weights for the three different models for predicting target = 'price' using graphlab.linear_regression.create() and look at the value of the weights/coefficients:
End of explanation
# Compute the RSS on TRAINING data for each of the three models and record the values:
rss_model_1_train = get_residual_sum_of_squares(model_1_features_model, train_data, train_data['price'])
print "Model #1"
print rss_model_1_train
rss_model_2_train = get_residual_sum_of_squares(model_2_features_model, train_data, train_data['price'])
print "Model #2"
print rss_model_2_train
rss_model_3_train = get_residual_sum_of_squares(model_3_features_model, train_data, train_data['price'])
print "Model #3"
print rss_model_3_train
Explanation: Quiz Question: What is the sign (positive or negative) for the coefficient/weight for 'bathrooms' in model 1?
Quiz Question: What is the sign (positive or negative) for the coefficient/weight for 'bathrooms' in model 2?
Think about what this means.
Comparing multiple models
Now that you've learned three models and extracted the model weights we want to evaluate which model is best.
First use your functions from earlier to compute the RSS on TRAINING Data for each of the three models.
End of explanation
# Compute the RSS on TESTING data for each of the three models and record the values:
rss_model_1_test = get_residual_sum_of_squares(model_1_features_model, test_data, test_data['price'])
print "Model #1"
print rss_model_1_test
rss_model_2_test = get_residual_sum_of_squares(model_2_features_model, test_data, test_data['price'])
print "Model #2"
print rss_model_2_test
rss_model_3_test = get_residual_sum_of_squares(model_3_features_model, test_data, test_data['price'])
print "Model #3"
print rss_model_3_test
Explanation: Quiz Question: Which model (1, 2 or 3) has lowest RSS on TRAINING Data? Is this what you expected?
Now compute the RSS on on TEST data for each of the three models.
End of explanation |
2,349 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lists
Lists are collections of heterogeneous objects, which can be of any type, including other lists.
Lists in the Python are mutable and can be changed at any time. Lists can be sliced in the same way as strings, but as the lists are mutable, it is possible to make assignments to the list items.
Syntax
Step1: NOTE
Step2: Removing
Step3: Appending
Step4: Ordering
Step5: Inverting
Step6: The function enumerate() returns a tuple of two elements in each iteration
Step7: The sort (sort) and reversal (reverse) operations are performed in the list and do not create new lists.
Tuples
Similar to lists, but immutable | Python Code:
fruits = ['Apple', 'Mango', 'Grapes', 'Jackfruit',
'Apple', 'Banana', 'Grapes', [1, "Orange"]]
# processing the entire list
for fruit in fruits:
print(fruit, end=", ")
#
print("*"*30)
fruits.insert(0, "kiwi")
print( fruits)
# help(fruits.insert)
# Including
ft1 = list(fruits)
print(id(ft1))
print(id(fruits))
ft1 = fruits[:]
print(id(ft1))
print(id(fruits))
ft2 = fruits
print(id(ft2))
print(id(fruits))
fruits.append('Camel')
print(fruits)
fruits.append(['kiwi', 'Apple', 'Camel'])
print(fruits)
fruits.extend(['kiwi', 'Apple', 'Camel'])
print(fruits)
Explanation: Lists
Lists are collections of heterogeneous objects, which can be of any type, including other lists.
Lists in the Python are mutable and can be changed at any time. Lists can be sliced in the same way as strings, but as the lists are mutable, it is possible to make assignments to the list items.
Syntax:
python
list = [a, b, ..., z]
Common operations with lists:
End of explanation
fruits.extend(['kiwi', ['Apple', 'Camel']])
print(fruits)
Explanation: NOTE: Only one level of extending happens, apple and camel are still in sub-list
End of explanation
## Removing the second instance of Grapes
x = 0
y = 0
for fruit in fruits:
if x == 1 and fruit == 'Grapes':
# del (fruits[y])
fruits.pop(y)
elif fruit == 'Grapes':
x = 1
y +=1
print(fruits)
fruits.remove('Grapes')
Explanation: Removing
End of explanation
print(fruits)
fruits.append("Grapes")
Explanation: Appending
End of explanation
# These will work on only homogeneous list and will fail for heterogeneous
fruits.sort()
print(fruits)
Explanation: Ordering
End of explanation
fruits.reverse()
print(fruits)
# # # prints with number order
fruits = ['Apple', 'Mango', 'Grapes', 'Jackfruit',
'Apple', 'Banana', 'Grapes']
for i, prog in enumerate(fruits):
print( i + 1, '=>', prog)
Explanation: Inverting
End of explanation
my_list = ['A', 'B', 'C']
for a, b in enumerate(my_list):
print(a, b)
my_list = ['A', 'B', 'C']
print ('list:', my_list)
# # The empty list is evaluated as false
while my_list:
# In queues, the first item is the first to go out
# pop(0) removes and returns the first item
print ('Left', my_list.pop(0), ', remain', len(my_list), my_list)
my_list.append("G")
# # More items on the list
my_list += ['D', 'E', 'F']
print ('list:', my_list)
while my_list:
# On stacks, the first item is the last to go out
# pop() removes and retorns the last item
print ('Left', my_list.pop(), ', remain', len(my_list), my_list)
l = ['D', 'E', 'F', "G", "H"]
print(l)
k = ('D', "E", "G", "H")
print(dir(l))
print("*"*8)
print(dir(k))
Explanation: The function enumerate() returns a tuple of two elements in each iteration: a sequence number and an item from the corresponding sequence.
The list has a pop() method that helps the implementation of queues and stacks:
End of explanation
t = ([1, 2], 4)
print(t)
print(" :: Error :: ")
t[0] = 3
print(t)
t[0] = [1, 2, 3]
print(t)
t[0].append(3)
print(t)
t[0][0] = [1, 2, 3]
print(t)
ta = (1, 2, 3, 4, 5)
for a in ta:
print (a)
ta1 = [1, 2, 3, 4, 5]
for a in ta1:
print(a)
Explanation: The sort (sort) and reversal (reverse) operations are performed in the list and do not create new lists.
Tuples
Similar to lists, but immutable: it's not possible to append, delete or make assignments to the items.
Syntax:
my_tuple = (a, b, ..., z)
The parentheses are optional.
Feature: a tuple with only one element is represented as:
t1 = (1,)
The tuple elements can be referenced the same way as the elements of a list:
first_element = tuple[0]
Lists can be converted into tuples:
my_tuple = tuple(my_list)
And tuples can be converted into lists:
my_list = list(my_tuple)
While tuple can contain mutable elements, these elements can not undergo assignment, as this would change the reference to the object.
Example (using the interactive mode):
End of explanation |
2,350 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Text Analysis with NLTK
Author
Step1: 1. Corpus acquisition.
In these notebooks we will explore some tools for text analysis and two topic modeling algorithms available from Python toolboxes.
To do so, we will explore and analyze collections of Wikipedia articles from a given category, using wikitools, that makes the capture of content from wikimedia sites very easy.
(As a side note, there are many other available text collections to work with. In particular, the NLTK library has many examples, that you can explore using the nltk.download() tool.
import nltk
nltk.download()
for instance, you can take the gutemberg dataset
Mycorpus = nltk.corpus.gutenberg
text_name = Mycorpus.fileids()[0]
raw = Mycorpus.raw(text_name)
Words = Mycorpus.words(text_name)
Also, tools like Gensim or Sci-kit learn include text databases to work with).
In order to use Wikipedia data, we will select a single category of articles
Step2: You can try with any other categories, but take into account that some categories may contain very few articles. Select a category with at least 100 articles. You can browse the wikipedia category tree here, https
Step3: Now, we have stored the whole text collection in two lists
Step4: 2. Corpus Processing
Topic modelling algorithms process vectorized data. In order to apply them, we need to transform the raw text input data into a vector representation. To do so, we will remove irrelevant information from the text data and preserve as much relevant information as possible to capture the semantic content in the document collection.
Thus, we will proceed with the following steps
Step5: Task
Step6: 2.2. Homogeneization
By looking at the tokenized corpus you may verify that there are many tokens that correspond to punktuation signs and other symbols that are not relevant to analyze the semantic content. They can be removed using the stemming tool from nltk.
The homogeneization process will consist of
Step7: 2.2.2. Stemming vs Lemmatization
At this point, we can choose between applying a simple stemming or ussing lemmatization. We will try both to test their differences.
Task
Step8: Alternatively, we can apply lemmatization. For english texts, we can use the lemmatizer from NLTK, which is based on WordNet. If you have not used wordnet before, you will likely need to download it from nltk
Step9: Task
Step10: One of the advantages of the lemmatizer method is that the result of lemmatization is still a true word, which is more advisable for the presentation of text processing results and lemmatization.
However, without using contextual information, lemmatize() does not remove grammatical differences. This is the reason why "is" or "are" are preserved and not replaced by infinitive "be".
As an alternative, we can apply .lemmatize(word, pos), where 'pos' is a string code specifying the part-of-speech (pos), i.e. the grammatical role of the words in its sentence. For instance, you can check the difference between wnl.lemmatize('is') and wnl.lemmatize('is, pos='v').
2.3. Cleaning
The third step consists of removing those words that are very common in language and do not carry out usefull semantic content (articles, pronouns, etc).
Once again, we might need to load the stopword files using the download tools from nltk
Step11: Task
Step12: 2.4. Vectorization
Up to this point, we have transformed the raw text collection of articles in a list of articles, where each article is a collection of the word roots that are most relevant for semantic analysis. Now, we need to convert these data (a list of token lists) into a numerical representation (a list of vectors, or a matrix). To do so, we will start using the tools provided by the gensim library.
As a first step, we create a dictionary containing all tokens in our text corpus, and assigning an integer identifier to each one of them.
Step13: In the second step, let us create a numerical version of our corpus using the doc2bow method. In general, D.doc2bow(token_list) transform any list of tokens into a list of tuples (token_id, n), one per each token in token_list, where token_id is the token identifier (according to dictionary D) and n is the number of occurrences of such token in token_list.
Task
Step14: At this point, it is good to make sure to understand what has happened. In corpus_clean we had a list of token lists. With it, we have constructed a Dictionary, D, which assign an integer identifier to each token in the corpus.
After that, we have transformed each article (in corpus_clean) in a list tuples (id, n).
Step15: Note that we can interpret each element of corpus_bow as a sparse_vector. For example, a list of tuples
[(0, 1), (3, 3), (5,2)]
for a dictionary of 10 elements can be represented as a vector, where any tuple (id, n) states that position id must take value n. The rest of positions must be zero.
[1, 0, 0, 3, 0, 2, 0, 0, 0, 0]
These sparse vectors will be the inputs to the topic modeling algorithms.
Note that, at this point, we have built a Dictionary containing
Step16: and a bow representation of a corpus with
Step17: Before starting with the semantic analyisis, it is interesting to observe the token distribution for the given corpus.
Step18: ids_sorted is a list of all token ids, sorted by decreasing number of occurrences in the whole corpus. For instance, the most frequent term is
Step19: which appears
Step20: In the following we plot the most frequent terms in the corpus.
Step21: Exercise
Step22: Exercise
Step23: Exercise
Step24: Exercise (All in one)
Step25: Exercise (Visualizing categories)
Step26: Exercise (bigrams)
Step27: 2.4. Saving results
The dictionary D and the Bag of Words in corpus_bow are the key inputs to the topic model algorithms analyzed in the following notebook. Save them to be ready to use them during the next session. | Python Code:
%matplotlib inline
# Required imports
from wikitools import wiki
from wikitools import category
import nltk
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
import numpy as np
import matplotlib.pyplot as plt
from test_helper import Test
import lda
import lda.datasets
import gensim
Explanation: Text Analysis with NLTK
Author: Jesús Cid-Sueiro
Date: 2016/04/03
Last review: 2016/11/16
End of explanation
site = wiki.Wiki("https://en.wikipedia.org/w/api.php")
# Select a category with a reasonable number of articles (>100)
cat = "Economics"
# cat = "Pseudoscience"
print cat
Explanation: 1. Corpus acquisition.
In these notebooks we will explore some tools for text analysis and two topic modeling algorithms available from Python toolboxes.
To do so, we will explore and analyze collections of Wikipedia articles from a given category, using wikitools, that makes the capture of content from wikimedia sites very easy.
(As a side note, there are many other available text collections to work with. In particular, the NLTK library has many examples, that you can explore using the nltk.download() tool.
import nltk
nltk.download()
for instance, you can take the gutemberg dataset
Mycorpus = nltk.corpus.gutenberg
text_name = Mycorpus.fileids()[0]
raw = Mycorpus.raw(text_name)
Words = Mycorpus.words(text_name)
Also, tools like Gensim or Sci-kit learn include text databases to work with).
In order to use Wikipedia data, we will select a single category of articles:
End of explanation
# Loading category data. This may take a while
print "Loading category data. This may take a while..."
cat_data = category.Category(site, cat)
corpus_titles = []
corpus_text = []
for n, page in enumerate(cat_data.getAllMembersGen()):
print "\r Loading article {0}".format(n + 1),
corpus_titles.append(page.title)
corpus_text.append(page.getWikiText())
n_art = len(corpus_titles)
print "\nLoaded " + str(n_art) + " articles from category " + cat
Explanation: You can try with any other categories, but take into account that some categories may contain very few articles. Select a category with at least 100 articles. You can browse the wikipedia category tree here, https://en.wikipedia.org/wiki/Category:Contents, for instance, and select the appropriate one.
We start downloading the text collection.
End of explanation
# n = 5
# print corpus_titles[n]
# print corpus_text[n]
Explanation: Now, we have stored the whole text collection in two lists:
corpus_titles, which contains the titles of the selected articles
corpus_text, with the text content of the selected wikipedia articles
You can browse the content of the wikipedia articles to get some intuition about the kind of documents that will be processed.
End of explanation
# You can comment this if the package is already available.
# Select option "d) Download", and identifier "punkt"
# nltk.download()
Explanation: 2. Corpus Processing
Topic modelling algorithms process vectorized data. In order to apply them, we need to transform the raw text input data into a vector representation. To do so, we will remove irrelevant information from the text data and preserve as much relevant information as possible to capture the semantic content in the document collection.
Thus, we will proceed with the following steps:
Tokenization
Homogeneization
Cleaning
Vectorization
2.1. Tokenization
For the first steps, we will use some of the powerfull methods available from the Natural Language Toolkit. In order to use the word_tokenize method from nltk, you might need to get the appropriate libraries using nltk.download(). You must select option "d) Download", and identifier "punkt"
End of explanation
corpus_tokens = []
for n, art in enumerate(corpus_text):
print "\rTokenizing article {0} out of {1}".format(n + 1, n_art),
# This is to make sure that all characters have the appropriate encoding.
art = art.decode('utf-8')
# Tokenize each text entry.
# scode: tokens = <FILL IN>
# Add the new token list as a new element to corpus_tokens (that will be a list of lists)
# scode: <FILL IN>
print "\n The corpus has been tokenized. Let's check some portion of the first article:"
print corpus_tokens[0][0:30]
Test.assertEquals(len(corpus_tokens), n_art, "The number of articles has changed unexpectedly")
Test.assertTrue(len(corpus_tokens) >= 100,
"Your corpus_tokens has less than 100 articles. Consider using a larger dataset")
Explanation: Task: Insert the appropriate call to word_tokenize in the code below, in order to get the tokens list corresponding to each Wikipedia article:
End of explanation
corpus_filtered = []
for n, token_list in enumerate(corpus_tokens):
print "\rFiltering article {0} out of {1}".format(n + 1, n_art),
# Convert all tokens in token_list to lowercase, remove non alfanumeric tokens and stem.
# Store the result in a new token list, clean_tokens.
# scode: filtered_tokens = <FILL IN>
# Add art to corpus_filtered
# scode: <FILL IN>
print "\nLet's check the first tokens from document 0 after filtering:"
print corpus_filtered[0][0:30]
Test.assertTrue(all([c==c.lower() for c in corpus_filtered[23]]), 'Capital letters have not been removed')
Test.assertTrue(all([c.isalnum() for c in corpus_filtered[13]]), 'Non alphanumeric characters have not been removed')
Explanation: 2.2. Homogeneization
By looking at the tokenized corpus you may verify that there are many tokens that correspond to punktuation signs and other symbols that are not relevant to analyze the semantic content. They can be removed using the stemming tool from nltk.
The homogeneization process will consist of:
Removing capitalization: capital alphabetic characters will be transformed to their corresponding lowercase characters.
Removing non alphanumeric tokens (e.g. punktuation signs)
Stemming/Lemmatization: removing word terminations to preserve the root of the words and ignore grammatical information.
2.2.1. Filtering
Let us proceed with the filtering steps 1 and 2 (removing capitalization and non-alphanumeric tokens).
Task: Convert all tokens in corpus_tokens to lowercase (using .lower() method) and remove non alphanumeric tokens (that you can detect with .isalnum() method). You can do it in a single line of code...
End of explanation
# Select stemmer.
stemmer = nltk.stem.SnowballStemmer('english')
corpus_stemmed = []
for n, token_list in enumerate(corpus_filtered):
print "\rStemming article {0} out of {1}".format(n + 1, n_art),
# Apply stemming to all tokens in token_list and save them in stemmed_tokens
# scode: stemmed_tokens = <FILL IN>
# Add stemmed_tokens to the stemmed corpus
# scode: <FILL IN>
print "\nLet's check the first tokens from document 0 after stemming:"
print corpus_stemmed[0][0:30]
Test.assertTrue((len([c for c in corpus_stemmed[0] if c!=stemmer.stem(c)]) < 0.1*len(corpus_stemmed[0])),
'It seems that stemming has not been applied properly')
Explanation: 2.2.2. Stemming vs Lemmatization
At this point, we can choose between applying a simple stemming or ussing lemmatization. We will try both to test their differences.
Task: Apply the .stem() method, from the stemmer object created in the first line, to corpus_filtered.
End of explanation
# You can comment this if the package is already available.
# Select option "d) Download", and identifier "wordnet"
# nltk.download()
Explanation: Alternatively, we can apply lemmatization. For english texts, we can use the lemmatizer from NLTK, which is based on WordNet. If you have not used wordnet before, you will likely need to download it from nltk
End of explanation
wnl = WordNetLemmatizer()
# Select stemmer.
corpus_lemmat = []
for n, token_list in enumerate(corpus_filtered):
print "\rLemmatizing article {0} out of {1}".format(n + 1, n_art),
# scode: lemmat_tokens = <FILL IN>
# Add art to the stemmed corpus
# scode: <FILL IN>
print "\nLet's check the first tokens from document 0 after lemmatization:"
print corpus_lemmat[0][0:30]
Explanation: Task: Apply the .lemmatize() method, from the WordNetLemmatizer object created in the first line, to corpus_filtered.
End of explanation
# You can comment this if the package is already available.
# Select option "d) Download", and identifier "stopwords"
# nltk.download()
Explanation: One of the advantages of the lemmatizer method is that the result of lemmatization is still a true word, which is more advisable for the presentation of text processing results and lemmatization.
However, without using contextual information, lemmatize() does not remove grammatical differences. This is the reason why "is" or "are" are preserved and not replaced by infinitive "be".
As an alternative, we can apply .lemmatize(word, pos), where 'pos' is a string code specifying the part-of-speech (pos), i.e. the grammatical role of the words in its sentence. For instance, you can check the difference between wnl.lemmatize('is') and wnl.lemmatize('is, pos='v').
2.3. Cleaning
The third step consists of removing those words that are very common in language and do not carry out usefull semantic content (articles, pronouns, etc).
Once again, we might need to load the stopword files using the download tools from nltk
End of explanation
corpus_clean = []
stopwords_en = stopwords.words('english')
n = 0
for token_list in corpus_stemmed:
n += 1
print "\rRemoving stopwords from article {0} out of {1}".format(n, n_art),
# Remove all tokens in the stopwords list and append the result to corpus_clean
# scode: clean_tokens = <FILL IN>
# scode: <FILL IN>
print "\n Let's check tokens after cleaning:"
print corpus_clean[0][0:30]
Test.assertTrue(len(corpus_clean) == n_art, 'List corpus_clean does not contain the expected number of articles')
Test.assertTrue(len([c for c in corpus_clean[0] if c in stopwords_en])==0, 'Stopwords have not been removed')
Explanation: Task: In the second line below we read a list of common english stopwords. Clean corpus_stemmed by removing all tokens in the stopword list.
End of explanation
# Create dictionary of tokens
D = gensim.corpora.Dictionary(corpus_clean)
n_tokens = len(D)
print "The dictionary contains {0} tokens".format(n_tokens)
print "First tokens in the dictionary: "
for n in range(10):
print str(n) + ": " + D[n]
Explanation: 2.4. Vectorization
Up to this point, we have transformed the raw text collection of articles in a list of articles, where each article is a collection of the word roots that are most relevant for semantic analysis. Now, we need to convert these data (a list of token lists) into a numerical representation (a list of vectors, or a matrix). To do so, we will start using the tools provided by the gensim library.
As a first step, we create a dictionary containing all tokens in our text corpus, and assigning an integer identifier to each one of them.
End of explanation
# Transform token lists into sparse vectors on the D-space
# scode: corpus_bow = <FILL IN>
Test.assertTrue(len(corpus_bow)==n_art, 'corpus_bow has not the appropriate size')
Explanation: In the second step, let us create a numerical version of our corpus using the doc2bow method. In general, D.doc2bow(token_list) transform any list of tokens into a list of tuples (token_id, n), one per each token in token_list, where token_id is the token identifier (according to dictionary D) and n is the number of occurrences of such token in token_list.
Task: Apply the doc2bow method from gensim dictionary D, to all tokens in every article in corpus_clean. The result must be a new list named corpus_bow where each element is a list of tuples (token_id, number_of_occurrences).
End of explanation
print "Original article (after cleaning): "
print corpus_clean[0][0:30]
print "Sparse vector representation (first 30 components):"
print corpus_bow[0][0:30]
print "The first component, {0} from document 0, states that token 0 ({1}) appears {2} times".format(
corpus_bow[0][0], D[0], corpus_bow[0][0][1])
Explanation: At this point, it is good to make sure to understand what has happened. In corpus_clean we had a list of token lists. With it, we have constructed a Dictionary, D, which assign an integer identifier to each token in the corpus.
After that, we have transformed each article (in corpus_clean) in a list tuples (id, n).
End of explanation
print "{0} tokens".format(len(D))
Explanation: Note that we can interpret each element of corpus_bow as a sparse_vector. For example, a list of tuples
[(0, 1), (3, 3), (5,2)]
for a dictionary of 10 elements can be represented as a vector, where any tuple (id, n) states that position id must take value n. The rest of positions must be zero.
[1, 0, 0, 3, 0, 2, 0, 0, 0, 0]
These sparse vectors will be the inputs to the topic modeling algorithms.
Note that, at this point, we have built a Dictionary containing
End of explanation
print "{0} Wikipedia articles".format(len(corpus_bow))
Explanation: and a bow representation of a corpus with
End of explanation
# SORTED TOKEN FREQUENCIES (I):
# Create a "flat" corpus with all tuples in a single list
corpus_bow_flat = [item for sublist in corpus_bow for item in sublist]
# Initialize a numpy array that we will use to cont tokens.
# token_count[n] should store the number of ocurrences of the n-th token, D[n]
token_count = np.zeros(n_tokens)
# Count the number of occurrences of each token.
for x in corpus_bow_flat:
# Update the proper element in token_count
# scode: <FILL IN>
# Sort by decreasing number of occurences
ids_sorted = np.argsort(- token_count)
tf_sorted = token_count[ids_sorted]
Explanation: Before starting with the semantic analyisis, it is interesting to observe the token distribution for the given corpus.
End of explanation
print D[ids_sorted[0]]
Explanation: ids_sorted is a list of all token ids, sorted by decreasing number of occurrences in the whole corpus. For instance, the most frequent term is
End of explanation
print "{0} times in the whole corpus".format(tf_sorted[0])
Explanation: which appears
End of explanation
# SORTED TOKEN FREQUENCIES (II):
plt.rcdefaults()
# Example data
n_bins = 25
hot_tokens = [D[i] for i in ids_sorted[n_bins-1::-1]]
y_pos = np.arange(len(hot_tokens))
z = tf_sorted[n_bins-1::-1]/n_art
plt.figure()
plt.barh(y_pos, z, align='center', alpha=0.4)
plt.yticks(y_pos, hot_tokens)
plt.xlabel('Average number of occurrences per article')
plt.title('Token distribution')
plt.show()
# SORTED TOKEN FREQUENCIES:
# Example data
plt.figure()
plt.semilogy(tf_sorted)
plt.ylabel('Total number of occurrences')
plt.xlabel('Token rank')
plt.title('Token occurrences')
plt.show()
Explanation: In the following we plot the most frequent terms in the corpus.
End of explanation
# scode: cold_tokens = <FILL IN>
print "There are {0} cold tokens, which represent {1}% of the total number of tokens in the dictionary".format(
len(cold_tokens), float(len(cold_tokens))/n_tokens*100)
Explanation: Exercise: There are usually many tokens that appear with very low frequency in the corpus. Count the number of tokens appearing only once, and what is the proportion of them in the token list.
End of explanation
# scode: <WRITE YOUR CODE HERE>
Explanation: Exercise: Represent graphically those 20 tokens that appear in the highest number of articles. Note that you can use the code above (headed by # SORTED TOKEN FREQUENCIES) with a very minor modification.
End of explanation
# scode: <WRITE YOUR CODE HERE>
Explanation: Exercise: Count the number of tokens appearing only in a single article.
End of explanation
# scode: <WRITE YOUR CODE HERE>
Explanation: Exercise (All in one): Note that, for pedagogical reasons, we have used a different for loop for each text processing step creating a new corpus_xxx variable after each step. For very large corpus, this could cause memory problems.
As a summary exercise, repeat the whole text processing, starting from corpus_text up to computing the bow, with the following modifications:
Use a single for loop, avoiding the creation of any intermediate corpus variables.
Use lemmatization instead of stemming.
Remove all tokens appearing in only one document and less than 2 times.
Save the result in a new variable corpus_bow1.
End of explanation
# scode: <WRITE YOUR CODE HERE>
Explanation: Exercise (Visualizing categories): Repeat the previous exercise with a second wikipedia category. For instance, you can take "communication".
Save the result in variable corpus_bow2.
Determine the most frequent terms in corpus_bow1 (term1) and corpus_bow2 (term2).
Transform each article in corpus_bow1 and corpus_bow2 into a 2 dimensional vector, where the first component is the frecuency of term1 and the second component is the frequency of term2
Draw a dispersion plot of all 2 dimensional points, using a different marker for each corpus. Could you differentiate both corpora using the selected terms only? What if the 2nd most frequent term is used?
End of explanation
# scode: <WRITE YOUR CODE HERE>
# Check the code below to see how ngrams works, and adapt it to solve the exercise.
# from nltk.util import ngrams
# sentence = 'this is a foo bar sentences and i want to ngramize it'
# sixgrams = ngrams(sentence.split(), 2)
# for grams in sixgrams:
# print grams
Explanation: Exercise (bigrams): nltk provides an utility to compute n-grams from a list of tokens, in nltk.util.ngrams. Join all tokens in corpus_clean in a single list and compute the bigrams. Plot the 20 most frequent bigrams in the corpus.
End of explanation
import pickle
data = {}
data['D'] = D
data['corpus_bow'] = corpus_bow
pickle.dump(data, open("wikiresults.p", "wb"))
Explanation: 2.4. Saving results
The dictionary D and the Bag of Words in corpus_bow are the key inputs to the topic model algorithms analyzed in the following notebook. Save them to be ready to use them during the next session.
End of explanation |
2,351 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2A.ML101.3
Step1: We'll re-use some of our code from before to visualize the data and remind us what
we're looking at
Step2: Visualizing the Data
A good first-step for many problems is to visualize the data using a
Dimensionality Reduction technique. We'll start with the
most straightforward one, Principal Component Analysis (PCA).
PCA seeks orthogonal linear combinations of the features which show the greatest
variance, and as such, can help give you a good idea of the structure of the
data set. Here we'll use RandomizedPCA, because it's faster for large N.
Step3: Question
Step4: Question
Step5: Quantitative Measurement of Performance
We'd like to measure the performance of our estimator without having to resort
to plotting examples. A simple method might be to simply compare the number of
matches
Step6: We see that nearly 1500 of the 1800 predictions match the input. But there are other
more sophisticated metrics that can be used to judge the performance of a classifier
Step7: Another enlightening metric for this sort of multi-label classification
is a confusion matrix | Python Code:
from sklearn.datasets import load_digits
digits = load_digits()
Explanation: 2A.ML101.3: Supervised Learning: Classification of Handwritten Digits
In this section we'll apply scikit-learn to the classification of handwritten
digits. This will go a bit beyond the iris classification we saw before: we'll
discuss some of the metrics which can be used in evaluating the effectiveness
of a classification model.
Source: Course on machine learning with scikit-learn by Gaël Varoquaux
End of explanation
%matplotlib inline
from matplotlib import pyplot as plt
fig = plt.figure(figsize=(6, 6)) # figure size in inches
fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
# plot the digits: each image is 8x8 pixels
for i in range(64):
ax = fig.add_subplot(8, 8, i + 1, xticks=[], yticks=[])
ax.imshow(digits.images[i], cmap=plt.cm.binary, interpolation='nearest')
# label the image with the target value
ax.text(0, 7, str(digits.target[i]))
Explanation: We'll re-use some of our code from before to visualize the data and remind us what
we're looking at:
End of explanation
from sklearn.decomposition import PCA
pca = PCA(n_components=2, svd_solver="randomized")
proj = pca.fit_transform(digits.data)
plt.scatter(proj[:, 0], proj[:, 1], c=digits.target)
plt.colorbar();
Explanation: Visualizing the Data
A good first-step for many problems is to visualize the data using a
Dimensionality Reduction technique. We'll start with the
most straightforward one, Principal Component Analysis (PCA).
PCA seeks orthogonal linear combinations of the features which show the greatest
variance, and as such, can help give you a good idea of the structure of the
data set. Here we'll use RandomizedPCA, because it's faster for large N.
End of explanation
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
# split the data into training and validation sets
X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target)
# train the model
clf = GaussianNB()
clf.fit(X_train, y_train)
# use the model to predict the labels of the test data
predicted = clf.predict(X_test)
expected = y_test
Explanation: Question: Given these projections of the data, which numbers do you think
a classifier might have trouble distinguishing?
Gaussian Naive Bayes Classification
For most classification problems, it's nice to have a simple, fast, go-to
method to provide a quick baseline classification. If the simple and fast
method is sufficient, then we don't have to waste CPU cycles on more complex
models. If not, we can use the results of the simple method to give us
clues about our data.
One good method to keep in mind is Gaussian Naive Bayes. It fits a Gaussian distribution to each training label independantly on each feature, and uses this to quickly give a rough classification. It is generally not sufficiently accurate for real-world data, but can perform surprisingly well, for instance on text data.
End of explanation
fig = plt.figure(figsize=(6, 6)) # figure size in inches
fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
# plot the digits: each image is 8x8 pixels
for i in range(64):
ax = fig.add_subplot(8, 8, i + 1, xticks=[], yticks=[])
ax.imshow(X_test.reshape(-1, 8, 8)[i], cmap=plt.cm.binary,
interpolation='nearest')
# label the image with the target value
if predicted[i] == expected[i]:
ax.text(0, 7, str(predicted[i]), color='green')
else:
ax.text(0, 7, str(predicted[i]), color='red')
Explanation: Question: why did we split the data into training and validation sets?
Let's plot the digits again with the predicted labels to get an idea of
how well the classification is working:
End of explanation
matches = (predicted == expected)
print(matches.sum())
print(len(matches))
matches.sum() / float(len(matches))
Explanation: Quantitative Measurement of Performance
We'd like to measure the performance of our estimator without having to resort
to plotting examples. A simple method might be to simply compare the number of
matches:
End of explanation
from sklearn import metrics
from pandas import DataFrame
DataFrame(metrics.classification_report(expected, predicted, output_dict=True)).T
Explanation: We see that nearly 1500 of the 1800 predictions match the input. But there are other
more sophisticated metrics that can be used to judge the performance of a classifier:
several are available in the sklearn.metrics submodule.
One of the most useful metrics is the classification_report, which combines several
measures and prints a table with the results:
End of explanation
DataFrame(metrics.confusion_matrix(expected, predicted))
Explanation: Another enlightening metric for this sort of multi-label classification
is a confusion matrix: it helps us visualize which labels are
being interchanged in the classification errors:
End of explanation |
2,352 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
QuTiP Example
Step1: Imports
Step2: Plotting Support
Step3: Settings
Step4: Superoperator Representations and Plotting
We start off by first demonstrating plotting of superoperators, as this will be useful to us in visualizing the results of a contracted channel.
In particular, we will use Hinton diagrams as implemented by qutip.visualization.hinton, which
show the real parts of matrix elements as squares whose size and color both correspond to the magnitude of each element. To illustrate, we first plot a few density operators.
Step5: We show superoperators as matrices in the Pauli basis, such that any Hermicity-preserving map is represented by a real-valued matrix. This is especially convienent for use with Hinton diagrams, as the plot thus carries complete information about the channel.
As an example, conjugation by $\sigma_z$ leaves $\mathbb{1}$ and $\sigma_z$ invariant, but flips the sign of $\sigma_x$ and $\sigma_y$. This is indicated in Hinton diagrams by a negative-valued square for the sign change and a positive-valued square for a +1 sign.
Step6: As a couple more examples, we also consider the supermatrix for a Hadamard transform and for $\sigma_z \otimes H$.
Step7: Reduced Channels
As an example of tensor contraction, we now consider the map $S(\rho) = \Tr_2[\cnot (\rho \otimes \ket{0}\bra{0}) \cnot^\dagger]$.
We can think of the $\cnot$ here as a system-environment representation of an open quantum process, in which an environment register is prepared in a state $\rho_{\text{anc}}$, then a unitary acts jointly on the system of interest and environment. Finally, the environment is traced out, leaving a channel on the system alone. In terms of Wood diagrams, this can be represented as the composition of a preparation map, evolution under the system-environment unitary, and then a measurement map.
The two tensor wires on the left indicate where we must take a tensor contraction to obtain the measurement map. Numbering the tensor wires from 0 to 3, this corresponds to a tensor_contract argument of (1, 3).
Step8: Meanwhile, the super_tensor function implements the swap on the right, such that we can quickly find the preparation map.
Step9: For a $\cnot$ system-environment model, the composition of these maps should give us a completely dephasing channel. The channel on both qubits is just the superunitary $\cnot$ channel
Step10: We now complete by multiplying the superunitary $\cnot$ by the preparation channel above, then applying the partial trace channel by contracting the second and fourth index indices. As expected, this gives us a dephasing map.
Step11: Epilouge | Python Code:
from __future__ import division, print_function
Explanation: QuTiP Example: Superoperators, Pauli Basis and Channel Contraction
Christopher Granade <br>
Institute for Quantum Computing
$\newcommand{\ket}[1]{\left|#1\right\rangle}$
$\newcommand{\bra}[1]{\left\langle#1\right|}$
$\newcommand{\cnot}{{\scriptstyle \rm CNOT}}$
$\newcommand{\Tr}{\operatorname{Tr}}$
Introduction
In this notebook, we will demonstrate the tensor_contract function, which contracts one or more pairs of indices of a Qobj. This functionality can be used to find rectangular superoperators that implement the partial trace channel $S(\rho) = \Tr_2(\rho)$, for instance. Using this functionality, we can quickly turn a system-environment representation of an open quantum process into a superoperator representation.
Preamble
Features
We enable a few features such that this notebook runs in both Python 2 and 3.
End of explanation
import numpy as np
import qutip as qt
from qutip.ipynbtools import version_table
Explanation: Imports
End of explanation
%matplotlib inline
Explanation: Plotting Support
End of explanation
qt.settings.colorblind_safe = True
Explanation: Settings
End of explanation
qt.visualization.hinton(qt.identity([2, 3]).unit());
qt.visualization.hinton(qt.Qobj([
[1, 0.5],
[0.5, 1]
]).unit());
Explanation: Superoperator Representations and Plotting
We start off by first demonstrating plotting of superoperators, as this will be useful to us in visualizing the results of a contracted channel.
In particular, we will use Hinton diagrams as implemented by qutip.visualization.hinton, which
show the real parts of matrix elements as squares whose size and color both correspond to the magnitude of each element. To illustrate, we first plot a few density operators.
End of explanation
qt.visualization.hinton(qt.to_super(qt.sigmaz()));
Explanation: We show superoperators as matrices in the Pauli basis, such that any Hermicity-preserving map is represented by a real-valued matrix. This is especially convienent for use with Hinton diagrams, as the plot thus carries complete information about the channel.
As an example, conjugation by $\sigma_z$ leaves $\mathbb{1}$ and $\sigma_z$ invariant, but flips the sign of $\sigma_x$ and $\sigma_y$. This is indicated in Hinton diagrams by a negative-valued square for the sign change and a positive-valued square for a +1 sign.
End of explanation
qt.visualization.hinton(qt.to_super(qt.hadamard_transform()));
qt.visualization.hinton(qt.to_super(qt.tensor(qt.sigmaz(), qt.hadamard_transform())));
Explanation: As a couple more examples, we also consider the supermatrix for a Hadamard transform and for $\sigma_z \otimes H$.
End of explanation
s_meas = qt.tensor_contract(qt.to_super(qt.identity([2, 2])), (1, 3))
s_meas
Explanation: Reduced Channels
As an example of tensor contraction, we now consider the map $S(\rho) = \Tr_2[\cnot (\rho \otimes \ket{0}\bra{0}) \cnot^\dagger]$.
We can think of the $\cnot$ here as a system-environment representation of an open quantum process, in which an environment register is prepared in a state $\rho_{\text{anc}}$, then a unitary acts jointly on the system of interest and environment. Finally, the environment is traced out, leaving a channel on the system alone. In terms of Wood diagrams, this can be represented as the composition of a preparation map, evolution under the system-environment unitary, and then a measurement map.
The two tensor wires on the left indicate where we must take a tensor contraction to obtain the measurement map. Numbering the tensor wires from 0 to 3, this corresponds to a tensor_contract argument of (1, 3).
End of explanation
q = qt.tensor(qt.identity(2), qt.basis(2))
s_prep = qt.sprepost(q, q.dag())
s_prep
Explanation: Meanwhile, the super_tensor function implements the swap on the right, such that we can quickly find the preparation map.
End of explanation
qt.visualization.hinton(qt.to_super(qt.cnot()))
Explanation: For a $\cnot$ system-environment model, the composition of these maps should give us a completely dephasing channel. The channel on both qubits is just the superunitary $\cnot$ channel:
End of explanation
qt.tensor_contract(qt.to_super(qt.cnot()), (1, 3)) * s_prep
qt.visualization.hinton(qt.tensor_contract(qt.to_super(qt.cnot()), (1, 3)) * s_prep);
Explanation: We now complete by multiplying the superunitary $\cnot$ by the preparation channel above, then applying the partial trace channel by contracting the second and fourth index indices. As expected, this gives us a dephasing map.
End of explanation
version_table()
Explanation: Epilouge
End of explanation |
2,353 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Goal
For background, see Mapping Census Data, including the
scan of the 10-question form. Keep in mind what people were asked and the range of data available in the census.
Using the census API to get an understanding of some of the geographic entities in the 2010 census. We'll specifically be using the variable P0010001, the total population.
What you will do in this notebook
Step1: The census documentation has example URLs but needs your API key to work. In this notebook, we'll use the IPython notebook HTML display mechanism to help out.
Step2: Note
Step3: You can filter Puerto Rico (PR) in a number of ways -- use the way you're most comfortable with.
Optional fun
Step4: If states_df is calculated properly, the following asserts will pass silently.
Step5: Counties
Looking at http
Step6: Check properties of counties_df
Step7: Using FIPS code as the Index
From Mapping Census Data
Step8: Counties in California
Let's look at home
Step9: Different ways to read off the population of Alameda County -- still looking for the best way
Step10: If you know the FIPS code for Alameda County, just read off the population using .ix
Step11: Reading off all the tracts in Alameda County
Step12: Using Generators to yield all the tracts in the country
http | Python Code:
# YouTube video I made on how to use the American Factfinder site to look up addresses
from IPython.display import YouTubeVideo
YouTubeVideo('HeXcliUx96Y')
# standard numpy, pandas, matplotlib imports
import numpy as np
import matplotlib.pyplot as plt
from pandas import DataFrame, Series, Index
import pandas as pd
# check that CENSUS_KEY is defined
import census
import us
import requests
import settings
assert settings.CENSUS_KEY is not None
Explanation: Goal
For background, see Mapping Census Data, including the
scan of the 10-question form. Keep in mind what people were asked and the range of data available in the census.
Using the census API to get an understanding of some of the geographic entities in the 2010 census. We'll specifically be using the variable P0010001, the total population.
What you will do in this notebook:
Sum the population of the states (or state-like entity like DC) to get the total population of the nation
Add up the counties for each state and validate the sums
Add up the census tracts for each county and validate the sums
We will make use of pandas in this notebook.
I often have the following diagram in mind to help understand the relationship among entities. Also use the list of example URLs -- it'll come in handy.
<a href="http://www.flickr.com/photos/raymondyee/12297467734/" title="Census Geographic Hierarchies by Raymond Yee, on Flickr"><img src="http://farm4.staticflickr.com/3702/12297467734_af8882d310_c.jpg" width="618" height="800" alt="Census Geographic Hierarchies"></a>
Working out the geographical hierarchy for Cafe Milano
It's helpful to have a concrete instance of a place to work with, especially when dealing with rather intangible entities like census tracts, block groups, and blocks. You can use the American FactFinder site to look up for any given US address the corresponding census geographies.
Let's use Cafe Milano in Berkeley as an example. You can verify the following results by typing in the address into http://factfinder2.census.gov/faces/nav/jsf/pages/searchresults.xhtml?refresh=t.
https://www.evernote.com/shard/s1/sh/dc0bfb96-4965-4fbf-bc28-c9d4d0080782/2bd8c92a045d62521723347d62fa2b9d
2522 Bancroft Way, BERKELEY, CA, 94704
State: California
County: Alameda County
County Subdivision: Berkeley CCD, Alameda County, California
Census Tract: Census Tract 4228, Alameda County, California
Block Group: Block Group 1, Census Tract 4228, Alameda County, California
Block: Block 1001, Block Group 1, Census Tract 4228, Alameda County, California
End of explanation
c = census.Census(key=settings.CENSUS_KEY)
Explanation: The census documentation has example URLs but needs your API key to work. In this notebook, we'll use the IPython notebook HTML display mechanism to help out.
End of explanation
# call the API and instantiate `df`
df = DataFrame(c.sf1.get('NAME,P0010001', geo={'for':'state:*'}))
# convert the population to integer
df['P0010001'] = df['P0010001'].astype(np.int)
df.head()
states_df = df[df['NAME'] != 'Puerto Rico']
'a' in ['a', 'b']
Explanation: Note: we can use c.sf1 to access 2010 census (SF1: Census Summary File 1 (2010, 2000, 1990) available in API -- 2010 is the default)
see documentation: sunlightlabs/census
Summing up populations by state
Let's make a DataFrame named states_df with columns NAME, P0010001 (for population), and state (to hold the FIPS code). Make sure to exclude Puerto Rico.
End of explanation
states_fips = np.array([state.fips for state in us.states.STATES])
states_df = df[np.in1d(df.state,states_fips)]
Explanation: You can filter Puerto Rico (PR) in a number of ways -- use the way you're most comfortable with.
Optional fun: filter PR in the following way
calculate a np.array holding the the fips of the states
then use numpy.in1d, which is a analogous to the in operator to test membership in a list
End of explanation
# check that we have three columns
assert set(states_df.columns) == set((u'NAME', u'P0010001', u'state'))
# check that the total 2010 census population is correct
assert np.sum(states_df.P0010001) == 308745538
# check that the number of states+DC is 51
assert len(states_df) == 51
Explanation: If states_df is calculated properly, the following asserts will pass silently.
End of explanation
# Here's a way to use translate
# http://api.census.gov/data/2010/sf1?get=P0010001&for=county:*
# into a call using the census.Census object
r = c.sf1.get('NAME,P0010001', geo={'for':'county:*'})
# ask yourself what len(r) means and what it should be
len(r)
# let's try out one of the `census` object convenience methods
# instead of using `c.sf1.get`
r = c.sf1.state_county('NAME,P0010001',census.ALL,census.ALL)
r
# convert the json from the API into a DataFrame
# coerce to integer the P0010001 column
df = DataFrame(r)
df['P0010001'] = df['P0010001'].astype('int')
# display the first records
df.head()
# calculate the total population
# what happens when you google the number you get?
np.sum(df['P0010001'])
# often you can use dot notation to access a DataFrame column
df.P0010001.head()
# let's filter out PR -- what's the total population now
sum(df[np.in1d(df.state, states_fips)].P0010001)
# fall back to non-Pandas solution if you need to
np.sum([int(county['P0010001']) for county in r if county['state'] in states_fips])
# construct counties_df with only 50 states + DC
#counties_df = df[np.in1d(df.state, states_fips)]
counties_df = df.loc[np.in1d(df.state, states_fips)].copy()
len(counties_df)
set(counties_df.columns) == set(df.columns)
Explanation: Counties
Looking at http://api.census.gov/data/2010/sf1/geo.html, we see
state-county: http://api.census.gov/data/2010/sf1?get=P0010001&for=county:*
if we want to grab all counties in one go, or you can grab counties state-by-state:
http://api.census.gov/data/2010/sf1?get=P0010001&for=county:*&in=state:06
for all counties in the state with FIPS code 06 (which is what state?)
End of explanation
# number of counties
assert len(counties_df) == 3143 #3143 county/county-equivs in US
# check that the total population by adding all counties == population by adding all states
assert np.sum(counties_df['P0010001']) == np.sum(states_df.P0010001)
# check we have same columns between counties_df and df
set(counties_df.columns) == set(df.columns)
Explanation: Check properties of counties_df
End of explanation
# take a look at the current structure of counties_df
counties_df.head()
states_df.head()
# reindex states_df by state FIPS
# http://pandas.pydata.org/pandas-docs/dev/generated/pandas.DataFrame.set_index.html
states_df.set_index(keys='state', inplace=True)
states_df.head()
states_df.columns
# display the result of using set_index
counties_df.head()
# df.loc[np.in1d(df.state, states_fips), 'FIPS'] = counties_df.apply(lambda s:s['state']+s['county'], axis=1)
counties_df['FIPS'] = counties_df.apply(lambda s:s['state']+s['county'], axis=1)
df[np.in1d(df.state, states_fips)].head()
counties_df.head()
def double(x):
return 2*x
counties_df.P0010001.apply(double)
# http://manishamde.github.io/blog/2013/03/07/pandas-and-python-top-10/#create
counties_df['FIPS'] = counties_df.apply(lambda s:s['state'] + s['county'], axis=1)
counties_df.set_index('FIPS', inplace=True)
counties_df.head()
counties_df.groupby('state').sum().head()
states_df.P0010001.head()
# now we're ready to compare for each state, if you add all the counties, do you get the same
# population?
# not that you can do .agg('sum') instead of .sum()
# look at http://pandas.pydata.org/pandas-docs/dev/groupby.html to learn more about agg
np.all(states_df.P0010001 == counties_df.groupby('state').agg('sum').P0010001)
Explanation: Using FIPS code as the Index
From Mapping Census Data:
Each state (SUMLEV = 040) has a 2-digit FIPS ID; Delaware's is 10.
Each county (SUMLEV = 050) within a state has a 3-digit FIPS ID, appended to the 2-digit state ID. New Castle County, Delaware, has FIPS ID 10003.
Each Census Tract (SUMLEV = 140) within a county has a 6-digit ID, appended to the county code. The Tract in New Castle County DE that contains most of the the UD campus has FIPS ID 10003014502.
Each Block Group (SUMLEV = 150) within a Tract has a single digit ID appended to the Tract ID. The center of campus in the northwest corner of the tract is Block Group100030145022.
Each Block (SUMLEV = 750) within a Block Group is identified by three more digits appended to the Block Group ID. Pearson Hall is located in Block 100030145022009.
End of explanation
# boolean indexing to pull up California
states_df[states_df.NAME == 'California']
# use .ix -- most general indexing
# http://pandas.pydata.org/pandas-docs/dev/indexing.html#different-choices-for-indexing-loc-iloc-and-ix
states_df.ix['06']
# California counties
counties_df[counties_df.state=='06']
counties_df[counties_df.NAME == 'Alameda County']
counties_df[counties_df.NAME == 'Alameda County']['P0010001']
Explanation: Counties in California
Let's look at home: California state and Alameda County
End of explanation
list(counties_df[counties_df.NAME == 'Alameda County']['P0010001'].to_dict().values())[0]
list(counties_df[counties_df.NAME == 'Alameda County']['P0010001'].iteritems())[0][1]
int(counties_df[counties_df.NAME == 'Alameda County']['P0010001'].values)
Explanation: Different ways to read off the population of Alameda County -- still looking for the best way
End of explanation
# this is like accessing a cell in a spreadsheet -- row, col
ALAMEDA_COUNTY_FIPS = '06001'
counties_df.ix[ALAMEDA_COUNTY_FIPS,'P0010001']
Explanation: If you know the FIPS code for Alameda County, just read off the population using .ix
End of explanation
counties_df.ix[ALAMEDA_COUNTY_FIPS,'county']
# http://api.census.gov/data/2010/sf1/geo.html
# state-county-tract
geo = {'for': 'tract:*',
'in': 'state:%s county:%s' % (us.states.CA.fips,
counties_df.ix[ALAMEDA_COUNTY_FIPS,'county'])}
r = c.sf1.get('NAME,P0010001', geo=geo)
#use state_county_tract to make a DataFrame
alameda_county_tracts_df = DataFrame(r)
alameda_county_tracts_df['P0010001'] = alameda_county_tracts_df['P0010001'].astype('int')
alameda_county_tracts_df['FIPS'] = alameda_county_tracts_df.apply(lambda s: s['state']+s['county']+s['tract'], axis=1)
alameda_county_tracts_df.head()
alameda_county_tracts_df.apply(lambda s: s['state']+s['county']+s['tract'], axis=1)
alameda_county_tracts_df.P0010001.sum()
# Cafe Milano is in tract 4228
MILANO_TRACT_ID = '422800'
alameda_county_tracts_df[alameda_county_tracts_df.tract==MILANO_TRACT_ID]
Explanation: Reading off all the tracts in Alameda County
End of explanation
import time
import us
from itertools import islice
def census_tracts(variable=('NAME','P0010001'), sleep_time=1.0):
for state in us.states.STATES:
print (state)
for tract in c.sf1.get(variable,
geo={'for':"tract:*",
'in':'state:{state_fips}'.format(state_fips=state.fips)
}):
yield tract
# don't hit the API more than once a second
time.sleep(sleep_time)
# limit the number of tracts we crawl for until we're reading to get all of them
tracts_df = DataFrame(list(islice(census_tracts(), 100)))
tracts_df['P0010001'] = tracts_df['P0010001'].astype('int')
tracts_df.head()
Explanation: Using Generators to yield all the tracts in the country
http://www.jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/
End of explanation |
2,354 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Datalab Tutorial
In this tutorial, we'll do some exploratory data analysis in BigQuery using Datalab.
Requirements
If you haven't already, you may sign-up for the free GCP trial credit. Before you begin, give this project any name you like and enable the BigQuery API.
Create a Datalab instance.
NYC Yellow Taxi Data
We'll analyze BigQuery's public dataset on the NYC yellow taxi ride. BigQuery supports both standard and legacy SQL, which are demonstrated in this tutorial.
Step1: Let's look at the table schema
Step2: 1. What is the most common pick-up time?
Step3: Let's name this query result pickup_time and reference it to create the chart below.
Step4: 7
Step5: Let's label this query result vendor and reference it to create the following pie chart.
Step6: 3. Provide summary statistics on trip distance
Step7: Datalab also supports LaTeX rendering. The min distance is $-4.08\times10^7$ miles (interesting!), $Q_1$ is 0.9 miles and $Q_3$ is 2.7 miles. The trip distance is skewed to the right since the mean is greater than the median (1.54 miles).
4. Let's plot the pickup location
Step8: 4. Could distance and fare amount explain the payment disputes for rides from the JFK airport?
Step9: There seems to be a weak positive relationship ($r = +\sqrt{r^2} = 0.145$) between the trip distance and the fare amount for taxis that picked up rides from the airport and had payment disputes.
How can you share your notebook?
To download your notebook, go to Notebook > Download in Datalab.
To push your notebook to your GitHub repo, type the usual git commands in a cell precendented with an exclamation mark, like so | Python Code:
%sql -d standard
SELECT
*
FROM
`nyc-tlc.yellow.trips`
LIMIT
5
Explanation: Datalab Tutorial
In this tutorial, we'll do some exploratory data analysis in BigQuery using Datalab.
Requirements
If you haven't already, you may sign-up for the free GCP trial credit. Before you begin, give this project any name you like and enable the BigQuery API.
Create a Datalab instance.
NYC Yellow Taxi Data
We'll analyze BigQuery's public dataset on the NYC yellow taxi ride. BigQuery supports both standard and legacy SQL, which are demonstrated in this tutorial.
End of explanation
%bigquery schema --table nyc-tlc:yellow.trips
Explanation: Let's look at the table schema:
End of explanation
%%bq query -n pickup_time
WITH subquery AS (
SELECT
EXTRACT(HOUR FROM pickup_datetime) AS hour
FROM
`nyc-tlc.yellow.trips`)
SELECT
Hour,
COUNT(Hour) AS count
FROM
subquery
GROUP BY
Hour
ORDER BY
count DESC
Explanation: 1. What is the most common pick-up time?
End of explanation
# Let's visualize the pick-up time distribution
%chart columns --data pickup_time
Explanation: Let's name this query result pickup_time and reference it to create the chart below.
End of explanation
%%sql -d legacy -m vendor
SELECT
TOP(vendor_id) AS vendor,
COUNT(*) AS count
FROM
[nyc-tlc:yellow.trips]
Explanation: 7:00 PM is the most common pick-up time.
2. Give the vendor distribution
The above queries were all standard SQL. This is an example of how legacy SQL can be executed in Datalab.
End of explanation
%chart pie --data vendor
Explanation: Let's label this query result vendor and reference it to create the following pie chart.
End of explanation
%%sql -d legacy
SELECT
QUANTILES(trip_distance, 5) AS quantile,
MIN(trip_distance) AS min,
MAX(trip_distance) AS max,
AVG(trip_distance) AS avg,
STDDEV(trip_distance) AS std_dev
FROM
[nyc-tlc:yellow.trips]
Explanation: 3. Provide summary statistics on trip distance
End of explanation
%%bq query -n pickup_location
SELECT
pickup_latitude,
pickup_longitude
FROM
`nyc-tlc.yellow.trips`
LIMIT
10
%%chart map --data pickup_location
Explanation: Datalab also supports LaTeX rendering. The min distance is $-4.08\times10^7$ miles (interesting!), $Q_1$ is 0.9 miles and $Q_3$ is 2.7 miles. The trip distance is skewed to the right since the mean is greater than the median (1.54 miles).
4. Let's plot the pickup location
End of explanation
%%bq query -n dispute
SELECT
trip_distance,
fare_amount
FROM
`nyc-tlc.yellow.trips`
WHERE
rate_code = "2"
AND payment_type = "DIS"
%%chart scatter --data dispute
height: 400
hAxis:
title: Distance
vAxis:
title: Fare Amount
trendlines:
0:
type: line
color: green
showR2: true
visibleInLegend: true
Explanation: 4. Could distance and fare amount explain the payment disputes for rides from the JFK airport?
End of explanation
!git add *
!git commit -m "your message"
!git push
Explanation: There seems to be a weak positive relationship ($r = +\sqrt{r^2} = 0.145$) between the trip distance and the fare amount for taxis that picked up rides from the airport and had payment disputes.
How can you share your notebook?
To download your notebook, go to Notebook > Download in Datalab.
To push your notebook to your GitHub repo, type the usual git commands in a cell precendented with an exclamation mark, like so:
End of explanation |
2,355 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Multivariable Regression Model of FBI Property Crime Statistics
Using the FBI
Step1: Perfect accuracy, as expected. However......
Predicting ALL property crimes is a more interesting question.
Building a Model to Predict Property Crimes (without using the Property Crime features)
To start, let's take a look at how each of the non-property crime features interact with property crime.
Step2: That single outlier is making the relationships difficult to view. Let's remove the outlier.
Step3: There is a large number of 0's for Murder. Perhaps let's use a binary value for murder occurring vs no murder occurring.
Step4: Noice!!
What about performance when the binary Murder feature is used?
Step5: There is a slight increase of performance when the binary indicator for murder is used.
Leave no man behind!
Reintroduce the outlier to the model.
Step6: Hmmmm....it seems that outlier result has also heavily weighted the R-squared result and coefficients. The linear model which did not incorporate the outlier is likely to be a better indicator of overall trends and accuracy.
Best Model
Step7: Validating regression models for prediction
Now let's use cross-validation to obtain a more accurate description of our accuracy
Step8: Test the Model with Data From Another State
Now let's test our model with the 2013 Crime Rate dataset for California. Will the predictive power be similar? | Python Code:
import warnings
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import linear_model
# Suppress annoying harmless error.
warnings.filterwarnings(
action="ignore"
)
data_path = "https://raw.githubusercontent.com/Thinkful-Ed/data-201-resources/master/New_York_offenses/NEW_YORK-Offenses_Known_to_Law_Enforcement_by_City_2013%20-%2013tbl8ny.csv"
data = pd.read_csv(data_path, delimiter = ',', skiprows=4, header=0, skipfooter=3, thousands=',')
data = pd.DataFrame(data)
data.head()
# Instantiate and fit our model.
regr = linear_model.LinearRegression()
Y = data['Property\ncrime'].values.reshape(-1, 1)
X = data[["Larceny-\ntheft", "Motor\nvehicle\ntheft", "Burglary"]]
regr.fit(X, Y)
# Inspect the results.
print('\nCoefficients: \n', regr.coef_)
print('\nIntercept: \n', regr.intercept_)
print('\nR-squared:')
print(regr.score(X, Y))
Explanation: Multivariable Regression Model of FBI Property Crime Statistics
Using the FBI:UCR Crime dataset, which can be found here, build a regression model to predict property crimes.
The FBI defines property crime as including the offenses of burglary, larceny-theft, motor vehicle theft, and arson. To predict property crime, one can simply use these features.
End of explanation
plt.figure(figsize=(15,5))
sns.pairplot(data, vars =['Property\ncrime', 'Population', 'Violent\ncrime',
'Murder and\nnonnegligent\nmanslaughter',
'Rape\n(legacy\ndefinition)2',
'Robbery', 'Aggravated\nassault'])
plt.show()
Explanation: Perfect accuracy, as expected. However......
Predicting ALL property crimes is a more interesting question.
Building a Model to Predict Property Crimes (without using the Property Crime features)
To start, let's take a look at how each of the non-property crime features interact with property crime.
End of explanation
dataCleaned = data[data["Property\ncrime"] < 20000]
plt.figure(figsize=(15,5))
sns.pairplot(dataCleaned, vars =['Property\ncrime', 'Population', 'Violent\ncrime',
'Murder and\nnonnegligent\nmanslaughter',
'Rape\n(legacy\ndefinition)2',
'Robbery', 'Aggravated\nassault'])
plt.show()
plt.scatter(dataCleaned["Property\ncrime"], dataCleaned["Murder and\nnonnegligent\nmanslaughter"])
plt.title('Raw values')
plt.xlabel("Property Crime")
plt.ylabel("Murder")
plt.show()
Explanation: That single outlier is making the relationships difficult to view. Let's remove the outlier.
End of explanation
dataCleaned["Murder"] = dataCleaned['Murder and\nnonnegligent\nmanslaughter'].apply(lambda x: 0 if x == 0 else 1)
plt.scatter(dataCleaned["Property\ncrime"], dataCleaned["Murder"])
plt.title('Raw values')
plt.xlabel("Property Crime")
plt.ylabel("Murder")
plt.show()
dataCleaned.head()
regr = linear_model.LinearRegression()
Y = dataCleaned['Property\ncrime'].values.reshape(-1, 1)
X = dataCleaned[['Population', 'Violent\ncrime',
'Murder and\nnonnegligent\nmanslaughter',
'Rape\n(legacy\ndefinition)2',
'Robbery', 'Aggravated\nassault']]
regr.fit(X, Y)
# Inspect the results.
print('\nCoefficients: \n', regr.coef_)
print('\nIntercept: \n', regr.intercept_)
print('\nR-squared:')
print(regr.score(X, Y))
Explanation: There is a large number of 0's for Murder. Perhaps let's use a binary value for murder occurring vs no murder occurring.
End of explanation
regr = linear_model.LinearRegression()
Y = dataCleaned['Property\ncrime'].values.reshape(-1, 1)
X = dataCleaned[['Population', 'Violent\ncrime',
'Murder', 'Rape\n(legacy\ndefinition)2',
'Robbery', 'Aggravated\nassault']]
regr.fit(X, Y)
# Inspect the results.
print('\nCoefficients: \n', regr.coef_)
print('\nIntercept: \n', regr.intercept_)
print('\nR-squared:')
print(regr.score(X, Y))
Explanation: Noice!!
What about performance when the binary Murder feature is used?
End of explanation
data["Murder"] = data['Murder and\nnonnegligent\nmanslaughter'].apply(lambda x: 0 if x == 0 else 1)
regr = linear_model.LinearRegression()
Y = data['Property\ncrime'].values.reshape(-1, 1)
X = data[['Population', 'Violent\ncrime',
'Murder', 'Rape\n(legacy\ndefinition)2',
'Robbery', 'Aggravated\nassault']]
regr.fit(X, Y)
# Inspect the results.
print('\nCoefficients: \n', regr.coef_)
print('\nIntercept: \n', regr.intercept_)
print('\nR-squared:')
print(regr.score(X, Y))
Explanation: There is a slight increase of performance when the binary indicator for murder is used.
Leave no man behind!
Reintroduce the outlier to the model.
End of explanation
regr = linear_model.LinearRegression()
Y = dataCleaned['Property\ncrime'].values.reshape(-1, 1)
X = dataCleaned[['Population', 'Violent\ncrime',
'Murder', 'Rape\n(legacy\ndefinition)2',
'Robbery', 'Aggravated\nassault']]
regr.fit(X, Y)
# Inspect the results.
print('\nCoefficients: \n', regr.coef_)
print('\nIntercept: \n', regr.intercept_)
print('\nR-squared:')
print(regr.score(X, Y))
Explanation: Hmmmm....it seems that outlier result has also heavily weighted the R-squared result and coefficients. The linear model which did not incorporate the outlier is likely to be a better indicator of overall trends and accuracy.
Best Model:
End of explanation
from sklearn.cross_validation import cross_val_score
regr = linear_model.LinearRegression()
y = data['Property\ncrime'].values.reshape(-1, 1)
X = data[['Population', 'Violent\ncrime',
'Murder', 'Rape\n(legacy\ndefinition)2',
'Robbery', 'Aggravated\nassault']]
scores = cross_val_score(regr, X, y, cv = 10)
print("Percent accuracy within each fold:\n")
print(scores)
print("\nMean accuracy:\n")
print(scores.mean())
Explanation: Validating regression models for prediction
Now let's use cross-validation to obtain a more accurate description of our accuracy:
End of explanation
data_path = "files/table_8_offenses_known_to_law_enforcement_california_by_city_2013.csv"
dataCA = pd.read_csv(data_path, delimiter = ',', skiprows=4, header=0, skipfooter=3, thousands=',')
dataCA = pd.DataFrame(dataCA)
dataCA.head()
dataCA["Murder"] = dataCA['Murder and\nnonnegligent\nmanslaughter'].apply(lambda x: 0 if x == 0 else 1)
y = dataCA['Property\ncrime'].values.reshape(-1, 1)
X = dataCA[['Population', 'Violent\ncrime',
'Murder', 'Rape\n(legacy\ndefinition)2',
'Robbery', 'Aggravated\nassault']]
scores = cross_val_score(regr, X, y, cv = 10)
print("Percent accuracy within each fold:\n")
print(scores)
print("\nMean accuracy:\n")
print(scores.mean())
Explanation: Test the Model with Data From Another State
Now let's test our model with the 2013 Crime Rate dataset for California. Will the predictive power be similar?
End of explanation |
2,356 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Define target events based on time lag, plot evoked response
This script shows how to define higher order events based on
time lag between reference and target events. For
illustration, we will put face stimuli presented into two
classes, that is 1) followed by an early button press
(within 590 milliseconds) and followed by a late button
press (later than 590 milliseconds). Finally, we will
visualize the evoked responses to both 'quickly-processed'
and 'slowly-processed' face stimuli.
Step1: Set parameters
Step2: Find stimulus event followed by quick button presses
Step3: View evoked response | Python Code:
# Authors: Denis Engemann <[email protected]>
#
# License: BSD (3-clause)
import mne
from mne import io
from mne.event import define_target_events
from mne.datasets import sample
import matplotlib.pyplot as plt
print(__doc__)
data_path = sample.data_path()
Explanation: Define target events based on time lag, plot evoked response
This script shows how to define higher order events based on
time lag between reference and target events. For
illustration, we will put face stimuli presented into two
classes, that is 1) followed by an early button press
(within 590 milliseconds) and followed by a late button
press (later than 590 milliseconds). Finally, we will
visualize the evoked responses to both 'quickly-processed'
and 'slowly-processed' face stimuli.
End of explanation
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
# Set up pick list: EEG + STI 014 - bad channels (modify to your needs)
include = [] # or stim channels ['STI 014']
raw.info['bads'] += ['EEG 053'] # bads
# pick MEG channels
picks = mne.pick_types(raw.info, meg='mag', eeg=False, stim=False, eog=True,
include=include, exclude='bads')
Explanation: Set parameters
End of explanation
reference_id = 5 # presentation of a smiley face
target_id = 32 # button press
sfreq = raw.info['sfreq'] # sampling rate
tmin = 0.1 # trials leading to very early responses will be rejected
tmax = 0.59 # ignore face stimuli followed by button press later than 590 ms
new_id = 42 # the new event id for a hit. If None, reference_id is used.
fill_na = 99 # the fill value for misses
events_, lag = define_target_events(events, reference_id, target_id,
sfreq, tmin, tmax, new_id, fill_na)
print(events_) # The 99 indicates missing or too late button presses
# besides the events also the lag between target and reference is returned
# this could e.g. be used as parametric regressor in subsequent analyses.
print(lag[lag != fill_na]) # lag in milliseconds
# #############################################################################
# Construct epochs
tmin_ = -0.2
tmax_ = 0.4
event_id = dict(early=new_id, late=fill_na)
epochs = mne.Epochs(raw, events_, event_id, tmin_,
tmax_, picks=picks, baseline=(None, 0),
reject=dict(mag=4e-12))
# average epochs and get an Evoked dataset.
early, late = [epochs[k].average() for k in event_id]
Explanation: Find stimulus event followed by quick button presses
End of explanation
times = 1e3 * epochs.times # time in milliseconds
title = 'Evoked response followed by %s button press'
fig, axes = plt.subplots(2, 1)
early.plot(axes=axes[0], time_unit='s')
axes[0].set(title=title % 'late', ylabel='Evoked field (fT)')
late.plot(axes=axes[1], time_unit='s')
axes[1].set(title=title % 'early', ylabel='Evoked field (fT)')
plt.show()
Explanation: View evoked response
End of explanation |
2,357 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1D Data Analysis, Histograms, Boxplots, and Violin Plots
Unit 7, Lecture 2
Numerical Methods and Statistics
Prof. Andrew White, 2/27/2020
Goals
Be able to histogram 1D data
Understand the difference between 1D and categorical 1D data
Know how to make violin plots
Step1: Getting Data from PyDatasets
One way to get data is using the pydatasets package. You can find the list of datasets here and you can use one like so
Step2: We have loaded a dataset with 71 data points and each data point has 2 pieces of information. Let's see one
Step3: The first slice index says grab row 0 and the second slice says grab all columns. Each data point contains the mass of a chicken and the type of food it was fed.
Analzying 1D Data
Let's see an example with data from Lake Huron. Our first tool to understand 1D numerical data is to look at sample mean and sample standard deviation.
Step4: This data has 98 rows and 2 columns. The columns contain the year and the depth of Lake Huron in feet. We cannot simply take the mean of all the data because that would be the mean of all years and depths. Instead, we can slice out only one of the columns
Step5: We now will follow significant figures convention in our calculations. Each data point in the dataset has 5 digits of precision, so our mean should as well. Thus we will print like so
Step6: We can similarily calculate the sample standard deviation
Step7: We had to specify manually that we want to have the $N - 1$ term in the denominator. numpy uses a convention where you can specify what is subtracted from $N$ in the denominator through the ddof argument, which stands for deducted degrees of freedom. Thus ddof = 1 means we want to have $N - 1$ in the denominator instead of the default $N$ in the denominator.
Histogramming
Histogramming is the process of sorting data into bins and counting how much data is in each bin. Let's see a basic example.
Step8: We made two bins, one from 0 to 10 and another from 10 to 20. Those were specified with the bins = [0, 10, 20] command. We then were given the counts of data within those bins. We can plot using our output from np.histogram, or we can do both the histogram and plot using plt.hist.
Step9: There are a few problems we can see. The first is that the x-axis has ticks in weird locations. The second problem is that the bars are right on top of one another, so it's hard to tell what's going on. Let's adjust the options to fix this.
Step10: Now let's take a look at the Lake Huron level.
Step11: What went wrong?
Step12: The ticks aren't great and I personally don't like the bars touching. Let's work a little on improving the plot.
Step13: We can see a lot from this figure. We can see the lowest and highest depths. This representation may remind you of how probability distributions look and indeed, this representation is how you can reconstruct probability mass functions. To see this, let's look at another example.
We'll look at a larger dataset that is speed of cars.
Step14: Now the y-axis shows the proportion of times that a particular speed was observed. Thanks to the Law of Large Numbers, and the fact we have 8,500 samples, we know that these proportions will approach the probabilities of these intervals. For example, the probability of observing a speed between 25 and 30 mph is $\approx 0.012$. If we make our bins small enough, we'll eventually be able to assign a probability to any value and thus we'll have recreated the probability mass function!
Kernel Density Estimation
Kernel density estimation is a more sophisticated method for estimating the probability mass function from a histogram. It can help you see what type of distribution your data might follow (e.g., normal, exponential). Let's see an example.
Step15: The new solid line shows us that a normal distribution would be a good fit, although the right tail is a little long. This line is generated by estimating what the histogram would look like if the bins were infinitely small.
Categorical Data Boxplots
Sometimes we'll have measured some quantity, like mass of a chicken, under multiple conditions. This is not exactly 2D, because the conditions are usually categorical data. For example, my conditions are the kind of food I've fed to my chickens. We can analyze this using a boxplot, which shows the category and quartiles in one plot.
Step16: This first step is a way to find all the unique labels to find our possible categories. Now we'll use that to separate our data into a list of arrays, one for each catgory, instead of one large array.
Step17: Whew! That was a lot of work. We used a few tricks. One was that you can slice using True and False values in numpy. Let's see a smaller example
Step18: The other thing we did is convert the array into floating point numbers. Recall that in numpy each array can only be one data type. The original chicken dataset had strings in it, like 'linseed', so that the whole array was strings. We thus had to convert to floats to do calculations on the chicken weights. We used the astype() method on the array.
So we found which rows had their second column (the category column) be equal to category c and then slice out those rows. Now we can make the boxplot.
Step19: The box plot shows a quite a bit of information. It should the median in as a horizontal line over the box. The box itself shows the middle two quartiles and the "whiskers" show the bottom 10th and upper 90th percentiles of the data. The points outside of the boxs are outliers.
Violin Plots
Just like how we saw that you can use kernel density estimation to provide richer information, we can apply this to boxplots as well. | Python Code:
%matplotlib inline
import random
import numpy as np
import matplotlib.pyplot as plt
from math import sqrt, pi
import scipy
import scipy.stats
plt.style.use('seaborn-whitegrid')
!pip install --user pydataset
Explanation: 1D Data Analysis, Histograms, Boxplots, and Violin Plots
Unit 7, Lecture 2
Numerical Methods and Statistics
Prof. Andrew White, 2/27/2020
Goals
Be able to histogram 1D data
Understand the difference between 1D and categorical 1D data
Know how to make violin plots
End of explanation
import pydataset
data = pydataset.data('chickwts').values
data.shape
Explanation: Getting Data from PyDatasets
One way to get data is using the pydatasets package. You can find the list of datasets here and you can use one like so:
End of explanation
print(data[0, :])
Explanation: We have loaded a dataset with 71 data points and each data point has 2 pieces of information. Let's see one
End of explanation
#load data
huron = pydataset.data('LakeHuron').values
#see the dimensions of the data
print(huron.shape)
#look at the first row
print(huron[0,:])
Explanation: The first slice index says grab row 0 and the second slice says grab all columns. Each data point contains the mass of a chicken and the type of food it was fed.
Analzying 1D Data
Let's see an example with data from Lake Huron. Our first tool to understand 1D numerical data is to look at sample mean and sample standard deviation.
End of explanation
huron_mean = np.mean(huron[:, 1])
print(huron_mean)
Explanation: This data has 98 rows and 2 columns. The columns contain the year and the depth of Lake Huron in feet. We cannot simply take the mean of all the data because that would be the mean of all years and depths. Instead, we can slice out only one of the columns
End of explanation
huron_mean = np.mean(huron[:, 1])
print('The mean is {:.5} ft'.format(huron_mean))
Explanation: We now will follow significant figures convention in our calculations. Each data point in the dataset has 5 digits of precision, so our mean should as well. Thus we will print like so:
End of explanation
huron_std = np.std(huron[:, 1], ddof=1)
print('The sample standard deviation is {:.5} ft'.format(huron_std))
Explanation: We can similarily calculate the sample standard deviation:
End of explanation
#create some data
x = [1, 2, 13, 15, 11, 12]
#compute histogram
counts, bin_edges = np.histogram(x, bins=[0, 10, 20])
for i in range(len(counts)):
print('There were {} samples between {} and {}'.format(counts[i], bin_edges[i], bin_edges[i + 1]))
Explanation: We had to specify manually that we want to have the $N - 1$ term in the denominator. numpy uses a convention where you can specify what is subtracted from $N$ in the denominator through the ddof argument, which stands for deducted degrees of freedom. Thus ddof = 1 means we want to have $N - 1$ in the denominator instead of the default $N$ in the denominator.
Histogramming
Histogramming is the process of sorting data into bins and counting how much data is in each bin. Let's see a basic example.
End of explanation
plt.hist(x, bins=[0, 10, 20])
plt.show()
Explanation: We made two bins, one from 0 to 10 and another from 10 to 20. Those were specified with the bins = [0, 10, 20] command. We then were given the counts of data within those bins. We can plot using our output from np.histogram, or we can do both the histogram and plot using plt.hist.
End of explanation
#rwidth controls how close bars are
plt.hist(x, bins=[0, 10, 20], rwidth = 0.99, normed=False)
#set exactly where the ticks should be
plt.xticks([0, 10, 20])
plt.yticks([2, 4])
plt.xlim(0, 20)
plt.ylim(0, 5)
plt.show()
Explanation: There are a few problems we can see. The first is that the x-axis has ticks in weird locations. The second problem is that the bars are right on top of one another, so it's hard to tell what's going on. Let's adjust the options to fix this.
End of explanation
plt.hist(huron)
plt.show()
Explanation: Now let's take a look at the Lake Huron level.
End of explanation
plt.hist(huron[:, 1])
plt.xlabel('Lake Huron Level in Feet')
plt.ylabel('Number of Times Observed')
plt.show()
Explanation: What went wrong?
End of explanation
plt.style.use('seaborn')
plt.hist(huron[:, 1], rwidth=0.9)
plt.xlabel('Lake Huron Level in Feet')
plt.ylabel('Number of Times Observed')
plt.yticks([0, 5, 10, 15, 20])
plt.ylim(0,20)
plt.axvline(x=np.mean(huron[:,1]),color='red', label='mean')
plt.legend()
plt.show()
Explanation: The ticks aren't great and I personally don't like the bars touching. Let's work a little on improving the plot.
End of explanation
car_speed = pydataset.data('amis').values
speeds = car_speed[:, 0]
print(len(speeds))
#now we'll use normed
plt.hist(speeds, normed=True)
plt.xlabel('Speed in mph')
plt.ylabel('Proportion')
plt.show()
Explanation: We can see a lot from this figure. We can see the lowest and highest depths. This representation may remind you of how probability distributions look and indeed, this representation is how you can reconstruct probability mass functions. To see this, let's look at another example.
We'll look at a larger dataset that is speed of cars.
End of explanation
import seaborn as sns
sns.distplot(speeds, bins=range(15, 65))
plt.show()
Explanation: Now the y-axis shows the proportion of times that a particular speed was observed. Thanks to the Law of Large Numbers, and the fact we have 8,500 samples, we know that these proportions will approach the probabilities of these intervals. For example, the probability of observing a speed between 25 and 30 mph is $\approx 0.012$. If we make our bins small enough, we'll eventually be able to assign a probability to any value and thus we'll have recreated the probability mass function!
Kernel Density Estimation
Kernel density estimation is a more sophisticated method for estimating the probability mass function from a histogram. It can help you see what type of distribution your data might follow (e.g., normal, exponential). Let's see an example.
End of explanation
data = pydataset.data('chickwts').values
categories = np.unique(data[:,1])
print(categories)
Explanation: The new solid line shows us that a normal distribution would be a good fit, although the right tail is a little long. This line is generated by estimating what the histogram would look like if the bins were infinitely small.
Categorical Data Boxplots
Sometimes we'll have measured some quantity, like mass of a chicken, under multiple conditions. This is not exactly 2D, because the conditions are usually categorical data. For example, my conditions are the kind of food I've fed to my chickens. We can analyze this using a boxplot, which shows the category and quartiles in one plot.
End of explanation
data_as_arrays = []
#loop over categories
for c in categories:
#get a True/False array showing which rows had which category
rows_with_category = data[:,1] == c
#now slice out the rows with the category and grab column 0 (the chicken mass)
data_slice = data[rows_with_category, 0]
#now we need to make the data into floats, because it happened to be loaded as a string
data_as_arrays.append(data_slice.astype(float))
Explanation: This first step is a way to find all the unique labels to find our possible categories. Now we'll use that to separate our data into a list of arrays, one for each catgory, instead of one large array.
End of explanation
x = np.array([4, 10, 20])
my_slice = [True, False, True]
x[my_slice]
Explanation: Whew! That was a lot of work. We used a few tricks. One was that you can slice using True and False values in numpy. Let's see a smaller example:
End of explanation
#NOTICE WE USE Seaborn not PLT
sns.boxplot(data=data_as_arrays)
#Need to replace the ticks (0, through N - 1) with the names of categories
plt.xticks(range(len(categories)), categories)
plt.xlabel('Feed')
plt.ylabel('Chicken Mass in Grams')
plt.show()
Explanation: The other thing we did is convert the array into floating point numbers. Recall that in numpy each array can only be one data type. The original chicken dataset had strings in it, like 'linseed', so that the whole array was strings. We thus had to convert to floats to do calculations on the chicken weights. We used the astype() method on the array.
So we found which rows had their second column (the category column) be equal to category c and then slice out those rows. Now we can make the boxplot.
End of explanation
sns.violinplot(data=data_as_arrays)
#Need to replace the ticks (0, through N - 1) with the names of categories
plt.xticks(range(len(categories)), categories)
plt.xlabel('Feed')
plt.ylabel('Chicken Mass in Grams')
plt.show()
Explanation: The box plot shows a quite a bit of information. It should the median in as a horizontal line over the box. The box itself shows the middle two quartiles and the "whiskers" show the bottom 10th and upper 90th percentiles of the data. The points outside of the boxs are outliers.
Violin Plots
Just like how we saw that you can use kernel density estimation to provide richer information, we can apply this to boxplots as well.
End of explanation |
2,358 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Emission from the relativistic charged particles
The SR (spontaneous radiation) module calculates the spectral-spatial distibution of electromagnetic emission priduced by the relativistic charges using the Lienard-Wiechert and retarded potentials Fourier-transformed in time.
The calculator goes in two flavours
Step1: Lets define parameters of a 50 periods undulator with a small deflection parameter $K_0=0.1$, and an electron with $\gamma_e=100$
Step2: Electron is added by creating a dummy specie equipped with the undulator device, and a single particle with $p_x = \sqrt{\gamma_e^2-1}$ is added
Step3: The SR calculator constractor takes the mode specifier Mode which can be 'far', 'near' and 'near-circ' which define the far field angular mapping, near field coordinate mapping in Cartesian geometry and near field coordinate mapping in cylindrical geometry. The mapping domain is defined by the Grid
- for 'far' it is [($k_{min}$, $k_{max}$), ($\theta_{min}$, $\theta_{max}$), ($\phi_{min}$, $\phi_{max}$), ($N_k$, $N_\theta$, $N_\phi$)], where $k$, $\theta$ and $\phi$ are the wavenumber elevation and azimuthal angles of the emission.
- for 'near' it is [($k_{min}$, $k_{max}$), ($x_{min}$, $x_{max}$), ($y_{min}$, $y_{max}$), ($N_k$, $N_x$, $N_y$)], where 'x' and 'y' are screen Cartesian dimensions
- for 'near-circ' it is [($k_{min}$, $k_{max}$), ($r_{min}$, $r_{max}$), ($\phi_{min}$, $\phi_{max}$), ($N_k$, $N_r$, $N_\phi$)], where 'x' and 'y' are screen cylindrical dimensions
The Features attribut can contain WavelenghtGrid, which would provide the homogeneous grid for the wavelengths.
After the calculator is constructed, the track-container should be initiated.
Step4: The simulation is run as usually, but after each (or selected) step the track point is added to the track container using add_track method.
After the orbits are recorded the spectrum is calculated with calculate_spectrum method, which can take the component as an argument (e.g. comp='z'). In contrast to the axes conventions use in linear machines (z, x, y) and storage rings (s, x, z), here we use the generic convention (x, y, z) -- propagation axis is X, and oscillations axis is typically Z. The defalut is comp='all' which accounts for all components.
Step5: The few useful functions are available
Step6: Let us compare the obtained spectrum with analytical estimates, derived with Periods>1 and $K_0<<1$ approximations. Here we use expressions from [I. A. Andriyash et al, Phys. Rev. ST Accel. Beams 16 100703 (2013)], which can be easily derived or found in textbooks on undulator radiation.
- the resonant frequency depends on angle as $\propto \left(1+\theta^2\gamma_e^2\right)^{-1}$
- the energy in units of full number of photons with fundamental wavelength is
$$N_{ph} = \frac{7\pi \alpha}{24}\, K_0^2\, \left(1+\frac{K_0^2}{2}\right)\, N_\mathrm{periods}$$
- the transverse profiles of emission are $\propto \left(1+\theta^2\gamma_e^2\right)^{-3}$ (vertical), $\propto \left(1+\theta^2\gamma_e^2\right)^{-7}$ (horizontal)
Step7: As mentioned before, the radiation profile for a given (e.g. fundumantal) wavenumber, can be specified as k0 argument to get_spot. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import sys,time
import numpy as np
from scipy.constants import c,hbar
from scipy.interpolate import griddata
from chimera.moduls.species import Specie
from chimera.moduls.chimera_main import ChimeraRun
from chimera.moduls.SR import SR
from chimera.moduls import fimera as chimera
Explanation: Emission from the relativistic charged particles
The SR (spontaneous radiation) module calculates the spectral-spatial distibution of electromagnetic emission priduced by the relativistic charges using the Lienard-Wiechert and retarded potentials Fourier-transformed in time.
The calculator goes in two flavours:
- far field angular distibution computed using the Lienard-Wiechert potentials in the far field:
$$ \cfrac{\mathrm{d}\mathcal{W}}{\mathrm{d}\Omega\, \mathrm{d}\omega} = \frac{e^2}{4\pi^2c} \sum_p \left| \int_{-\infty}^{\infty} \mathrm{d}t\,\cfrac{\bf n!\times! (\bf n!-!\beta)!\times!\dot{\beta}}{(1-{\bf n\,\beta})^2}\;\mathrm{e}^{i \omega (t-\mathbf{n}\mathbf{r}e/c )}\right|^2\,, $$
- near field angular distibution computed using the formula:
$$ \mathbf{E}\omega = i\cfrac{e\omega}{c}\int_{-\infty}^{\infty} dt \cfrac{1}{R} \left[\mathbf{\beta} - \mathbf{n} \left(1+\cfrac{ic}{\omega R}\right)\right]\mathrm{e}^{i \omega (t+R/c )}$$
for more details see [O. Chubar, Infrared Physics & Technology 49, 96 (2006)]
This utility can be used to study the spontaneous emission from undulators or channels.
End of explanation
K0 = 0.1
Periods=50
g0=100.0
StepsPerPeriod=24
gg = g0/(1.+K0**2/2)**.5
vb = (1.-gg**-2)**0.5
k_res = 2*gg**2
dt = 1./StepsPerPeriod
Steps2Do = int((Periods+2)/dt)+1
Explanation: Lets define parameters of a 50 periods undulator with a small deflection parameter $K_0=0.1$, and an electron with $\gamma_e=100$
End of explanation
specie_in = {'TimeStep':dt,
'Devices':([chimera.undul_analytic,np.array([K0, 1., 0, Periods])],)
}
beam = Specie(specie_in)
NumParts = 1
beam.Data['coords'] = np.zeros((3,NumParts))
beam.Data['momenta'] = np.zeros((3,NumParts))
beam.Data['momenta'][0] = np.sqrt(g0**2-1)
beam.Data['coords_halfstep'] = beam.Data['coords'].copy()
beam.Data['weights'] = np.ones((NumParts,))/NumParts
chimera_in = {'Particles':(beam,),}
Chimera = ChimeraRun(chimera_in)
Explanation: Electron is added by creating a dummy specie equipped with the undulator device, and a single particle with $p_x = \sqrt{\gamma_e^2-1}$ is added
End of explanation
sr_in_far = {'Grid':[(0.02*k_res,1.1*k_res),(0,2./g0),(0.,2*np.pi),(160,80,24)],
'TimeStep':dt,'Features':(),
}
sr_in_nearcirc = {'Grid':[(0.02*k_res,1.1*k_res),(0,15),(0.,2*np.pi),1e3,(160,80,24)],
'TimeStep':dt,'Features':(),'Mode':'near-circ',
}
sr_in_near = {'Grid':[(0.02*k_res,1.1*k_res),(-15,15),(-15,15),1e3,(160,160,160)],
'TimeStep':dt,'Features':(),'Mode':'near',
}
sr_calc_far = SR(sr_in_far)
sr_calc_nearcirc = SR(sr_in_nearcirc)
sr_calc_near = SR(sr_in_near)
sr_calc_far.init_track(Steps2Do,beam)
sr_calc_nearcirc.init_track(Steps2Do,beam)
sr_calc_near.init_track(Steps2Do,beam)
Explanation: The SR calculator constractor takes the mode specifier Mode which can be 'far', 'near' and 'near-circ' which define the far field angular mapping, near field coordinate mapping in Cartesian geometry and near field coordinate mapping in cylindrical geometry. The mapping domain is defined by the Grid
- for 'far' it is [($k_{min}$, $k_{max}$), ($\theta_{min}$, $\theta_{max}$), ($\phi_{min}$, $\phi_{max}$), ($N_k$, $N_\theta$, $N_\phi$)], where $k$, $\theta$ and $\phi$ are the wavenumber elevation and azimuthal angles of the emission.
- for 'near' it is [($k_{min}$, $k_{max}$), ($x_{min}$, $x_{max}$), ($y_{min}$, $y_{max}$), ($N_k$, $N_x$, $N_y$)], where 'x' and 'y' are screen Cartesian dimensions
- for 'near-circ' it is [($k_{min}$, $k_{max}$), ($r_{min}$, $r_{max}$), ($\phi_{min}$, $\phi_{max}$), ($N_k$, $N_r$, $N_\phi$)], where 'x' and 'y' are screen cylindrical dimensions
The Features attribut can contain WavelenghtGrid, which would provide the homogeneous grid for the wavelengths.
After the calculator is constructed, the track-container should be initiated.
End of explanation
t0 = time.time()
for i in range(Steps2Do):
Chimera.make_step(i)
sr_calc_far.add_track(beam)
sr_calc_nearcirc.add_track(beam)
sr_calc_near.add_track(beam)
print('Done orbits in {:g} sec'.format(time.time()-t0))
t0 = time.time()
sr_calc_far.calculate_spectrum(comp='z')
print('Done farfield spectrum in {:g} min'.format((time.time()-t0)/60.))
t0 = time.time()
sr_calc_nearcirc.calculate_spectrum(comp='z')
print('Done nearfield (cylindric) spectrum in {:g} min'.format((time.time()-t0)/60.))
t0 = time.time()
sr_calc_near.calculate_spectrum(comp='z')
print('Done nearfield spectrum in {:g} min'.format((time.time()-t0)/60. ))
Explanation: The simulation is run as usually, but after each (or selected) step the track point is added to the track container using add_track method.
After the orbits are recorded the spectrum is calculated with calculate_spectrum method, which can take the component as an argument (e.g. comp='z'). In contrast to the axes conventions use in linear machines (z, x, y) and storage rings (s, x, z), here we use the generic convention (x, y, z) -- propagation axis is X, and oscillations axis is typically Z. The defalut is comp='all' which accounts for all components.
End of explanation
args_calc = {'chim_units':False, 'lambda0_um':1}
FullSpect_far = sr_calc_far.get_full_spectrum(**args_calc)
FullSpect_nearcirc = sr_calc_nearcirc.get_full_spectrum(**args_calc)
FullSpect_near = sr_calc_near.get_full_spectrum(**args_calc)
spotXY_far,ext_far = sr_calc_far.get_spot_cartesian(bins=(120,120),**args_calc)
spotXY_nearcirc,ext_nearcirc = sr_calc_nearcirc.get_spot_cartesian(bins=(120,120),**args_calc)
spotXY_near = sr_calc_near.get_spot(**args_calc)
ext_near = np.array(list(sum(sr_calc_near.Args['Grid'][1:3], ())))
FullEnergy_far = sr_calc_far.get_energy(**args_calc)/k_res
FullEnergy_nearcirc = sr_calc_nearcirc.get_energy(**args_calc)/k_res
FullEnergy_near = sr_calc_near.get_energy(**args_calc)/k_res
FullEnergy_theor = sr_calc_near.J_in_um*(7*np.pi/24)/137.*K0**2*(1+K0**2/2)*Periods
print('** Full energy in far field agrees with theory with {:.1f}% error'. \
format(100*(2*(FullEnergy_far-FullEnergy_theor)/(FullEnergy_far+FullEnergy_theor))))
print('** Full energy in near field (cylindric) agrees with theory with {:.1f}% error'. \
format(100*(2*(FullEnergy_nearcirc-FullEnergy_theor)/(FullEnergy_nearcirc+FullEnergy_theor))))
print('** Full energy in near field agrees with theory with {:.1f}% error'. \
format(100*(2*(FullEnergy_near-FullEnergy_theor)/(FullEnergy_near+FullEnergy_theor))))
Explanation: The few useful functions are available:
- get_full_spectrum returns the full energy spectral-angular distribution, $\mathrm{d}\mathcal{W}/(\mathrm{d}\varepsilon\, \mathrm{d}\Theta)$ (dimensionless)
- get_energy_spectrum returns $\Theta$-integrated get_full_spectrum
- get_spot returns $\varepsilon$-integrated get_full_spectrum, in the units of $(dW/d\Theta)$ [J] $\cdot\lambda_0$ [$\mu$m]
- get_spot_cartesian is same as get_spot but returns the profile interpolated onto cartesian axis; takes th_part argument (default value is 1) to specify the cone angle relative to the full cone $(0,\theta_{max})$, and bins=(Nx,Ny) to specify the resolution of the cartesian grid
- get_energy returns full energy either in $(dW)$ [J] $\cdot\lambda_0$ [$\mu$m]
Each of these diagnostics can take a spect_filter argument, which will multiply the spectral-angular distribution (shape should conform with multiplication operation).
For get_spot and get_spot_cartesian the wavenumber can be specified to show the profile for a single energy.
The marco-particles weights are either defined as in main code, chim_units=True, or represent number of real electrons chim_units=False. Since in CHIMERA the particles charge is $\propto \lambda_0$, if chim_units=True is used the get_spot and get_energy return Jouls.
End of explanation
fig,(ax1,ax2) = plt.subplots(1,2,figsize=(12,5))
extent = np.array(sr_calc_far.Args["Grid"][0]+sr_calc_far.Args["Grid"][1])
extent[:2] /= k_res
extent[2:] *= g0
th,ph = sr_calc_far.Args['theta'], sr_calc_far.Args['phi']
ax1.plot( 1./(1.+(th*g0)**2), th*g0, ':', c='k',lw=1.5)
ax1.imshow(FullSpect_far.mean(-1).T,
interpolation='spline16', aspect='auto', origin='lower',
cmap = plt.cm.Spectral_r, extent=extent)
ax1.set_xlabel(r'Wavenumber $k/k_0$', fontsize=18)
ax1.set_ylabel(r'Angle, $\theta \gamma_e$', fontsize=18)
ax2.imshow(spotXY_far.T,origin='lower',cmap = plt.cm.bone_r,extent=g0*ext_far)
ax2.set_xlabel(r'Horiz. angle, $\theta \gamma_e$', fontsize=18)
ax2.set_ylabel(r'Vert. angle, $\theta \gamma_e$', fontsize=18)
th = np.r_[g0*ext_far[0]:g0*ext_far[1]:spotXY_far.shape[0]*1j]
ax3 = plt.twinx(ax=ax2);
ax3.plot(th, spotXY_far[:,spotXY_far.shape[0]//2+1]/spotXY_far.max(), c='b')
ax3.plot(th, (1+th**2)**-7, '--',c='b',lw=2)
ax3.set_ylim(0,3)
ax3.yaxis.set_ticks([])
ax4 = plt.twiny(ax=ax2);
ax4.plot(spotXY_far[spotXY_far.shape[0]//2+1,:]/spotXY_far.max(), th, c='g')
ax4.plot((1+th**2)**-3, th, '--',c='g',lw=2)
ax4.set_xlim(0,3)
ax4.xaxis.set_ticks([]);
Explanation: Let us compare the obtained spectrum with analytical estimates, derived with Periods>1 and $K_0<<1$ approximations. Here we use expressions from [I. A. Andriyash et al, Phys. Rev. ST Accel. Beams 16 100703 (2013)], which can be easily derived or found in textbooks on undulator radiation.
- the resonant frequency depends on angle as $\propto \left(1+\theta^2\gamma_e^2\right)^{-1}$
- the energy in units of full number of photons with fundamental wavelength is
$$N_{ph} = \frac{7\pi \alpha}{24}\, K_0^2\, \left(1+\frac{K_0^2}{2}\right)\, N_\mathrm{periods}$$
- the transverse profiles of emission are $\propto \left(1+\theta^2\gamma_e^2\right)^{-3}$ (vertical), $\propto \left(1+\theta^2\gamma_e^2\right)^{-7}$ (horizontal)
End of explanation
spotXY_k0,ext = sr_calc_far.get_spot_cartesian(k0=k_res,th_part=0.2,**args_calc)
Spect1D = sr_calc_far.get_energy_spectrum(**args_calc)
k_ax = sr_calc_far.get_spectral_axis()/k_res
fig,(ax1,ax2) = plt.subplots(1,2,figsize=(14,5))
ax1.plot(k_ax, Spect1D/Spect1D.max())
ax1.plot(k_ax, FullSpect_far[1:,0,0]/FullSpect_far[1:,0,0].max())
ax1.set_xlim(0.5,1.1)
ax1.set_ylim(0,1.1)
ax1.set_xlabel(r'Wavenumber $k/k_0$', fontsize=18)
ax1.set_ylabel(r'$dW/d\varepsilon_{ph}$ (norm.)', fontsize=18)
ax1.legend(('angle-integrated','on-axis'), loc=2,fontsize=15)
ax2.imshow(spotXY_k0.T,origin='lower',cmap = plt.cm.bone_r,extent=g0*ext_far)
ax2.set_xlabel(r'Horiz. angle, $\theta \gamma_e$', fontsize=18)
ax2.set_ylabel(r'Vert. angle, $\theta \gamma_e$', fontsize=18);
Lsource = (sr_calc_near.Args['Grid'][3] - 0.5*Periods)
ext_wo_far = np.array(list(sum(sr_calc_far.Args['Grid'][0:2], ())))
ext_wo_far[2] = -ext_wo_far[3]
ext_wo_far[2:] *= g0
ext_wo_far[:2] /= k_res
ext_wo_nearcirc = np.array(list(sum(sr_calc_nearcirc.Args['Grid'][0:2], ())))
ext_wo_nearcirc[2] = -ext_wo_nearcirc[3]
ext_wo_nearcirc[2:] *= g0/Lsource
ext_wo_nearcirc[:2] /= k_res
ext_wo_near = np.array(list(sum(sr_calc_near.Args['Grid'][0:2], ())))
ext_wo_near[2:] *= g0/Lsource
ext_wo_near[:2] /= k_res
fig,((ax1,ax2),(ax3,ax4),(ax5,ax6),) = plt.subplots(3,2, figsize=(12,15))
ax1.imshow(np.hstack((FullSpect_far[:,::-1,0],FullSpect_far[:,:,12])).T,
origin='lower',interpolation='bilinear',
aspect='auto',cmap=plt.cm.Spectral_r,
extent=ext_wo_far)
ax2.imshow(np.hstack((FullSpect_far[:,::-1,6],FullSpect_far[:,:,18])).T,
origin='lower',interpolation='bilinear',
aspect='auto',cmap=plt.cm.Spectral_r,
extent=ext_wo_far)
ax3.imshow(np.hstack((FullSpect_nearcirc[:,::-1,0],FullSpect_nearcirc[:,:,12])).T,
origin='lower',interpolation='bilinear',
aspect='auto',cmap=plt.cm.Spectral_r,
extent=ext_wo_nearcirc)
ax4.imshow(np.hstack((FullSpect_nearcirc[:,::-1,6],FullSpect_nearcirc[:,:,18])).T,
origin='lower',interpolation='bilinear',
aspect='auto',cmap=plt.cm.Spectral_r,
extent=ext_wo_nearcirc)
ax5.imshow(FullSpect_near[:,:,75:86].mean(-1).T,
origin='lower',interpolation='bilinear',
aspect='auto',cmap=plt.cm.Spectral_r,
extent=ext_wo_near )
ax6.imshow(FullSpect_near[:,78:86,:].mean(-2).T,
origin='lower',interpolation='bilinear',
aspect='auto',cmap=plt.cm.Spectral_r,
extent=ext_wo_near )
ax1.set_ylabel('Far field \n Angle (mrad)',fontsize=16)
ax3.set_ylabel('Near field (cylindric) \n Angle (mrad)',fontsize=16)
ax5.set_ylabel('Near field \n Angle (mrad)',fontsize=16)
for ax in (ax1,ax2,ax3,ax4,ax5,ax6):
ax.set_ylim(-1.5,1.5)
ax.set_xlabel('Wavenumber ($k/k_0$)',fontsize=16)
fig,axs = plt.subplots(3,3, figsize=(16,15))
kk = 0.95*k_res
for i in range(3):
kk = (0.7*k_res, 0.98*k_res, None)[i]
(ax1,ax2,ax3) = axs[:,i]
spotXY_far,ext_far = sr_calc_far.get_spot_cartesian(k0=kk,**args_calc)
spotXY_nearcirc,ext_nearcirc = sr_calc_nearcirc.get_spot_cartesian(k0=kk,**args_calc)
spotXY_near = sr_calc_near.get_spot(k0=kk,**args_calc)
ax1.imshow(spotXY_far.T,origin='lower', cmap = plt.cm.Spectral_r,
extent=g0*ext_far)
ax2.imshow(spotXY_nearcirc.T, origin='lower',cmap=plt.cm.Spectral_r,
extent=g0*ext_nearcirc/(sr_calc_nearcirc.Args['Grid'][3]) )
ax3.imshow(spotXY_near.T, origin='lower',cmap=plt.cm.Spectral_r,
extent=g0*ext_near/Lsource )
if i==0:
ax1.set_ylabel('Far field',fontsize=16)
ax2.set_ylabel('Near field (cylindric)',fontsize=16)
ax3.set_ylabel('Near field',fontsize=16)
for ax in (ax1,ax2,ax3):
ax.set_xlabel('Angle (mrad)',fontsize=16)
ax.set_xlim(-1.5,1.5)
ax.set_ylim(-1.5,1.5)
Explanation: As mentioned before, the radiation profile for a given (e.g. fundumantal) wavenumber, can be specified as k0 argument to get_spot.
End of explanation |
2,359 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ImageNet with GoogLeNet
Input
GoogLeNet (the neural network structure which this notebook uses) was created to analyse 224x224 pictures from the ImageNet competition.
Output
This notebook classifies each input image into exatly one output classification (out of 1000 possibilities).
Step1: Functions for building the GoogLeNet model with Lasagne are defined in model.googlenet
Step2: The actual structure of the model is somewhat complex, to see the code, uncomment the line below (don't execute the code that appears in the cell, though)
Step3: The 27Mb parameter set has already been downloaded...
Step4: Build the model and select layers we need - the features are taken from the final network layer, before the softmax nonlinearity.
Step5: Load the pretrained weights into the network
Step6: The images need some preprocessing before they can be fed to the CNN
Step7: Quick Test on an Example Image
Let's verify that GoogLeNet and our preprocessing are functioning properly
Step8: Test on Multiple Images in a Directory
Feel free to upload more images into the given directory (or create a new one), and see what the results are... | Python Code:
import theano
import theano.tensor as T
import lasagne
from lasagne.utils import floatX
import numpy as np
import scipy
import matplotlib.pyplot as plt
%matplotlib inline
import os
import json
import pickle
Explanation: ImageNet with GoogLeNet
Input
GoogLeNet (the neural network structure which this notebook uses) was created to analyse 224x224 pictures from the ImageNet competition.
Output
This notebook classifies each input image into exatly one output classification (out of 1000 possibilities).
End of explanation
from model import googlenet
Explanation: Functions for building the GoogLeNet model with Lasagne are defined in model.googlenet:
End of explanation
# Uncomment and execute this cell to see the GoogLeNet source
# %load models/imagenet_theano/googlenet.py
Explanation: The actual structure of the model is somewhat complex, to see the code, uncomment the line below (don't execute the code that appears in the cell, though)
End of explanation
# !wget -N --directory-prefix=./data/googlenet https://s3.amazonaws.com/lasagne/recipes/pretrained/imagenet/blvc_googlenet.pkl
Explanation: The 27Mb parameter set has already been downloaded...
End of explanation
cnn_layers = googlenet.build_model()
cnn_input_var = cnn_layers['input'].input_var
cnn_feature_layer = cnn_layers['loss3/classifier']
cnn_output_layer = cnn_layers['prob']
get_cnn_features = theano.function([cnn_input_var], lasagne.layers.get_output(cnn_feature_layer))
print("Defined GoogLeNet model")
Explanation: Build the model and select layers we need - the features are taken from the final network layer, before the softmax nonlinearity.
End of explanation
params = pickle.load(open('./data/googlenet/blvc_googlenet.pkl', 'rb'), encoding='iso-8859-1')
model_param_values = params['param values']
classes = params['synset words']
lasagne.layers.set_all_param_values(cnn_output_layer, model_param_values)
Explanation: Load the pretrained weights into the network
End of explanation
MEAN_VALUES = np.array([104, 117, 123]).reshape((3,1,1))
def prep_image(im):
if len(im.shape) == 2:
im = im[:, :, np.newaxis]
im = np.repeat(im, 3, axis=2)
# Resize so smallest dim = 224, preserving aspect ratio
h, w, _ = im.shape
if h < w:
#im = skimage.transform.resize(im, (224, w*224/h), preserve_range=True)
im = scipy.misc.imresize(im, (224, w*224/h))
else:
#im = skimage.transform.resize(im, (h*224/w, 224), preserve_range=True)
im = scipy.misc.imresize(im, (h*224/w, 224))
# Central crop to 224x224
h, w, _ = im.shape
im = im[h//2-112:h//2+112, w//2-112:w//2+112]
rawim = np.copy(im).astype('uint8')
# Shuffle axes to c01
im = np.swapaxes(np.swapaxes(im, 1, 2), 0, 1)
# Convert to BGR
im = im[::-1, :, :]
im = im - MEAN_VALUES
return rawim, floatX(im[np.newaxis])
Explanation: The images need some preprocessing before they can be fed to the CNN
End of explanation
im = plt.imread('./images/cat-with-tongue_224x224.jpg')
plt.imshow(im)
rawim, cnn_im = prep_image(im)
plt.imshow(rawim)
p = get_cnn_features(cnn_im)
print(classes[p.argmax()])
Explanation: Quick Test on an Example Image
Let's verify that GoogLeNet and our preprocessing are functioning properly :
End of explanation
image_dir = './images/'
image_files = [ '%s/%s' % (image_dir, f) for f in os.listdir(image_dir)
if (f.lower().endswith('png') or f.lower().endswith('jpg')) and f!='logo.png' ]
import time
t0 = time.time()
for i, f in enumerate(image_files):
im = plt.imread(f)
#print("Image File:%s" % (f,))
rawim, cnn_im = prep_image(im)
prob = get_cnn_features(cnn_im)
top5 = np.argsort(prob[0])[-1:-6:-1]
plt.figure()
plt.imshow(im.astype('uint8'))
plt.axis('off')
for n, label in enumerate(top5):
plt.text(350, 50 + n * 25, '{}. {}'.format(n+1, classes[label]), fontsize=14)
print("DONE : %6.2f seconds each" %(float(time.time() - t0)/len(image_files),))
Explanation: Test on Multiple Images in a Directory
Feel free to upload more images into the given directory (or create a new one), and see what the results are...
End of explanation |
2,360 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Isentropic Analysis
The MetPy function mpcalc.isentropic_interpolation allows for isentropic analysis from model
analysis data in isobaric coordinates.
Step1: Getting the data
In this example, NARR reanalysis data
for 18 UTC 04 April 1987 from the National Centers for Environmental Information will be
used.
Step2: We will reduce the dimensionality of the data as it is pulled in to remove an empty time
dimension, as well as add longitude and latitude as coordinates (instead of data variables).
Step3: To properly interpolate to isentropic coordinates, the function must know the desired output
isentropic levels. An array with these levels will be created below.
Step4: Conversion to Isentropic Coordinates
Once three dimensional data in isobaric coordinates has been pulled and the desired
isentropic levels created, the conversion to isentropic coordinates can begin. Data will be
passed to the function as below. The function requires that isentropic levels, as well as a
DataArray of temperature on isobaric coordinates be input. Any additional inputs (in this
case specific humidity, geopotential height, and u and v wind components) will be
logarithmicaly interpolated to isentropic space.
Step5: The output is an xarray Dataset
Step6: Note that the units on our wind variables are not ideal for plotting. Instead, let us
convert them to more appropriate values.
Step7: Converting to Relative Humidity
The NARR only gives specific humidity on isobaric vertical levels, so relative humidity will
have to be calculated after the interpolation to isentropic space.
Step8: Plotting the Isentropic Analysis
Step9: Montgomery Streamfunction
The Montgomery Streamfunction, ${\psi} = gdz + CpT$, is often desired because its
gradient is proportional to the geostrophic wind in isentropic space. This can be easily
calculated with mpcalc.montgomery_streamfunction. | Python Code:
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
import numpy as np
import xarray as xr
import metpy.calc as mpcalc
from metpy.cbook import get_test_data
from metpy.plots import add_metpy_logo, add_timestamp
from metpy.units import units
Explanation: Isentropic Analysis
The MetPy function mpcalc.isentropic_interpolation allows for isentropic analysis from model
analysis data in isobaric coordinates.
End of explanation
data = xr.open_dataset(get_test_data('narr_example.nc', False))
print(list(data.variables))
Explanation: Getting the data
In this example, NARR reanalysis data
for 18 UTC 04 April 1987 from the National Centers for Environmental Information will be
used.
End of explanation
data = data.squeeze().set_coords(['lon', 'lat'])
Explanation: We will reduce the dimensionality of the data as it is pulled in to remove an empty time
dimension, as well as add longitude and latitude as coordinates (instead of data variables).
End of explanation
isentlevs = [296.] * units.kelvin
Explanation: To properly interpolate to isentropic coordinates, the function must know the desired output
isentropic levels. An array with these levels will be created below.
End of explanation
isent_data = mpcalc.isentropic_interpolation_as_dataset(
isentlevs,
data['Temperature'],
data['u_wind'],
data['v_wind'],
data['Specific_humidity'],
data['Geopotential_height']
)
Explanation: Conversion to Isentropic Coordinates
Once three dimensional data in isobaric coordinates has been pulled and the desired
isentropic levels created, the conversion to isentropic coordinates can begin. Data will be
passed to the function as below. The function requires that isentropic levels, as well as a
DataArray of temperature on isobaric coordinates be input. Any additional inputs (in this
case specific humidity, geopotential height, and u and v wind components) will be
logarithmicaly interpolated to isentropic space.
End of explanation
isent_data
Explanation: The output is an xarray Dataset:
End of explanation
isent_data['u_wind'] = isent_data['u_wind'].metpy.convert_units('kt')
isent_data['v_wind'] = isent_data['v_wind'].metpy.convert_units('kt')
Explanation: Note that the units on our wind variables are not ideal for plotting. Instead, let us
convert them to more appropriate values.
End of explanation
isent_data['Relative_humidity'] = mpcalc.relative_humidity_from_specific_humidity(
isent_data['pressure'],
isent_data['temperature'],
isent_data['Specific_humidity']
).metpy.convert_units('percent')
Explanation: Converting to Relative Humidity
The NARR only gives specific humidity on isobaric vertical levels, so relative humidity will
have to be calculated after the interpolation to isentropic space.
End of explanation
# Set up our projection and coordinates
crs = ccrs.LambertConformal(central_longitude=-100.0, central_latitude=45.0)
lon = isent_data['pressure'].metpy.longitude
lat = isent_data['pressure'].metpy.latitude
# Coordinates to limit map area
bounds = [(-122., -75., 25., 50.)]
# Choose a level to plot, in this case 296 K (our sole level in this example)
level = 0
fig = plt.figure(figsize=(17., 12.))
add_metpy_logo(fig, 120, 245, size='large')
ax = fig.add_subplot(1, 1, 1, projection=crs)
ax.set_extent(*bounds, crs=ccrs.PlateCarree())
ax.add_feature(cfeature.COASTLINE.with_scale('50m'), linewidth=0.75)
ax.add_feature(cfeature.STATES, linewidth=0.5)
# Plot the surface
clevisent = np.arange(0, 1000, 25)
cs = ax.contour(lon, lat, isent_data['pressure'].isel(isentropic_level=level),
clevisent, colors='k', linewidths=1.0, linestyles='solid',
transform=ccrs.PlateCarree())
cs.clabel(fontsize=10, inline=1, inline_spacing=7, fmt='%i', rightside_up=True,
use_clabeltext=True)
# Plot RH
cf = ax.contourf(lon, lat, isent_data['Relative_humidity'].isel(isentropic_level=level),
range(10, 106, 5), cmap=plt.cm.gist_earth_r, transform=ccrs.PlateCarree())
cb = fig.colorbar(cf, orientation='horizontal', aspect=65, shrink=0.5, pad=0.05,
extendrect='True')
cb.set_label('Relative Humidity', size='x-large')
# Plot wind barbs
ax.barbs(lon.values, lat.values, isent_data['u_wind'].isel(isentropic_level=level).values,
isent_data['v_wind'].isel(isentropic_level=level).values, length=6,
regrid_shape=20, transform=ccrs.PlateCarree())
# Make some titles
ax.set_title(f'{isentlevs[level]:~.0f} Isentropic Pressure (hPa), Wind (kt), '
'Relative Humidity (percent)', loc='left')
add_timestamp(ax, isent_data['time'].values.astype('datetime64[ms]').astype('O'),
y=0.02, high_contrast=True)
fig.tight_layout()
Explanation: Plotting the Isentropic Analysis
End of explanation
# Calculate Montgomery Streamfunction and scale by 10^-2 for plotting
msf = mpcalc.montgomery_streamfunction(
isent_data['Geopotential_height'],
isent_data['temperature']
).values / 100.
# Choose a level to plot, in this case 296 K
level = 0
fig = plt.figure(figsize=(17., 12.))
add_metpy_logo(fig, 120, 250, size='large')
ax = plt.subplot(111, projection=crs)
ax.set_extent(*bounds, crs=ccrs.PlateCarree())
ax.add_feature(cfeature.COASTLINE.with_scale('50m'), linewidth=0.75)
ax.add_feature(cfeature.STATES.with_scale('50m'), linewidth=0.5)
# Plot the surface
clevmsf = np.arange(0, 4000, 5)
cs = ax.contour(lon, lat, msf[level, :, :], clevmsf,
colors='k', linewidths=1.0, linestyles='solid', transform=ccrs.PlateCarree())
cs.clabel(fontsize=10, inline=1, inline_spacing=7, fmt='%i', rightside_up=True,
use_clabeltext=True)
# Plot RH
cf = ax.contourf(lon, lat, isent_data['Relative_humidity'].isel(isentropic_level=level),
range(10, 106, 5), cmap=plt.cm.gist_earth_r, transform=ccrs.PlateCarree())
cb = fig.colorbar(cf, orientation='horizontal', aspect=65, shrink=0.5, pad=0.05,
extendrect='True')
cb.set_label('Relative Humidity', size='x-large')
# Plot wind barbs
ax.barbs(lon.values, lat.values, isent_data['u_wind'].isel(isentropic_level=level).values,
isent_data['v_wind'].isel(isentropic_level=level).values, length=6,
regrid_shape=20, transform=ccrs.PlateCarree())
# Make some titles
ax.set_title(f'{isentlevs[level]:~.0f} Montgomery Streamfunction '
r'($10^{-2} m^2 s^{-2}$), Wind (kt), Relative Humidity (percent)', loc='left')
add_timestamp(ax, isent_data['time'].values.astype('datetime64[ms]').astype('O'),
y=0.02, pretext='Valid: ', high_contrast=True)
fig.tight_layout()
plt.show()
Explanation: Montgomery Streamfunction
The Montgomery Streamfunction, ${\psi} = gdz + CpT$, is often desired because its
gradient is proportional to the geostrophic wind in isentropic space. This can be easily
calculated with mpcalc.montgomery_streamfunction.
End of explanation |
2,361 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Scattering and SBG-FS file stream read notebook
This notebook explores setting up tasks for scatter and SBG's file storage setup. This runs multiple samples in a scatter plus batch mode.
Step1: Logging into your account on CGC
Use your authentication token to sync up your account
Step2: Finding the project
Step3: Listing bam files in the project
Step4: Get the app to run
Step5: Set up the number of files per task
Step6: Set up draft tasks and perform analysis | Python Code:
import sevenbridges as sbg
from sevenbridges.errors import SbgError
from sevenbridges.http.error_handlers import *
import re
import datetime
import binpacking
print("SBG library imported.")
print sbg.__version__
Explanation: Scattering and SBG-FS file stream read notebook
This notebook explores setting up tasks for scatter and SBG's file storage setup. This runs multiple samples in a scatter plus batch mode.
End of explanation
prof = 'default'
config_file = sbg.Config(profile=prof)
api = sbg.Api(config=config_file,error_handlers=[rate_limit_sleeper,maintenance_sleeper,general_error_sleeper])
print "Api Configured!!"
print "Api Username : ", api.users.me()
Explanation: Logging into your account on CGC
Use your authentication token to sync up your account
End of explanation
my_project = api.projects.get(id='anellor1/omfgene')
for m in my_project.get_members():
print m
print my_project.billing_group
Explanation: Finding the project
End of explanation
#Listing all files in a project
files = [f for f in api.files.query(project=my_project,limit=100).all() if f.name.endswith(".bam")]
print len(files)
Explanation: Listing bam files in the project
End of explanation
app = api.apps.get(id="anellor1/omfgene/omfgene-wrapper")
print app.name
input_port_app = 'input_file'
Explanation: Get the app to run
End of explanation
import math
inputs = {}
num_files = len(files)
num_hosts = 10 #instances in workflow
jobs_per_host = 36 #threads in per instance
minutes_per_run = 25 #estimated
runs_per_hour = 300 / minutes_per_run # Setting number of hours to run an task to be a LCD of minutes_per_run
tasks_per_run = runs_per_hour * jobs_per_host * num_hosts
num_runs = int(math.ceil(num_files*1.0 / tasks_per_run))
print num_files,tasks_per_run,num_runs
Explanation: Set up the number of files per task
End of explanation
for run_index in range(num_runs):
low_bound = run_index * tasks_per_run
high_bound = min((run_index + 1) * tasks_per_run, num_files)
#print low_bound,high_bound
input_files = files[low_bound:high_bound]
task_name = "OMFGene task Run:{}, NumFiles:{}, TimeStamp {}".format(run_index+1, high_bound-low_bound, datetime.datetime.now())
inputs[input_port_app] = input_files
my_task = api.tasks.create(name=task_name, project=my_project,
app=app, inputs=inputs, run=False)
if my_task.errors:
print(my_task.errors())
else:
print('Your task %s is ready to go' % my_task.name)
# Comment off the statement for execution of tasks.
my_task.run()
Explanation: Set up draft tasks and perform analysis
End of explanation |
2,362 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab 7
Step1: Today's lab reviews Maximum Likelihood Estimation, and introduces interctive plotting in the jupyter notebook.
Part 1
Step2: Question 2
Step3: Question 3
Step4: Question 4
Step5: Part 2
Step6: What's the log-likelihood? As before, don't just use np.log(btype_likelihood).
Step7: Question 6
Step8: Now, complete the plot_btype_likelihood_3d function.
Step9: Question 7
Step10: We also can make some 2d color plots, to get a better view of exactly where our values are maximized. As in the 3d plots, redder colors refer to higher likelihoods.
Step11: As with the binomial, the likelihood has a "sharper" distribution than the log-likelihood. So, plotting the likelihood, we can see our maximal point with greater clarity.
Step12: Question 8
Step13: Submitting your assignment
If you made a good-faith effort to complete the lab, change i_finished_the_lab to True in the cell below. In any case, run the cells below to submit the lab. | Python Code:
# Run this cell to set up the notebook.
import numpy as np
import pandas as pd
import seaborn as sns
import scipy as sci
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib import patches, cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
from mpl_toolkits.mplot3d import Axes3D
from client.api.notebook import Notebook
ok = Notebook('lab07.ok')
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets
Explanation: Lab 7: Maximum Likelihood Estimation
End of explanation
factorial = sci.misc.factorial # so you don't have to look it up
def likelihood(n, p, x):
return factorial(n)/factorial(n-x)/factorial(x) * p**x * (1-p)**(n-x) #SOLUTION
Explanation: Today's lab reviews Maximum Likelihood Estimation, and introduces interctive plotting in the jupyter notebook.
Part 1: Likelihood of the Binomial Distribution
Recall that the binomial distribution describes the chance of $x$ successes out of $n$ trials, where the trials are independent and each has a probability $p$ of success. For instance, the number of sixes rolled in ten rolls of a die is distributed $Binomial(10, \frac{1}{6})$.
Given $n$ draws from a $Binomial(n, p)$ distribution, which resulted in $x$ successes, we wish to find the chance $p$ of success via maximum likelihood estimation.
Question 1: Likelihood of the Binomial
What is the likelihood function for the binomial, L(p)? Remember, this is equal to the probability of the data occuring given some chance of success $p$.
As an aid, we provide a factorial(x) function below.
End of explanation
def log_likelihood(n, p, x):
return x * np.log(p) + (n - x) * np.log(1 - p) #SOLUTION
Explanation: Question 2: Log-likelihood of the Binomial
What is the log of the likelihood function for the binomial, $log(L(p)) = lik(p)$? Don't just use np.log(likelihood) - determine the value as a new function of n, x, and p.
End of explanation
def highest_likelihood(n, x):
return x / n #SOLUTION
Explanation: Question 3: Maximum Likelihood Estimate of the Binomial
Given $n$ samples from a binomial distribution $Bin(n, p)$, $x$ of which were successes, what is the value $p$ which maximizes the log-likelihood function?
Hint: Find $\frac{d}{dp}lik(p)$, set it equal to 0, and solve for p in terms of x and n.
End of explanation
n_widget = widgets.FloatSlider(min=1, max=20, step=1, value=20)
x_widget = widgets.FloatSlider(min=0, max=20, step=1, value=5)
# We want to make sure x <= n, otherwise we get into trouble
def update_x_range(*args):
x_widget.max = n_widget.value
n_widget.observe(update_x_range, 'value')
def plot_likelihood(n, x, plot_log=False):
# values of p are on the x-axis.
# We plot every value from 0.01 to 0.99
pvals = np.arange(1, 100)/100
# values of either Likelihood(p) or log(Likelihood(p))
# are on the y-axis, depending on the method
if plot_log:
yvals = log_likelihood(n, pvals, x) #SOLUTION
else:
yvals = likelihood(n, pvals, x) #SOLUTION
plt.plot(pvals, yvals)
# Put a line where L(p) is maximized and print the value p*
p_star = highest_likelihood(n, x)
plt.axvline(p_star, lw=1.5, color='r', ls='dashed')
plt.text(p_star + 0.01, min(yvals), 'p*=%.3f' % (p_star))
plt.xlabel('p')
if plot_log:
plt.ylabel('lik(p)')
plt.title("log(Likelihood(p)), if X ~ bin(n, p) = k")
else:
plt.ylabel('L(p)')
plt.title("Likelihood of p, if X ~ bin(n, p) = k")
plt.show()
interact(plot_likelihood, n=n_widget, x=x_widget, log=False);
Explanation: Question 4: Interactive Plotting
Using the interact jupyter notebook extension, we can create interactive plots. In this case, we create an interactive plot of likelihood as a function of $p$ - interactive in the sense that we can plug in our own values of $n$ and $x$ and see how the plot changes. We can also choose our method of plotting - likelihood or log(likelihood).
We've provided code that creates sliders for n and x, and a checkbox to determine whether to plot the likelihood or the log-likelihood. Finish our code by defining the variable yvals, and then run it and play around a bit with the output.
End of explanation
def btype_likelihood(pa, pb, po, O, A, B, AB):
return (po**2)**O * (pa**2 + 2*pa*po)**A * (pb**2 + 2*pb*po)**B * (2*pa*pb)**AB #SOLUTION
Explanation: Part 2: Likelihood of the Blood Types
Here's a more complex example, involving several variables. Recall the blood types experiment from lecture. We assume a model where a person's blood type is determined by two genes, each of which is identically-distributed between three alleles. We call the alleles $a$, $b$, and $o$. For each person, the two specific allele variants are random and independent of one another.
We know that, if a person has alleles $a$ and $b$, they have blood type $AB$. If the have alleles $a$ and $a$, or $a$ and $o$, they have blood type $A$. Similarly, if the have alleles $b$ and $b$, or $b$ and $o$, they have blood type $B$. Finally, if they have alleles $o$ and $o$, they have blood type $O$.
We measure the blood types of a group of people, and get counts of each type $A$, $B$, $AB$, and $O$. Using these counts, we wish to determine the frequency of alleles $a$, $b$, and $o$. We know that, under the assumption of Hardy-Weinberg equilibrium:
The frequency of type $O$ is $p_o^2$.
The frequency of type $A$ is $p_a^2 + 2p_op_a$.
The frequency of type $B$ is $p_b^2 + 2p_op_b$.
And the frequency of type $AB$ is $2p_ap_b$.
Question 5: blood type likelihood formulas
What's the likelihood of allele probabilities $p_a$, $p_b$, $p_o$, given sample counts O, A, B, AB?
Hint: Think about how the binomial formula can be extended. Don't worry about the $n$ choose $k$ bit - we're only concerned with the specific values O, A, B, and AB that we observed, so that term will be the same regardless of $p_a, p_b, p_o$, and it can be ignored.
End of explanation
def btype_log_likelihood(pa, pb, po, O, A, B, AB):
return np.log(po**2)*O + np.log(pa**2 + 2*pa*po)*A + np.log(pb**2 + 2*pb*po)*B + np.log(2*pa*pb)*AB #SOLUTION
Explanation: What's the log-likelihood? As before, don't just use np.log(btype_likelihood).
End of explanation
def plot_surface_3d(X, Y, Z, orient_x = 45, orient_y = 45):
highest_Z = max(Z.reshape(-1,1))
lowest_Z = min(Z.reshape(-1,1))
fig = plt.figure()
ax = fig.gca(projection='3d')
surf = ax.plot_surface(X, Y, Z,
cmap=cm.coolwarm,
linewidth=0,
antialiased=False,
rstride=5, cstride=5)
ax.zaxis.set_major_locator(LinearLocator(5))
ax.zaxis.set_major_formatter(FormatStrFormatter('%.1f'))
ax.view_init(orient_y, orient_x)
fig.colorbar(surf, shrink=0.5, aspect=5)
plt.title("log(Likelihood(p_a, p_b))")
plt.xlabel("p_a")
plt.ylabel("p_b")
plt.show()
Explanation: Question 6: Interactive 3D Plots of Allele Distribution Likelihood
Fill in the function plot_btype_likelihood_3d, which plots the log-likelihood as $p_a$ and $p_b$ vary (since $p_o$ is a simple function of $p_a$ and $p_b$, this covers all possible triplets of values). You'll need to define four methods of interact input - we recommend sticking with FloatSlider. Allow for samples of up to 1000 people, with anywhere from 0 to 100% of the population having each phenotype $A$, $B$, $AB$, $O$.
First, run this cell to define a function for plotting 3D graphs:
End of explanation
O = widgets.FloatSlider(min=1, max=200, step=1, value=120) #SOLUTION
A = widgets.FloatSlider(min=1, max=200, step=1, value=100) #SOLUTION
B = widgets.FloatSlider(min=1, max=200, step=1, value=30) #SOLUTION
AB = widgets.FloatSlider(min=1, max=200, step=1, value=5) #SOLUTION
def plot_btype_likelihood_3d(O, A, B, AB):
pa = np.arange(1, 50)/100
pb = np.arange(1, 50)/100
pa, pb = np.meshgrid(pa, pb) # get all pairs
po = 1 - pa - pb #SOLUTION
likelihoods = btype_log_likelihood(pa, pb, po, O, A, B, AB) #SOLUTION
plot_surface_3d(pa, pb, likelihoods)
interact(plot_btype_likelihood_3d, O=O, A=A, B=B, AB=AB);
Explanation: Now, complete the plot_btype_likelihood_3d function.
End of explanation
O2 = widgets.FloatSlider(min=1, max=200, step=1, value=120) #SOLUTION
A2 = widgets.FloatSlider(min=1, max=200, step=1, value=100) #SOLUTION
B2 = widgets.FloatSlider(min=1, max=200, step=1, value=30) #SOLUTION
AB2 = widgets.FloatSlider(min=1, max=200, step=1, value=5) #SOLUTION
X = widgets.FloatSlider(min=-360, max=360, step=15, value=90) #SOLUTION
Y = widgets.FloatSlider(min=-360, max=360, step=15, value=30) #SOLUTION
def plot_btype_likelihood_3d_oriented(O, A, B, AB, X, Y):
pa = np.arange(1, 50)/100
pb = np.arange(1, 50)/100
pa, pb = np.meshgrid(pa, pb) # get all pairs
po = 1 - pa - pb #SOLUTION
likelihoods = btype_log_likelihood(pa, pb, po, O, A, B, AB) #SOLUTION
plot_surface_3d(pa, pb, likelihoods, orient_x=X, orient_y=Y)
interact(plot_btype_likelihood_3d_oriented, O=O2, A=A2, B=B2, AB=AB2, X=X, Y=Y);
Explanation: Question 7: Rotating 3D Plots of Allele Distribution Likelihood
We can also rotate this 3D graphic by passing values orient_x and orient_y to the plot_surface_3d function. Add two new sliders, and fill in the plot_btype_likelihood_3d_oriented function. You may want to set the step size on the new sliders to a value greater than one, and make sure the max value is large enough such that they can rotate all the way around. You should be able to copy-paste a good deal of code from above.
End of explanation
O3 = widgets.FloatSlider(min=1, max=200, step=1, value=120)
A3 = widgets.FloatSlider(min=1, max=200, step=1, value=100)
B3 = widgets.FloatSlider(min=1, max=200, step=1, value=30)
AB3 = widgets.FloatSlider(min=1, max=200, step=1, value=5)
def plot_btype_log_likelihood_heatmap(O, A, B, AB):
pa = np.arange(1, 50)/100
pb = np.arange(1, 50)/100
pa, pb = np.meshgrid(pa, pb) # get all possible pairs
po = 1 - pa - pb
likelihoods = btype_log_likelihood(pa, pb, po, O, A, B, AB)
plt.pcolor(pa, pb, likelihoods, cmap=cm.coolwarm)
plt.xlabel("p_a")
plt.ylabel("p_b")
plt.title("log(Likelihood(p_a, p_b))")
plt.show()
interact(plot_btype_log_likelihood_heatmap, O=O3, A=A3, B=B3, AB=AB3);
Explanation: We also can make some 2d color plots, to get a better view of exactly where our values are maximized. As in the 3d plots, redder colors refer to higher likelihoods.
End of explanation
O4 = widgets.FloatSlider(min=1, max=200, step=1, value=120)
A4 = widgets.FloatSlider(min=1, max=200, step=1, value=100)
B4 = widgets.FloatSlider(min=1, max=200, step=1, value=30)
AB4 = widgets.FloatSlider(min=1, max=200, step=1, value=5)
def plot_btype_likelihood_heatmap(O, A, B, AB):
pa = np.arange(1, 100)/100
pb = np.arange(1, 100)/100
pa, pb = np.meshgrid(pa, pb) # get all possible pairs
po = 1 - pa - pb
likelihoods = btype_likelihood(pa, pb, po, O, A, B, AB)
likelihoods[(pa + pb) > 1] = 0 # Don't plot impossible probability pairs
plt.pcolor(pa, pb, likelihoods, cmap=cm.coolwarm)
plt.xlabel("p_a")
plt.ylabel("p_b")
plt.title("Likelihood(p_a, p_b)")
plt.show()
interact(plot_btype_likelihood_heatmap, O=O4, A=A4, B=B4, AB=AB4);
Explanation: As with the binomial, the likelihood has a "sharper" distribution than the log-likelihood. So, plotting the likelihood, we can see our maximal point with greater clarity.
End of explanation
O5 = widgets.FloatSlider(min=1, max=200, step=1, value=120)
A5 = widgets.FloatSlider(min=1, max=200, step=1, value=100)
B5 = widgets.FloatSlider(min=1, max=200, step=1, value=30)
AB5 = widgets.FloatSlider(min=1, max=200, step=1, value=5)
def maximize_btype_likelihood(O, A, B, AB):
def flipped_btype_fixed_params(params):
# "params" is a list containing p_a, p_b, p_o
pa, pb, po = params
# We wish to return a value which is minimized when the log-likelihood is maximized...
# What function would accomplish this?
return -btype_log_likelihood(pa, pb, po, O, A, B, AB) #SOLUTION
# We need to provide an initial guess at the solution
initial_guess = [1/3, 1/3, 1/3]
# Each variable is bounded between zero and one
# sci.optimize.minimize seems to dislike exact zero bounds, though, so we use 10^-6
bnds = ((1e-6, 1), (1e-6, 1), (1e-6, 1))
# An additional constraint on our parameters - they must sum to one
# The minimizer will only check params where constraint_fn(params) = 0
def constraint_fn(params):
# "params" is a list containing p_a, p_b, p_o
return sum(params) - 1
constraint = ({'type': 'eq', 'fun': constraint_fn},)
pa, pb, po = sci.optimize.minimize(flipped_btype_fixed_params,
x0=initial_guess,
bounds=bnds,
constraints=constraint).x
return "pa* = %.3f, pb* = %.2f, po* = %.3f" % (pa, pb, po)
interact(maximize_btype_likelihood, O=O5, A=A5, B=B5, AB=AB5);
Explanation: Question 8: Getting the MLE for the blood-type question
Finally, we want to get our actual estimates for $p_a, p_b, p_o$! However, unlike in the Binomial example, we don't want to calculate our MLE by hand. So instead, we use function-minimizers to calculate the highest likelihood.
scipy's optimize.minimize function allows us to find the tuple of arguments that minimizes a function of $n$ variables, subject to desired constraints. Given any set of observed phenotype counts $O, A, B, AB$, we can thus find the specific values $p_a, p_b, p_o$ that maximize the log-likelihood function. Finish the nested function flipped_btype_fixed_params in order to do just that.
End of explanation
i_finished_the_lab = False
_ = ok.grade('qcompleted')
_ = ok.backup()
_ = ok.submit()
Explanation: Submitting your assignment
If you made a good-faith effort to complete the lab, change i_finished_the_lab to True in the cell below. In any case, run the cells below to submit the lab.
End of explanation |
2,363 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<CENTER>
<header>
<h1>Pandas Tutorial</h1>
<h3>EuroScipy, Cambridge UK, August 27th, 2015</h3>
<h2>Joris Van den Bossche</h2>
<p></p>
Source
Step1: Let's start with a showcase
Case study
Step2: to answering questions about this data in a few lines of code
Step3: How many exceedances of the limit values?
Step4: What is the difference in diurnal profile between weekdays and weekend? | Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn
pd.options.display.max_rows = 8
Explanation: <CENTER>
<header>
<h1>Pandas Tutorial</h1>
<h3>EuroScipy, Cambridge UK, August 27th, 2015</h3>
<h2>Joris Van den Bossche</h2>
<p></p>
Source: <a href="https://github.com/jorisvandenbossche/2015-EuroScipy-pandas-tutorial">https://github.com/jorisvandenbossche/2015-EuroScipy-pandas-tutorial</a>
</header>
</CENTER>
About me: Joris Van den Bossche
PhD student at Ghent University and VITO, Belgium
bio-science engineer, air quality research
pandas core dev
->
https://github.com/jorisvandenbossche
@jorisvdbossche
Licensed under CC BY 4.0 Creative Commons
Content of this talk
Why do you need pandas?
Basic introduction to the data structures
Guided tour through some of the pandas features with two case studies: movie database and a case study about air quality
If you want to follow along, this is a notebook that you can view or run yourself:
All materials (notebook, data, link to nbviewer): https://github.com/jorisvandenbossche/2015-EuroScipy-pandas-tutorial
You need pandas >= 0.15.2 (easy solution is using Anaconda)
Some imports:
End of explanation
data = pd.read_csv('data/airbase_data.csv', index_col=0, parse_dates=True)
data
Explanation: Let's start with a showcase
Case study: air quality in Europe
AirBase (The European Air quality dataBase): hourly measurements of all air quality monitoring stations from Europe
Starting from these hourly data for different stations:
End of explanation
data['1999':].resample('A').plot(ylim=[0,100])
Explanation: to answering questions about this data in a few lines of code:
Does the air pollution show a decreasing trend over the years?
End of explanation
exceedances = data > 200
exceedances = exceedances.groupby(exceedances.index.year).sum()
ax = exceedances.loc[2005:].plot(kind='bar')
ax.axhline(18, color='k', linestyle='--')
Explanation: How many exceedances of the limit values?
End of explanation
data['weekday'] = data.index.weekday
data['weekend'] = data['weekday'].isin([5, 6])
data_weekend = data.groupby(['weekend', data.index.hour])['FR04012'].mean().unstack(level=0)
data_weekend.plot()
Explanation: What is the difference in diurnal profile between weekdays and weekend?
End of explanation |
2,364 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ch. 11 - Evaluating and deploying the model
In chapter 10 we have learned a lot of new tricks and tools to build neural networks that can deal with stuructured data such as the bank marketing dataset.
In this chapter we will take a look at model evaluation and the steps that are nessecary till we can deploy out model in the real world.
Loading the model
In the last chapter we have experimented with different tools for our neural network. To spare the step in this chapter, a model has been saved as an H5 file already. It was created using the tools from the last chapter.
Step1: Loading the data
We have also seen already how to split data into train, dev and test sets. To save repetition, the data from used in the last chapter has been saved and can now be loaded using numpys load function. Note that this directly creates a numpy array and not a pandas dataframe as it did when we loaded the data from csv.
Step2: Evaluating on the evaluation criteria
In the last chapter, we looked at the accuracy and loss of the model. Accuracy is a good proxy for our actual evaluation metric but it is not exactly the same. Recall that our evaluation metric is the profit made per thousand calls and can be defined as
$$ profit = \frac{100 * TP - 40 * FP}{TP + FP} * 1000 $$
Where $TP$ are true positives, where our model predicted the customer would subscribe and the customer actually did subscribe and $FP$ are false positives where the model predicted the customer would subscribe but the customer did not subcribe.
In python we can define the profit function like this
Step3: Before we make statements on the profit we need predictions from the model. Note that our model does not output weather a customer will buy or not but only the probability of a purchase. So we need to convert those probabilities into predictions. We do this by setting a threshold and assigning all probabilities above this threshold to positive and all below to negative. In this case the threshold is 50% meaning that for all customers with a higher than 50% predicted probability of a purchase we expect a purchase and call them.
Step4: To calculate the profit we need the true and false positives. A handy way to calculate true and positives as well as true and false negatives is sklearns confusion matrix. A confusion matrix is a matrix that counts how the model 'confuses' or correctly classifies examples.
||Predicted Negative|Predicted Positive|
|-----------------|
|Actual Negative|TN|FP|
|Actual Positve|FN|TP|
We can get the values out of the matrix by unrolling it with the numpy ravel function.
Step5: Now we can calculate the profit of our campaign using the profit function defined above.
Step6: The profit predicted on the test set looks quite well \$72,231.40! An increase of more than \$42,000 over the base case. The Portuguese bank will be quite happy that we boosted the profit of their campaign. However, before shipping the model into all call centers, we should give the confusion matrix a closer look, as there is more information to be gained. We can plot it with a seaborn heatmap.
Step7: Interpreting the confusion matrix
As you can see, the model correctly classifies the majority of cases. However, it seems like it is a bit pessimistic. See how there are more false negatives than false positives. This however is not a bad thing as false positives cost money and false negatives do not. Confusion matrices can be very useful especially when you are working with more than two classes. It might well be that your classifier confuses two categories very often while it does well on the rest. Then it is often a good idea to go back and either engineer features or add data that will help distinguish the two classes in question. A confusion matrix is a good way to get an overview about what is going on and whether there is a systematic error.
Experimenting with thresholds
As false positives cost money, we should think about at which threshold we should call customers. 50% probability seems like an a bit arbitrary value, so it is worth investigating the performance under different thresholds. To this end, we will calculate the expected profit over an array of thresholds from 0 to 1.
Step8: We can now plot the expected profits
Step9: Naturally the expected profit per thousand calls rises the higher we set the probability of a yes before we call. It reaches the optimum case of \$100,000 at a threshold of 0.91 and at a threshold of 0.95 there are no positives any more. Of course simply evaluating the profit on a per thousand calls base and not factoring in that many customers who might have bought did not get called. Whether or not this is a problem depends on of many potential calls there are. If we have a limited database of phone numbers we might want to set the threshold lower or we would make only very few calls. In practice, we therefore usually sort customers by predicted likelihood of a subscription and then call the ones with the best chances first. If we have many potential customers we can call only the ones with very good chances, otherwise we will have to make do with the lower rungs as well.
Checking for systematic bias
Before we roll out the model we should make sure that does not systematically discriminate against a certain group of people. While a model that systematically does not call a minority is usually no drama, a model that systematically disapproves loans to that minority is. Machine learning models amplify biases in the data. Since they find correlations and patterns, they are likely to overreact to them a little bit. We can imagine this like setting up rules of thumb. We know that a rule of thumb is not entirely correct when we use it ourselves but it is better than just a random choice so we use it in our day to day life a lot. The same goes for models which amplify patterns and use them like rules of thumb, even if the reality is less clear.
When checking for hidden biases you usually would check for data your model should not discriminate on, such as gender or skin color. These variables should not be trained on in the first place but there might be traces of them in the data since gender might correlate with the job status for example. There are two kinds of biases our model could exhibit
Step10: It looks like the model does not exhibit any biases about young people. This is the outcome we hoped for so we can proceed.
The second case of undesired biases that get picked up by the model is more pervasive and often more difficult to deal with. A classic example comes from HR
Step11: It seems as males are in fact a little bit more likely to subscribe although the effect is quite small. We do not want our model to discriminate based on gender which is why gender was not included in the training set. But perhaps the model did pick up on it and it might even have over amplified the effect.
Step12: It seems as the model did not pick up on gender bias. The probability for males is close to 50%, as it is for the entire dataset.
Checking for hidden biases is tricky business and a few simple statistics seldom do the trick. The biggest danger of hidden biases are that they are hidden. It is hard to anticipate them in advance and check for them. This is why it is important to give people who interact with the model a voice. They might spot them because they get confronted with them.
Testing on new data
Now that we are reasonable sure we have a good model, we need to test it on new data before we ship it. Test data should be hold out data that is not touched during the training or evaluation process. Because if we change our model based on the evaluation we run the risk of fitting our model to the data we evaluate on. Therefore it is important to have a 'clean' dataset for testing. We will run the profit evaluation above again, this time on the test set
Step13: Accuracy and loss look good. There seems to be little difference to the outcomes of our dev set. | Python Code:
import keras
from keras.models import load_model
model = load_model('./support_files/Ch11_model.h5')
Explanation: Ch. 11 - Evaluating and deploying the model
In chapter 10 we have learned a lot of new tricks and tools to build neural networks that can deal with stuructured data such as the bank marketing dataset.
In this chapter we will take a look at model evaluation and the steps that are nessecary till we can deploy out model in the real world.
Loading the model
In the last chapter we have experimented with different tools for our neural network. To spare the step in this chapter, a model has been saved as an H5 file already. It was created using the tools from the last chapter.
End of explanation
import numpy as np
X_dev, y_dev = np.load('./support_files/Ch_11_X_dev.npy'), np.load('./support_files/Ch_11_y_dev.npy')
X_test, y_test = np.load('./support_files/Ch_11_X_test.npy'), np.load('./support_files/Ch_11_y_test.npy')
Explanation: Loading the data
We have also seen already how to split data into train, dev and test sets. To save repetition, the data from used in the last chapter has been saved and can now be loaded using numpys load function. Note that this directly creates a numpy array and not a pandas dataframe as it did when we loaded the data from csv.
End of explanation
def profit_per_thousand_calls(tp,fp):
profit = (100*tp - 40*fp) /(tp + fp) * 1000
return profit
Explanation: Evaluating on the evaluation criteria
In the last chapter, we looked at the accuracy and loss of the model. Accuracy is a good proxy for our actual evaluation metric but it is not exactly the same. Recall that our evaluation metric is the profit made per thousand calls and can be defined as
$$ profit = \frac{100 * TP - 40 * FP}{TP + FP} * 1000 $$
Where $TP$ are true positives, where our model predicted the customer would subscribe and the customer actually did subscribe and $FP$ are false positives where the model predicted the customer would subscribe but the customer did not subcribe.
In python we can define the profit function like this:
End of explanation
# Get the probabilities
predictions = model.predict(X_dev)
# Turn the probabilities into definite predictions
predictions[predictions >= 0.5] = 1
predictions[predictions < 0.5] = 0
Explanation: Before we make statements on the profit we need predictions from the model. Note that our model does not output weather a customer will buy or not but only the probability of a purchase. So we need to convert those probabilities into predictions. We do this by setting a threshold and assigning all probabilities above this threshold to positive and all below to negative. In this case the threshold is 50% meaning that for all customers with a higher than 50% predicted probability of a purchase we expect a purchase and call them.
End of explanation
from sklearn.metrics import confusion_matrix
# calculate confusion matrix
cm = confusion_matrix(y_pred=predictions,y_true=y_dev)
# Unroll confusion matrix
tn, fp, fn, tp = cm.ravel()
Explanation: To calculate the profit we need the true and false positives. A handy way to calculate true and positives as well as true and false negatives is sklearns confusion matrix. A confusion matrix is a matrix that counts how the model 'confuses' or correctly classifies examples.
||Predicted Negative|Predicted Positive|
|-----------------|
|Actual Negative|TN|FP|
|Actual Positve|FN|TP|
We can get the values out of the matrix by unrolling it with the numpy ravel function.
End of explanation
profit_per_thousand_calls(tp,fp)
Explanation: Now we can calculate the profit of our campaign using the profit function defined above.
End of explanation
# Import plotting libraries
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
# Create heatmap
ax = sns.heatmap(cm,xticklabels=['Negative','Positive'],yticklabels=['Negative','Positive'])
# Add axis labels
ax.set(xlabel='Predictions', ylabel='True values')
# Render heatmap
plt.show()
Explanation: The profit predicted on the test set looks quite well \$72,231.40! An increase of more than \$42,000 over the base case. The Portuguese bank will be quite happy that we boosted the profit of their campaign. However, before shipping the model into all call centers, we should give the confusion matrix a closer look, as there is more information to be gained. We can plot it with a seaborn heatmap.
End of explanation
# Get the probabilities
predictions = model.predict(X_dev)
# Create array of thresholds ranging from 0 to 1 in 0.01 steps
thesholds = np.arange(0,1,0.01)
# Create empty array holding profits for all thresholds
profits = []
# Loop over possible thresholds
for t in thesholds:
# Create a copy of the predicted probabilities
pred = predictions.copy()
# Turn the probabilities into definite predictions
# This time using the threshold
pred[predictions >= t] = 1
pred[predictions < t] = 0
# Get confusion matrix
tn, fp, fn, tp = confusion_matrix(y_pred=pred,y_true=y_dev).ravel()
# At some point there might not be any positives anymore, so we should stop there
if ((tp + fp ) == 0):
print('No more positives at threshold:',t)
break # Ends the loop
# Calculate the profit and add it to the list.
profits.append(profit_per_thousand_calls(tp,fp))
Explanation: Interpreting the confusion matrix
As you can see, the model correctly classifies the majority of cases. However, it seems like it is a bit pessimistic. See how there are more false negatives than false positives. This however is not a bad thing as false positives cost money and false negatives do not. Confusion matrices can be very useful especially when you are working with more than two classes. It might well be that your classifier confuses two categories very often while it does well on the rest. Then it is often a good idea to go back and either engineer features or add data that will help distinguish the two classes in question. A confusion matrix is a good way to get an overview about what is going on and whether there is a systematic error.
Experimenting with thresholds
As false positives cost money, we should think about at which threshold we should call customers. 50% probability seems like an a bit arbitrary value, so it is worth investigating the performance under different thresholds. To this end, we will calculate the expected profit over an array of thresholds from 0 to 1.
End of explanation
plt.plot(profits)
Explanation: We can now plot the expected profits:
End of explanation
# 'age_young' is the 22nd column in the numpy array so we can get the data for all young customers like this:
X_dev_young = X_dev[X_dev[:,22]==1]
# The indices in the data and the labels match so we can get the labels for the young people like this:
y_dev_young = y_dev[X_dev[:,22]==1]
# Now we can calculate the actual probability that a young person subscribes
y_dev_young.mean()
# And the predicted probability that a young person subcribes
young_pred = model.predict(X_dev_young)
young_pred.mean()
Explanation: Naturally the expected profit per thousand calls rises the higher we set the probability of a yes before we call. It reaches the optimum case of \$100,000 at a threshold of 0.91 and at a threshold of 0.95 there are no positives any more. Of course simply evaluating the profit on a per thousand calls base and not factoring in that many customers who might have bought did not get called. Whether or not this is a problem depends on of many potential calls there are. If we have a limited database of phone numbers we might want to set the threshold lower or we would make only very few calls. In practice, we therefore usually sort customers by predicted likelihood of a subscription and then call the ones with the best chances first. If we have many potential customers we can call only the ones with very good chances, otherwise we will have to make do with the lower rungs as well.
Checking for systematic bias
Before we roll out the model we should make sure that does not systematically discriminate against a certain group of people. While a model that systematically does not call a minority is usually no drama, a model that systematically disapproves loans to that minority is. Machine learning models amplify biases in the data. Since they find correlations and patterns, they are likely to overreact to them a little bit. We can imagine this like setting up rules of thumb. We know that a rule of thumb is not entirely correct when we use it ourselves but it is better than just a random choice so we use it in our day to day life a lot. The same goes for models which amplify patterns and use them like rules of thumb, even if the reality is less clear.
When checking for hidden biases you usually would check for data your model should not discriminate on, such as gender or skin color. These variables should not be trained on in the first place but there might be traces of them in the data since gender might correlate with the job status for example. There are two kinds of biases our model could exhibit:
- A deviation from the data: E.g. our data reflects no lower subscription likelihood for a certain minority but the model discriminates against it anyway.
- A bias reflected in the data: E.g. our data is skewed but we know that the minority is not more or less likely to subscribe or do not want our model to react to it.
To deal with the first kind, we can compare the average probabilities the model assigns to one group against the actual probabilities of that group. Let's say we want to check whether our model discriminates against young people. We know from chapter 9 that young people are actually a bit more likely to subscribe and there are good reasons for it as they have more time and likely do not have a long term deposit yet. But perhaps our model is overly optimistic about them:
End of explanation
# Load the gender data
# It is encoded so that 1 = male and 0 = female
gender = np.load('./support_files/Ch11_gender.npy')
# Get the subset of males in our dev data
X_dev_male = X_dev[gender[:,0] == 1]
y_dev_male = y_dev[gender[:,0] == 1]
# Calculate probability of male subscription to check for bias in data
y_dev_male.mean()
Explanation: It looks like the model does not exhibit any biases about young people. This is the outcome we hoped for so we can proceed.
The second case of undesired biases that get picked up by the model is more pervasive and often more difficult to deal with. A classic example comes from HR: A company has kept good records about promotions and wants to train a model on it. However, the managers that made those promotions where not gender neutral and did promote fewer women. If we train on this data we should expect that the model will install a firm glass ceiling.
As an example we will check whether our model discriminates based on gender. The gender data is not part of the original UCI dataset and was generated for the purpose of giving an example.
End of explanation
# Get predicted probabailites
male_pred = model.predict(X_dev_male)
# Calculate mean probability
male_pred.mean()
Explanation: It seems as males are in fact a little bit more likely to subscribe although the effect is quite small. We do not want our model to discriminate based on gender which is why gender was not included in the training set. But perhaps the model did pick up on it and it might even have over amplified the effect.
End of explanation
# First check accuracy and loss
model.evaluate(x=X_test,y=y_test)
Explanation: It seems as the model did not pick up on gender bias. The probability for males is close to 50%, as it is for the entire dataset.
Checking for hidden biases is tricky business and a few simple statistics seldom do the trick. The biggest danger of hidden biases are that they are hidden. It is hard to anticipate them in advance and check for them. This is why it is important to give people who interact with the model a voice. They might spot them because they get confronted with them.
Testing on new data
Now that we are reasonable sure we have a good model, we need to test it on new data before we ship it. Test data should be hold out data that is not touched during the training or evaluation process. Because if we change our model based on the evaluation we run the risk of fitting our model to the data we evaluate on. Therefore it is important to have a 'clean' dataset for testing. We will run the profit evaluation above again, this time on the test set:
End of explanation
# Get the probabilities now from the test set
predictions = model.predict(X_test)
# Calculate profit with 50% threshold
# Turn the probabilities into definite predictions
predictions[predictions >= 0.5] = 1
predictions[predictions < 0.5] = 0
# calculate confusion matrix
tn, fp, fn, tp = confusion_matrix(y_pred=predictions,y_true=y_test).ravel()
profit_per_thousand_calls(tp,fp)
Explanation: Accuracy and loss look good. There seems to be little difference to the outcomes of our dev set.
End of explanation |
2,365 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interact Exercise 6
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
Step1: Exploring the Fermi distribution
In quantum statistics, the Fermi-Dirac distribution is related to the probability that a particle will be in a quantum state with energy $\epsilon$. The equation for the distribution $F(\epsilon)$ is
Step3: In this equation
Step4: Write a function plot_fermidist(mu, kT) that plots the Fermi distribution $F(\epsilon)$ as a function of $\epsilon$ as a line plot for the parameters mu and kT.
Use enegies over the range $[0,10.0]$ and a suitable number of points.
Choose an appropriate x and y limit for your visualization.
Label your x and y axis and the overall visualization.
Customize your plot in 3 other ways to make it effective and beautiful.
Step5: Use interact with plot_fermidist to explore the distribution | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import Image
from IPython.html.widgets import interact, interactive, fixed
Explanation: Interact Exercise 6
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
End of explanation
Image('fermidist.png')
Explanation: Exploring the Fermi distribution
In quantum statistics, the Fermi-Dirac distribution is related to the probability that a particle will be in a quantum state with energy $\epsilon$. The equation for the distribution $F(\epsilon)$ is:
End of explanation
print (np.e)
def fermidist(energy, mu, kT):
Compute the Fermi distribution at energy, mu and kT.
# YOUR CODE HERE
#raise NotImplementedError()
F = 1/(np.e**((energy-mu)/kT)+1)
return F
assert np.allclose(fermidist(0.5, 1.0, 10.0), 0.51249739648421033)
assert np.allclose(fermidist(np.linspace(0.0,1.0,10), 1.0, 10.0),
np.array([ 0.52497919, 0.5222076 , 0.51943465, 0.5166605 , 0.51388532,
0.51110928, 0.50833256, 0.50555533, 0.50277775, 0.5 ]))
Explanation: In this equation:
$\epsilon$ is the single particle energy.
$\mu$ is the chemical potential, which is related to the total number of particles.
$k$ is the Boltzmann constant.
$T$ is the temperature in Kelvin.
In the cell below, typeset this equation using LaTeX:
\begin{equation}
F(\epsilon) = \frac{1}{e^{\frac{\epsilon - \mu}{kT}}+1}
\end{equation}
Define a function fermidist(energy, mu, kT) that computes the distribution function for a given value of energy, chemical potential mu and temperature kT. Note here, kT is a single variable with units of energy. Make sure your function works with an array and don't use any for or while loops in your code.
End of explanation
def plot_fermidist(mu, kT):
# YOUR CODE HERE
#raise NotImplementedError()
plt.plot(np.linspace(0.0,10.0), fermidist(np.linspace(0.0,10.0),mu,kT),color="c")
plt.box(False)
plt.tick_params(axis='x', top="off")
plt.tick_params(axis='y',right ="off")
plt.xlabel('Energy')
plt.ylabel('F($\epsilon$)')
plt.title('Fermi Distibution')
plt.xlim(-.1,11)
plt.ylim(-.1,1.01)
plot_fermidist(4.0, 1.0)
assert True # leave this for grading the plot_fermidist function
Explanation: Write a function plot_fermidist(mu, kT) that plots the Fermi distribution $F(\epsilon)$ as a function of $\epsilon$ as a line plot for the parameters mu and kT.
Use enegies over the range $[0,10.0]$ and a suitable number of points.
Choose an appropriate x and y limit for your visualization.
Label your x and y axis and the overall visualization.
Customize your plot in 3 other ways to make it effective and beautiful.
End of explanation
# YOUR CODE HERE
#raise NotImplementedError()
interact(plot_fermidist, mu=[0.0,5.0], kT=[0.1,10.0]);
Explanation: Use interact with plot_fermidist to explore the distribution:
For mu use a floating point slider over the range $[0.0,5.0]$.
for kT use a floating point slider over the range $[0.1,10.0]$.
End of explanation |
2,366 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Activité - Faire danser PoppyTorso
Première partie
Step1: Ensuite, vous allez créer un objet s'appellant poppy et étant un robot de type PoppyTorso. Vous pouvez donner le nom que vous souhaitez à votre robot. Il vous suffit d'écrire
Step2: Comme toute chose en language Python, notre robot poppy est un objet qui contient d'autres objets qui sont ses moteurs.
Ca y est, si vous arrivez à accéder aux moteurs de Poppy, vous pourrez le faire bouger...
Vous devez donc accéder aux moteurs de poppy (qui se nomment "motors") et qui se trouve à l'intérieur de Poppy pour cela tapez
Step3: Tous les mouvements sont basés sur des rotations des moteurs situés aux articulations. Il suffit de fixer l'angle que l'on désire pour un moteur. Pour cela, nous pouvons utiliser la méthode
Step4: A présent choisissez un moteur au hasard dans la liste de moteurs obtenues précédemment et faîtes le bouger pour le localiser sur le robot.
Vous devez remplir le tableau suivant avec les noms des 10 moteurs
Step5: Si votre robot ne répond plus et que vous ne comprenez pas pourquoi, le programme de contrôle du robot ou l'interface Jupiter est peut être hors service, dans ce cas vous pouvez recharger les programmes en choissisant Kernel puis Restart dans le menu de Jupyter. Il faut ensuite tout recommencer au début de ce guide.
Maintenant, à vous de mettre les bras de votre robot à l'horizontale.
Step6: Vous avez sans doute remarqué que les mouvements de tous les moteurs s'éxécutent en même temps, en simultané.
Il peut être utile de décomposer les mouvements. Par exemple, pour mettre les bras à l'horizontale
Step7: Les bras sont à l'horizontale, remettez les dans leur position de départ, c'est à dire avec les angles des moteurs à 0 degrés.
Step8: A présent que vous savez, faire bouger votre robot, soyez créatif et inventez une danse pour lui !
Step9: Pour terminer la simulation, il faut arréter le robot
Step10: Deuxième partie | Python Code:
from poppy.creatures import PoppyTorso
Explanation: Activité - Faire danser PoppyTorso
Première partie : en utilisant, le simulateur V-REP :
Compétences visées par cette activité :
Savoir utiliser des modules en y récupérant des classes. Instancier un objet à partir d'une classe. Utiliser une méthode et un attribut liée à un objet.
Faire le lien entre rotation des moteurs et position du robot dans l'espace.
Faire preuve de créativité en developpant une chorégraphie.
Lien avec les programmes scolaires, voir :
Pour ICN en classe de seconde : http://www.poppy-prof.fr/?page_id=4&id=67<br>
Pour les mathématiques en classe de seconde : http://www.poppy-prof.fr/?page_id=4&id=37
Pour faire fonctionner notre robot, il faut utiliser Python mais pas seulement. Nous allons aussi avoir besoin de ce que l'on appelle une librairie. La librairie qui permet d'utiliser notre robot s'appelle Pypot et elle est entièrement écrite avec le language Python.
Cette librairie a été construite par des chercheurs très compétants et nous allons simplement apprendre à l'utiliser.
La première chose à faire est d'aller chercher dans la librairie Pypot, les bons "livres", ceux dont nous allons avoir besoin. Ces "livres" se nomment des modules en vocabulaire Python.
Toutes les instructions seront passées au robot via l'interface sur laquelle vous êtes en train de lire ces lignes. Cette interface se nomme Jupyter ou Notebook.
Pour éxécuter les instructions écrites dans une case de Jupyter, il faut :<br>
_Sélectionner la case en cliquant dessus.<br>
_Cliquez sur la case lecture située dans la barre de menu : <img src="images/play.jpg" alt="play" /><br>
_Ou appuyez simultanément sur shitf+entrée.<br>
End of explanation
# Ecrivez votre code ci-dessous et éxecutez le.
# Une correction est donnée à titre indicatif :
poppy = PoppyTorso(simulator='vrep')
Explanation: Ensuite, vous allez créer un objet s'appellant poppy et étant un robot de type PoppyTorso. Vous pouvez donner le nom que vous souhaitez à votre robot. Il vous suffit d'écrire :
> nom_du_robot = PoppyTorso(simulator='vrep')
End of explanation
# Ecrivez votre code ci-dessous et éxecutez le.
# Une correction est donnée à titre indicatif :
poppy.motors
Explanation: Comme toute chose en language Python, notre robot poppy est un objet qui contient d'autres objets qui sont ses moteurs.
Ca y est, si vous arrivez à accéder aux moteurs de Poppy, vous pourrez le faire bouger...
Vous devez donc accéder aux moteurs de poppy (qui se nomment "motors") et qui se trouve à l'intérieur de Poppy pour cela tapez :
> nom_du_robot.motors
End of explanation
# Ecrivez votre code ci-dessous et éxecutez le.
# Une correction est donnée à titre indicatif :
poppy.head_z.goto_position(90,1)
Explanation: Tous les mouvements sont basés sur des rotations des moteurs situés aux articulations. Il suffit de fixer l'angle que l'on désire pour un moteur. Pour cela, nous pouvons utiliser la méthode :
> goto_position(angle_en_degrées,temps)
Cette méthode doit s'appliquer aux objets moteurs de l'objet robot :
> nom_du_robot.nom_du_moteur.goto_position(angle_en_degrées,temps)
End of explanation
# Ecrivez votre code ci-dessous et éxecutez le.
# Une correction est donnée à titre indicatif :
# pour remettre la simulation à zéro :
poppy.reset_simulation()
Explanation: A présent choisissez un moteur au hasard dans la liste de moteurs obtenues précédemment et faîtes le bouger pour le localiser sur le robot.
Vous devez remplir le tableau suivant avec les noms des 10 moteurs :
<img src="./images/moteur_torso2.jpg" alt="poppy-torso" style="height: 500px;"/>
Le tableau suivant doit être rempli par les élèves.
La correction est donnée à titre indicatif :
Nom du moteur 1 : ........... <br>
Nom du moteur 2 : ........... <br>
Nom du moteur 3 : ........... <br>
Nom du moteur 4 : ........... <br>
Nom du moteur 5 : ........... <br>
Nom du moteur 6 : ........... <br>
Nom du moteur 7 : ........... <br>
Nom du moteur 8 : ........... <br>
Nom du moteur 9 : ........... <br>
Nom du moteur 10 : ........... <br>
Nom du moteur 11 : ........... <br>
Nom du moteur 12 : ........... <br>
Nom du moteur 13 : ........... <br>
Si lors de vos essais, vous faîtes tomber votre robot, il est important de connaitre l'instruction qui permet de remettre la simulation à zéro :
> nom_du_robot.reset_simulation()
End of explanation
# Ecrivez votre code ci-dessous et éxecutez le.
# Une correction est donnée à titre indicatif :
# pour mettre les bras à l'horizontale
poppy.r_shoulder_x.goto_position(-100,1)
poppy.l_shoulder_x.goto_position(100,1)
poppy.r_elbow_y.goto_position(100,1)
poppy.l_elbow_y.goto_position(100,1)
Explanation: Si votre robot ne répond plus et que vous ne comprenez pas pourquoi, le programme de contrôle du robot ou l'interface Jupiter est peut être hors service, dans ce cas vous pouvez recharger les programmes en choissisant Kernel puis Restart dans le menu de Jupyter. Il faut ensuite tout recommencer au début de ce guide.
Maintenant, à vous de mettre les bras de votre robot à l'horizontale.
End of explanation
# Ecrivez votre code ci-dessous et éxecutez le.
# Une correction est donnée à titre indicatif :
# pour mettre les bras à l'horizontale
poppy.r_shoulder_x.goto_position(-100,1)
poppy.l_shoulder_x.goto_position(100,1,wait='True')
poppy.r_elbow_y.goto_position(100,1)
poppy.l_elbow_y.goto_position(100,1)
Explanation: Vous avez sans doute remarqué que les mouvements de tous les moteurs s'éxécutent en même temps, en simultané.
Il peut être utile de décomposer les mouvements. Par exemple, pour mettre les bras à l'horizontale : bouger d'abord les épaules puis ensuite les coudes. Pour faire cela, il faut rajouter à la méthode goto_position() un argument wait='True' :
> nom_du_robot.nom_du_moteur.goto_position(angle_en_degrées,temps,wait='True')
A présent, mettez les brais à l'horizotale en bougeant d'abord les épaules, puis ensuite les coudes :
End of explanation
# Ecrivez votre code ci-dessous et éxecutez le.
# Une correction est donnée à titre indicatif :
# pour remettre les bras dans leur position de départ :
poppy.r_elbow_y.goto_position(0,1)
poppy.l_elbow_y.goto_position(0,1,wait='True')
poppy.r_shoulder_x.goto_position(0,1)
poppy.l_shoulder_x.goto_position(0,1,wait='True')
Explanation: Les bras sont à l'horizontale, remettez les dans leur position de départ, c'est à dire avec les angles des moteurs à 0 degrés.
End of explanation
# Ecrivez votre code ci-dessous et éxecutez le.
# Une correction est donnée à titre indicatif :
poppy.head_z.goto_position(40,1,wait='True')
poppy.head_z.goto_position(-40,1,wait='True')
poppy.head_z.goto_position(40,1,wait='True')
poppy.head_z.goto_position(-40,1,wait='True')
poppy.head_z.goto_position(0,1,wait='True')
poppy.r_shoulder_x.goto_position(-90,2)
poppy.l_shoulder_x.goto_position(90,2)
poppy.l_arm_z.goto_position(90,2)
poppy.r_arm_z.goto_position(50,2,wait='True')
poppy.r_shoulder_x.goto_position(0,2)
poppy.l_shoulder_x.goto_position(0,2)
poppy.l_arm_z.goto_position(0,2)
poppy.r_arm_z.goto_position(0,2,wait='True')
poppy.r_shoulder_x.goto_position(-90,2)
poppy.l_shoulder_x.goto_position(90,2)
poppy.l_arm_z.goto_position(-50,2)
poppy.r_arm_z.goto_position(-90,2,wait='True')
poppy.r_shoulder_x.goto_position(0,2)
poppy.l_shoulder_x.goto_position(0,2)
poppy.l_arm_z.goto_position(0,2)
poppy.r_arm_z.goto_position(0,2,wait='True')
poppy.l_arm_z.goto_position(90,3)
poppy.r_arm_z.goto_position(-90,3,wait='True')
poppy.r_arm_z.goto_position(0,3)
poppy.l_arm_z.goto_position(0,3,wait='True')
poppy.l_arm_z.goto_position(90,3)
poppy.r_arm_z.goto_position(-90,3,wait='True')
poppy.r_arm_z.goto_position(0,3)
poppy.l_arm_z.goto_position(0,3,wait='True')
poppy.r_shoulder_x.goto_position(-90,3)
poppy.l_shoulder_x.goto_position(90,3,wait='True')
poppy.r_shoulder_y.goto_position(30,3)
poppy.l_shoulder_y.goto_position(-30,3,wait='True')
poppy.r_shoulder_y.goto_position(-30,3)
poppy.l_shoulder_y.goto_position(30,3,wait='True')
for m in poppy.motors :
m.goto_position(0,1)
Explanation: A présent que vous savez, faire bouger votre robot, soyez créatif et inventez une danse pour lui !
End of explanation
# Ecrivez votre code ci-dessous et éxecutez le.
# Une correction est donnée à titre indicatif :
poppy.close()
Explanation: Pour terminer la simulation, il faut arréter le robot :
> nom_du_robot.close()
End of explanation
# Ecrivez votre code ci-dessous et éxecutez le.
# Une correction est donnée à titre indicatif :
poppy = PoppyTorso()
Explanation: Deuxième partie : en utilisant un véritable robot :
Tout le code développé à l'aide du simulateur doit normalement être valide sur un véritable robot.
Il suffit d'instancier la class robot sans l'argument du simulateur :
> nom_du_robot = PoppyTorso()
Attention dans le cas du controle d'un véritable PoppyTorso, le code doit être éxécuté dans une interface Jupyter qui pointe sur le nom réseau du robot et non pas sur localhost comme pour le simulateur.
End of explanation |
2,367 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basic notebook to look @ convergence of a 2D region in an FES. It will actually call sum hills with the stride you set in cell one , graph the FES and put the regions of convergence there
Step1: Graph the final FES and plot the two squares on top of it
Step2: The two functions below calculate the average free energy of a region by integrating over whichever boxes you defined above. Since the FES is discrete and points are equally spaced, this is trivially taken as a summation
Step3: Below this is all testing of different read-in options
Step4: Profiling speed of different read in options | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import glob
import os
from matplotlib.patches import Rectangle
# define all variables for convergence script
# these will pass to the bash magic below used to call plumed sum_hills
dir="MetaD_converge" #where the intermediate fes will be stored
hills="other/HILLS" #your HILLS file from the simulation
finalfes='other/fes.dat' #the final fes.dat file
stride=1000
kT=8.314e-3*300 #throughout we convert to kcal, but the HILLS are assumed to be in GROMACS units (kJ)
## here is where you set the boxes to define convergence regions
C1=[-1.5,1.0] #center of box 1
C2=[1.0,-.5]
edge1=1.0 #edge of box1
edge2=1.0
%%bash -s "$dir" "$hills" "$stride" "$kT"
# calling sum hills and output to devnul
HILLSFILE=HILLS
rm -rf $1
mkdir $1
cp $2 $1
cd $1
plumed sum_hills --hills $HILLSFILE --kt $4 --stride $3 >& /dev/null
Explanation: Basic notebook to look @ convergence of a 2D region in an FES. It will actually call sum hills with the stride you set in cell one , graph the FES and put the regions of convergence there
End of explanation
%matplotlib inline
#read the data in from a text file
fesdata = np.genfromtxt(finalfes,comments='#');
fesdata = fesdata[:,0:3]
#what was your grid size? this calculates it
dim=int(np.sqrt(np.size(fesdata)/3))
#some post-processing to be compatible with contourf
X=np.reshape(fesdata[:,0],[dim,dim],order="F") #order F was 20% faster than A/C
Y=np.reshape(fesdata[:,1],[dim,dim],order="F")
Z=np.reshape((fesdata[:,2]-np.min(fesdata[:,2]))/4.184,[dim,dim],order="F") #convert to kcal/mol
#what spacing do you want? assume units are in kJ/mol
spacer=1 #this means 1kcal/mol spacing
lines=20
levels=np.linspace(0,lines*spacer,num=(lines+1),endpoint=True)
fig=plt.figure(figsize=(8,6))
axes = fig.add_subplot(111)
xlabel='$\Phi$'
ylabel='$\Psi$'
plt.contourf(X, Y, Z, levels, cmap=plt.cm.bone,)
plt.colorbar()
axes.set_xlabel(xlabel, fontsize=20)
axes.set_ylabel(ylabel, fontsize=20)
currentAxis = plt.gca()
currentAxis.add_patch(Rectangle((C1[0]-edge1/2, C1[1]-edge1/2), edge1, edge1,facecolor='none',edgecolor='yellow',linewidth='3'))
currentAxis.add_patch(Rectangle((C2[0]-edge2/2, C2[1]-edge2/2), edge2, edge2,facecolor='none',edgecolor='yellow',linewidth='3'))
plt.show()
Explanation: Graph the final FES and plot the two squares on top of it
End of explanation
def diffNP(file):
#read the data in from a text file
# note - this is very slow
fesdata = np.genfromtxt(file,comments='#');
A=0.0
B=0.0
dim=np.shape(fesdata)[0]
for i in range(0, dim):
x=fesdata[i][0]
y=fesdata[i][1]
z=fesdata[i][2]
if x < C1[0]+edge1/2 and x > C1[0]-edge1/2 and y > C1[1]-edge1/2 and y < C1[1]+edge1/2:
A+=np.exp(-z/kT)
if x < C2[0]+edge2/2 and x > C2[0]-edge2/2 and y > C2[1]-edge2/2 and y < C2[1]+edge2/2:
B+=np.exp(-z/kT)
A=-kT*np.log(A)
B=-kT*np.log(B)
diff=(A-B)/4.184 #output in kcal
return diff
def diff(file):
kT=8.314e-3*300
A=0.0
B=0.0
f = open(file, 'r')
for line in f:
if line[:1] != '#':
line=line.strip()
if line:
columns = line.split()
x=float(columns[0])
y=float(columns[1])
z=float(columns[2])
if x < C1[0]+edge1/2 and x > C1[0]-edge1/2 and y > C1[1]-edge1/2 and y < C1[1]+edge1/2:
A+=np.exp(-z/kT)
if x < C2[0]+edge2/2 and x > C2[0]-edge2/2 and y > C2[1]-edge2/2 and y < C2[1]+edge2/2:
B+=np.exp(-z/kT)
f.close
A=-kT*np.log(A)
B=-kT*np.log(B)
diff=(A-B)/4.184
return diff
diffvec=None
rootdir = '/Users/jpfaendt/Learning/Python/ALA2_MetaD/MetaD_converge'
i=0
diffvec=np.zeros((1,2))
#the variable func defines which function you are going to call to read in your data files fes_*.dat
#func=diffNP uses the numpy read in (SLOW)
#func=diff streams in data from a text file
#to experience the differnece , uncomment out the print statements and run each way
func=diff
for infile in glob.glob( os.path.join(rootdir, 'fes_?.dat') ):
if i >= i:
diffvec.resize((i+1,2))
#print "current file is: " + infile
diffvec[i][0]=i*1.0
diffvec[i][1]=func(infile)
i+=1
for infile in glob.glob( os.path.join(rootdir, 'fes_??.dat') ):
if i >= i:
diffvec.resize((i+1,2))
#print "current file is: " + infile
diffvec[i][0]=i*1.0
diffvec[i][1]=func(infile)
i+=1
for infile in glob.glob( os.path.join(rootdir, 'fes_???.dat') ):
if i >= i:
diffvec.resize((i+1,2))
#print "current file is: " + infile
diffvec[i][0]=i*1.0
diffvec[i][1]=func(infile)
i+=1
fig = plt.figure(figsize=(6,6))
axes = fig.add_subplot(111)
xlabel='time (generic)'
ylabel='diff (A-B) (kcal/mol)'
axes.plot(diffvec[:,0],diffvec[:,1])
axes.set_xlabel(xlabel, fontsize=20)
axes.set_ylabel(ylabel, fontsize=20)
plt.show()
Explanation: The two functions below calculate the average free energy of a region by integrating over whichever boxes you defined above. Since the FES is discrete and points are equally spaced, this is trivially taken as a summation:
$F_A = -k_BT * ln \sum_A exp\left(-F_{Ai}/k_BT\right) $
Don't forget that this is formally a free-energy plus some trivial constant but that the constant is equal for both regions $A$ and $B$ so that you will obtain the same free-energy difference irrespective of the reference point.
On the other hand, it doesn't make much sense to just use the arbitrary nubmers coming from sum_hills, which are related only to the amount of aggregate bias produced in your simulation. This is why we reference the lowest point to zero on the contour plots.
I left both functions in as a teaching tool to show how slow np.genfromtext is
End of explanation
##
#read the data in from a text file using genfrom txt
fesdata = np.genfromtxt('MetaD_converge/fes_1.dat',comments='#');
kT=8.314e-3*300
A=0.0
B=0.0
dim=np.shape(fesdata)[0]
for i in range(0, dim):
x=fesdata[i][0]
y=fesdata[i][1]
z=fesdata[i][2]
if x < C1[0]+edge1/2 and x > C1[0]-edge1/2 and y > C1[1]-edge1/2 and y < C1[1]+edge1/2:
A+=np.exp(-z/kT)
if x < C2[0]+edge2/2 and x > C2[0]-edge2/2 and y > C2[1]-edge2/2 and y < C2[1]+edge2/2:
B+=np.exp(-z/kT)
A=-kT*np.log(A)
B=-kT*np.log(B)
diff=(A-B)/4.184
diff
##
#read the data in from a text file using read in commands
kT=8.314e-3*300
A=0.0
B=0.0
f = open('MetaD_converge/fes_1.dat', 'r')
for line in f:
if line[:1] != '#':
line=line.strip()
if line:
columns = line.split()
x=float(columns[0])
y=float(columns[1])
z=float(columns[2])
if x < C1[0]+edge1/2 and x > C1[0]-edge1/2 and y > C1[1]-edge1/2 and y < C1[1]+edge1/2:
A+=np.exp(-z/kT)
if x < C2[0]+edge2/2 and x > C2[0]-edge2/2 and y > C2[1]-edge2/2 and y < C2[1]+edge2/2:
B+=np.exp(-z/kT)
f.close
A=-kT*np.log(A)
B=-kT*np.log(B)
diff=(A-B)/4.184
diff
Explanation: Below this is all testing of different read-in options:
End of explanation
file='MetaD/fes.dat'
%timeit diffNP(file)
%timeit diff(file)
Explanation: Profiling speed of different read in options:
End of explanation |
2,368 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Implementing C4.5 and ID3 Decision Tree Algorithms with NumPy
We will apply these trees to the the UCI car evaluation dataset $^1$
$^1$ https
Step1: The backbone of the decision tree algorithms is a criterion (e.g. entropy, Gini, error) with which we can choose the best (in a greedy sense) attribute to add to the tree. ID3 and C4.5 use information gain (entropy) and normalized information gain, respectively.
Step2: To store our tree, we wll use dictionaries. Each node of the tree is a Python dict.
Step3: Functions that helps us build our tree and classify its leaves. find_best_split acts on a node, and returns the attribute that leads to the best (possibly normalized) information gain.
Step4: This function is recursive and will construct a decision tree out of a root node that contains your training data.
Step5: Lastly, before building the tree, we need a function to check the tree's accuracy.
Step6: Let's make a tree!
But first, a quick look at the class distribution after splitting on safety, an important attribute according to our algorithm
Step7: On this dataset, C4.5 and ID3 get similar accuracies... | Python Code:
import numpy as np
#you only need matplotlib if you want to create some plots of the data
import matplotlib.pyplot as plt
%matplotlib inline
data_path = "/home/brb/repos/examples/decision trees/UCI_cars"
data = np.genfromtxt(data_path, delimiter=",", dtype=str)
labels = ["buying", "maint", "doors", "persons", "lug_boot", "safety", "class"]
print("records: {}".format(len(data)))
print("example record: {}".format(data[0]))
print("\ncolumns:\n")
columns = []
for col in range(len(data[0])):
print("\t" + labels[col] + ": {}".format(np.unique(data[:,col])))
columns.append(np.unique(data[:,col]))
Explanation: Implementing C4.5 and ID3 Decision Tree Algorithms with NumPy
We will apply these trees to the the UCI car evaluation dataset $^1$
$^1$ https://archive.ics.uci.edu/ml/datasets/car+evaluation
Disclaimer: I have neither verified nor validated this code, so use it at your own risk! Also, it was not designed to accommodate continuous attributes (data that must be split by creating buckets such as "x<5.236").
Start by importing NumPy, and then explore the data
End of explanation
def weighted_entropy(data, col_num):
entropies = []
n_s = []
entropy_of_attribute = entropy(data[:,col_num])
for value in columns[col_num]:
candidate_child = data[data[:,col_num] == value]
n_s.append(len(candidate_child))
entropies.append(entropy(candidate_child[:,6]))
n_s = np.array(n_s)
n_s = n_s / np.sum(n_s)
weighted_entropy = n_s.dot(entropies)
return weighted_entropy, entropy_of_attribute
def entropy(data):
classes = np.unique(data)
n = len(data)
n_s = []
for class_ in classes:
n_s.append(len(data[data==class_]))
n_s = np.array(n_s)
n_s = n_s/n
n_s = n_s * np.log2(n_s)
return max(0,-np.sum(n_s))
Explanation: The backbone of the decision tree algorithms is a criterion (e.g. entropy, Gini, error) with which we can choose the best (in a greedy sense) attribute to add to the tree. ID3 and C4.5 use information gain (entropy) and normalized information gain, respectively.
End of explanation
def build_node(data, entropy, label, depth, class_="TBD", parent=None):
new_node = dict()
new_node['data'] = data
new_node['entropy'] = entropy
new_node['label'] = label
new_node['depth'] = depth
new_node['class'] = class_
new_node['parent'] = parent
new_node['children'] = []
return new_node
root = build_node(data, entropy(data[:,6]), "all data", 0)
classes = np.unique(root['data'][:,6])
print(classes)
Explanation: To store our tree, we wll use dictionaries. Each node of the tree is a Python dict.
End of explanation
def find_best_split(node, c45 = False):
data = node['data']
entropy = node['entropy']
gains = []
for col_num in range(len(columns) - 1):
new_entropy, entropy_of_attribute = weighted_entropy(data, col_num)
if c45:
if entropy_of_attribute==0:
gains.append(0)
else:
gains.append((entropy - new_entropy) / (entropy_of_attribute))
else:
gains.append(entropy - new_entropy)
if np.max(gains) > 10**-3 :
best_attribute = np.argmax(gains)
return best_attribute
else:
return -1
def classify(node_data):
data = node_data[:, 6]
n_s = []
for class_ in classes:
n_s.append(len(data[data==class_]))
return columns[-1][np.argmax(n_s)]
labels[find_best_split(root)], classify(root['data'])
Explanation: Functions that helps us build our tree and classify its leaves. find_best_split acts on a node, and returns the attribute that leads to the best (possibly normalized) information gain.
End of explanation
def build_tree(node, c45 = False, max_depth = 999, noisy=False):
next_split_attribute = find_best_split(node, c45)
if next_split_attribute == -1 or node['depth'] == max_depth:
node['class'] = classify(node['data'])
#this if statement just handles some printing of the tree (rudimentary visualization)
if noisy:
label = []
label.append(node['label'])
temp_parent = node
while temp_parent['parent']:
temp_parent = temp_parent['parent']
label.append(temp_parent['label'])
depth = node['depth']
for i, layer_label in enumerate(reversed(label)):
for _ in range(i):
print("\t", end="")
if i==depth:
print("{} -> class {}".format(layer_label, node['class']))
else:
print("{}".format(layer_label))
else:
for value in columns[next_split_attribute]:
data = node['data'][ node['data'][:, next_split_attribute] == value ]
entropy_ = entropy(data[:, 6])
new_node = build_node(data, entropy_, "{} == {}".format(
labels[next_split_attribute],value),
node['depth'] + 1, parent=node)
build_tree(new_node, c45, max_depth, noisy)
node['children'].append(new_node)
Explanation: This function is recursive and will construct a decision tree out of a root node that contains your training data.
End of explanation
def correct(decision_tree):
if not decision_tree['children']:
return np.sum(classify(decision_tree['data'])==decision_tree['data'][:,6])
else:
n_correct = 0
for child in decision_tree['children']:
n_correct += correct(child)
return n_correct
correct(root)/1728
Explanation: Lastly, before building the tree, we need a function to check the tree's accuracy.
End of explanation
for safety in columns[5]:
plt.hist(data[data[:,5]==safety, 6])
plt.title(safety + " safety")
plt.show()
root = build_node(data, entropy(data[:,6]), "all data", 0)
build_tree(root, max_depth=1, noisy=True)
print("\nTree Accuracy: {}".format(correct(root)/1728))
root = build_node(data, entropy(data[:,6]), "all data", 0)
build_tree(root, max_depth=2, noisy=True)
print("\nTree Accuracy: {}".format(correct(root)/1728))
for persons in columns[3]:
indices1 = data[:,5]=="high"
indices2 = data[:,3]==persons
indices = np.alltrue([indices1,indices2], axis=0)
plt.hist(data[indices, 6])
plt.title("high safety and {} persons".format(persons))
plt.show()
Explanation: Let's make a tree!
But first, a quick look at the class distribution after splitting on safety, an important attribute according to our algorithm
End of explanation
print("Training Accuracy Comparison")
print("---------")
print(" ID3 C4.5")
for depth in range(7):
root = build_node(data, entropy(data[:,6]), "all data", 0)
build_tree(root, max_depth=depth, c45=False)
id3=correct(root)/1728
root = build_node(data, entropy(data[:,6]), "all data", 0)
build_tree(root, max_depth=depth, c45=True)
c45=correct(root)/1728
print('{:.3f} '.format(round(id3,3)), ' {:.3f}'.format(round(c45,3)))
Explanation: On this dataset, C4.5 and ID3 get similar accuracies...
End of explanation |
2,369 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 03
1 Least Square Coefficient Estimate
$$\hat{\beta_1}=\frac{\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})}{\sum_{i=1}^{n}(x_i-\bar{x})^2}$$
$$\hat{\beta_0}=\bar{y}-\hat{\beta_1}\bar{x}$$
Step1: The difference between the population regression line adn the least squres lien many seem quite confusing. The answer is using a sample to estimate the characteristics of a large population.
How accurate is the sample mean $\hat{\mu}$ as an estimate of $\mu$
$$
Var(\hat{\mu}) = SE(\hat{\mu})^2=\frac{\sigma^2}{n}
$$
computing the standard errors associated with $\hat{\beta_0}$ and $\hat{\beta_1}$
$$
SE(\hat{\beta_0})^2 = \sigma^2\left[ \frac{1}{n} + \frac{\bar{x}^2}{\sum_{i=1}^n(x_i-\bar{x})^2} \right],
SE(\hat{\beta_1})^2 = \frac{\sigma^2}{\sum_{i=1}^{n}(x_i-\bar{x})^2}
$$
where $\sigma^2=Var(\epsilon)$, $\sigma$ is known as the residual standard error, and is given by the formula $RSE=\sqrt{RSS/(n-2)}$
1.1 Confidence intervals
For linear regression, the $95\%$ confidence interval for $\beta_1$ and $\beta_0$ approximately takes the form $$\hat{\beta_1}\pm 2 SE(\hat{\beta_1}),
\hat{\beta_0} \pm 2 SE(\hat{\beta_0})
$$
1.2 Hypothesis
The most common hypothesis test involves testing the null hypothesis of $$H_0
Step2: 2.1 Import Question
Is at least one of the predictors $X_1,X_2,\ldots,X_p$ useful in predicting the response
Do all the predictors help to explain $Y$,, or is only a subset of the predcictor useful.
How well does the model fit the data
Given a set of predictor values, what response value should we predict, and how accurate is our prediction
2.1.1 Is There a Relationship Between the Response and Predictors?
null hypothesis
$$H_0
Step3: 3.6 Potential Probelms
3.6.1 Non-linearity of the Data
Residual plots are a useful graphical tool for identiying non-linearity.
Step4: 3.6.2 Correlation of Error Terms
Why might correlations among the error terms occur? Such correlationsfrequently occur in the context of time series data
3.6.3 Non-constant Variance of Error Terms
Another important assumption of the linear regression model is that the error terms have a constant variance, $Var(\epsilon_i) = \sigma ^2$. The standard errors, confidence intervals, and hypothesis tests associated with the linear model rely upon this assumption.
3.6.4 Outlier
An outlier is a point for which $y_i$ is far from the value predicted by the model. Use residual plot can recognize the outliers.
3.6.5 High leverage
For simple linear regression
$$
h_i=\frac{1}{n}+\frac{(x_i-\bar{x})^2}{\sum_{i=1}^n(x_{i'}-\bar{x})^2}
$$
Collinearity
ollinearity refers to the situation in which two or more predictor variables are closely related to one another. We detect from correlation matrix.
4 K-Nearest Neighbors Regression
$$
\hat{f}(x_0)=\frac{1}{k}\sum_{x_i \in N_0}y_i
$$
where $y_i$s are k nearest neighbors points
5 Exercises
5.1 Auto Data
$$mpg = \beta_0 + \beta_1 \times horsepower$$ | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
def LSCE(x, y):
beta_1 = np.sum((x - np.mean(x))*(y-np.mean(y))) / np.sum((x-np.mean(x))*(x-np.mean(x)))
beta_0 = np.mean(y) - beta_1 * np.mean(x)
return beta_0, beta_1
advertising = pd.read_csv('Advertising.csv',index_col=0)
tv = advertising['TV']
sales = advertising['Sales']
beta_0, beta_1 = LSCE(tv,sales)
x = np.linspace(-10,310,1000)
y = beta_1 * x + beta_0
plt.scatter(tv, sales, marker='+')
plt.plot(x, y,c='k')
plt.xlim(-10,310)
plt.show()
beta_1 = 3
beta_0 = 2
random = np.random.normal(size=100, loc=0, scale=1)
X = np.linspace(-2,2,500)
X = np.random.choice(X, size=100, replace=False)
Y = X*beta_1+beta_0 +random
y_true = X*beta_1+beta_0
beta_0_, beta_1_ = LSCE(X, Y)
y_predict = X *beta_1_ + beta_0_
plt.scatter(X,Y)
plt.plot(X,y_true, c='g')
plt.plot(X, y_predict, c='r')
plt.show()
Explanation: Chapter 03
1 Least Square Coefficient Estimate
$$\hat{\beta_1}=\frac{\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})}{\sum_{i=1}^{n}(x_i-\bar{x})^2}$$
$$\hat{\beta_0}=\bar{y}-\hat{\beta_1}\bar{x}$$
End of explanation
# calculate the parameter
from numpy.linalg import inv
X = advertising[['TV','Radio','Newspaper']].values
Y = advertising['Sales'].values
X = np.hstack((X, np.full((len(Y),1), 1.0)))
beta = inv(X.T.dot(X)).dot(X.T).dot(Y)
print ('the parameters are: ',beta[0], beta[1], beta[2], beta[-1])
# calculate the correlation
# X = advertising[['TV', 'Radio','Newspaper','Sales']].values
# X_mean = np.mean(X,axis=0)
# X -= X_mean
# numerator =X.T.dot(X)
# XX = X*X
# XX = np.sum(XX, axis=0)
# denumorator = np.sqrt((XX.T.dot(XX)))
# numerator/denumorator
advertising.corr()
Explanation: The difference between the population regression line adn the least squres lien many seem quite confusing. The answer is using a sample to estimate the characteristics of a large population.
How accurate is the sample mean $\hat{\mu}$ as an estimate of $\mu$
$$
Var(\hat{\mu}) = SE(\hat{\mu})^2=\frac{\sigma^2}{n}
$$
computing the standard errors associated with $\hat{\beta_0}$ and $\hat{\beta_1}$
$$
SE(\hat{\beta_0})^2 = \sigma^2\left[ \frac{1}{n} + \frac{\bar{x}^2}{\sum_{i=1}^n(x_i-\bar{x})^2} \right],
SE(\hat{\beta_1})^2 = \frac{\sigma^2}{\sum_{i=1}^{n}(x_i-\bar{x})^2}
$$
where $\sigma^2=Var(\epsilon)$, $\sigma$ is known as the residual standard error, and is given by the formula $RSE=\sqrt{RSS/(n-2)}$
1.1 Confidence intervals
For linear regression, the $95\%$ confidence interval for $\beta_1$ and $\beta_0$ approximately takes the form $$\hat{\beta_1}\pm 2 SE(\hat{\beta_1}),
\hat{\beta_0} \pm 2 SE(\hat{\beta_0})
$$
1.2 Hypothesis
The most common hypothesis test involves testing the null hypothesis of $$H_0: \text{There is no relationship between}\quad X \text{and} Y$$
versus the alternative hypothesis
$$H_a: \text{There is some relationship between }X \text{and} Y$$
We use t-statistic given by $$t= \frac{\hat{\beta_1}-0}{SE(\hat{\beta_1})}$$
we except that will have a $t$-distribution with $n-2$ degrees of freedom, asumming $\beta_1=0$ we call this probability the $p-value$ . small $p-value$ indicates that it is unlikely to observe such a substantial association between the predictor and the response due to chance. The typical p-value cutoffs for rejecting the null hypothesis ares 5 or 1%
1.3 Assessing the accuracy of the model
Residual Standard Error(RSE)
$$RSE=\sqrt{\frac{1}{n-2}RSS}=\sqrt{\frac{1}{n-2}\sum_{i=1}^{n}(y-\hat{y_i})^2}$$
$R^2$ statistic provides an alternative measurs of fit. It takes the form of $proportion$, taking on a value between 0 and 1, and is independent of the scale of $Y$.
$$R^2 = \frac{TSS-RSS}{TSS}=1-\frac{RSS}{TSS}$$
where $TSS=\sum(y_i-\bar{y})^2$ is the total sum of squares. Hence, $R^2$ measures the proportion of variability in Y that can be explained using X.
Correlation
$$
Cor(X,Y)=\frac{\sum_{i=1}^n(x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\sum_{i=1}^n(x_i-\bar{x})^2}\sqrt{\sum_{i=1}^n(y_i-\bar{y})^2}}
$$
Using $r=Cor(X,Y)$ instead of $R^2$ in order to assess the fit of the linear model.
2 Multiple linear regression
Then the multiple linear regression model takes the form
$$
Y = \beta_0 + \beta_1X_1 + \beta_2X_2+\ldots+\beta_pX_p+\epsilon
$$
Given estimates $\hat{\beta_0},\hat{\beta_1},\ldots,\hat{\beta_p}$, we can make predictions using the formula
$$
\hat{y}=\hat{\beta_0}+\hat{\beta_1}x_1+\ldots+\hat{\beta_p}x_p
$$
The sum of squared residuals
$$
RSS = \sum_{i=1}^{n}(y_i-\hat{y_i})^2
$$
We assume that:
$$Sales=\beta_0 + \beta_1 \times TV + \beta_2 \times Radio + \beta_3 \times Newspaper$$
We reform the above formula
$$
\begin{bmatrix}
Sale_1 \ Sale_2 \ \vdots \ Sale_p
\end{bmatrix} =
\begin{bmatrix}
tv_1 & radio_1 & newspaper_1 & 1 \
tv_2 & radio_2 & newspaper_2 & 1 \
\vdots & \vdots & \vdots & \vdots \
tv_p & radio_p & newpaper_p & 1 \
\end{bmatrix} \times
\begin{bmatrix}
\beta_1 \ \beta_2 \ \beta_3 \ \beta_0 \
\end{bmatrix}
$$
Using LSE formula
$$y=X\beta \rightarrow \beta=(X^TX)^{-1}X^Ty$$
End of explanation
auto = pd.read_table('Auto',sep='\s+')
rows=np.sum(auto.values=='?',axis=1)
delete_rows = []
for idx,_ in enumerate(rows):
if _!=0:
delete_rows.append(idx)
auto=auto.drop(auto.index[delete_rows])
data = auto[['mpg','horsepower']]
horsepower= data['horsepower'].values.astype('float')
data['horsepower_2'] = horsepower * horsepower
data['beta_0'] = np.full(horsepower.shape,1.0)
# plot the scatter
plt.scatter(horsepower, auto['mpg'])
# calcault linear
X = data[['horsepower','beta_0']].values.astype('float')
y = data['mpg'].values.astype('float').reshape(X.shape[0],1)
beta_linear =inv(X.T.dot(X)).dot(X.T).dot(y)
X = data[['horsepower','horsepower_2','beta_0']].values.astype('float')
beta_linear2 = inv(X.T.dot(X)).dot(X.T).dot(y)
x = np.linspace(40,230, 500)
y_linear = x*beta_linear[0] + beta_linear[1]
y_linear2 = x*beta_linear2[0] + x*x*beta_linear2[1] +beta_linear2[2]
plt.plot(x, y_linear, c='b')
plt.plot(x, y_linear2, c='g')
plt.show()
Explanation: 2.1 Import Question
Is at least one of the predictors $X_1,X_2,\ldots,X_p$ useful in predicting the response
Do all the predictors help to explain $Y$,, or is only a subset of the predcictor useful.
How well does the model fit the data
Given a set of predictor values, what response value should we predict, and how accurate is our prediction
2.1.1 Is There a Relationship Between the Response and Predictors?
null hypothesis
$$H_0:\beta_1=\beta_2=\ldots=\beta_p=0$$
alternative hypothesis
$$H_a: \text{at least one}\quad \beta_j \quad\text{is non-zero}$$
F-statisitc
$$F=\frac{(TSS-RSS)/p}{RSS/(n-p-1)}$$
If there is no relationshio between the response and predictors, that F-statistic takes on a value close to 1. On ther other hand, if $H_a$ is true, we except F to be greater than 1.
2.1.2 Deciding on important variables
By trying out every possible combinations of variables
Forward selection.
backward selction.
mixed selection.
2.1.3 Model Fit
RSE and $R^2$ are the most common numerial measures of model fit. And $R^2$ value close to 1 indicates that the model explains a large portion of variance in the response variable.
2.1.4 Predictions
The coefficient estimates $\hat{\beta_0},\hat{\beta_1},\ldots,\hat{\beta_p}$ are estimates for $\beta_0, \beta_1,\ldots,\beta_p$. the inaccuracy in the coefficient estimate is related to the $reducibel error$, we can use confidence interval. So does $Y$.
3.4 Other Considerations
3.4.1 Predicators with only two levels
$$
x_i=
\begin{cases}
1 & \text{if ith person is female} \
0 & \text{if ith person is male} \
\end{cases}
$$
Use this variable as the predictor in the regression equation
$$
y_i = \beta_0+\beta_1x_i+\epsilon =
\begin{cases}
\beta_0+\beta_1+\epsilon & \text{if ith person is female} \
\beta_0 + \epsilon & \text{if ith person is male} \
\end{cases}
$$
3.4.2 Qualitative predictors with more than two levels
In this situation, we can create additional dummy variables.
3.5 Extension of Linear Model
Above all, we assumpt that the relationship between the predictors and response are additive and linear.
removing the additive assumption
We can assumpt that
$$
Y = \beta_0 + \beta_1X_1+\beta_2X_2+\beta_3X_1X_2+\epsilon=\beta_0+\overline{\beta_1}X_1+\beta_2X_2+\epsilon
$$
An interaction between a quantitative and qualitative variables has a particularly nice interpertation.
$$
balance_i = \beta_0 +\beta_1\times income_i +
\begin{cases}
\beta_2 & \text{if ith person is a student} \
0 & \text{if ith person is not a student} \
\end{cases}
$$
Non-linear Relationships
End of explanation
horsepower = data['horsepower'].values.astype('float')
mpg = data['mpg'].values.astype('float')
residual_linear = mpg - (horsepower*beta_linear[0]+beta_linear[1])
plt.scatter(mpg, residual_linear)
plt.show()
residual_quadratic = mpg - (horsepower*beta_linear2[0]+horsepower*horsepower*beta_linear2[1]+beta_linear2[-1])
plt.scatter(mpg, residual_quadratic)
plt.show()
Explanation: 3.6 Potential Probelms
3.6.1 Non-linearity of the Data
Residual plots are a useful graphical tool for identiying non-linearity.
End of explanation
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from numpy.linalg import inv
auto = pd.read_table('Auto',sep='\s+')
rows=np.sum(auto.values=='?',axis=1)
delete_rows = []
for idx,_ in enumerate(rows):
if _!=0:
delete_rows.append(idx)
auto=auto.drop(auto.index[delete_rows])
horsepower= auto['horsepower'].values.astype('float')
auto['ones'] = np.full(horsepower.shape, 1.0)
X = auto[['horsepower','ones']].values.astype('float')
y = auto['mpg'].values.astype('float').reshape(X.shape[0],1)
beta_linear =inv(X.T.dot(X)).dot(X.T).dot(y)
print('β0 :',beta_linear[-1][0])
print('β1 :', beta_linear[0][0])
sample_num = len(y)
residual = np.power(X.dot(beta_linear)-y,2).sum()
sigma = np.sqrt(residual/(sample_num-2))
horsepower_98 = np.array([[98.0,1.0]])
mpg_98 = horsepower_98.dot(beta_linear)[0,0]
mpg_98_uppper_bound = mpg_98+2*sigma
mpg_98_lower_bound = mpg_98-2*sigma
print('predict value is %f when horsepower is 98'%mpg_98)
print('The range is [%f,%f]' %(mpg_98_lower_bound,mpg_98_uppper_bound))
mpg = auto['mpg'].values.astype('float')
plt.scatter(horsepower, mpg)
plt.show()
auto_cor=auto[['mpg','displacement','horsepower','weight','acceleration']]
auto_cor.corr()
Explanation: 3.6.2 Correlation of Error Terms
Why might correlations among the error terms occur? Such correlationsfrequently occur in the context of time series data
3.6.3 Non-constant Variance of Error Terms
Another important assumption of the linear regression model is that the error terms have a constant variance, $Var(\epsilon_i) = \sigma ^2$. The standard errors, confidence intervals, and hypothesis tests associated with the linear model rely upon this assumption.
3.6.4 Outlier
An outlier is a point for which $y_i$ is far from the value predicted by the model. Use residual plot can recognize the outliers.
3.6.5 High leverage
For simple linear regression
$$
h_i=\frac{1}{n}+\frac{(x_i-\bar{x})^2}{\sum_{i=1}^n(x_{i'}-\bar{x})^2}
$$
Collinearity
ollinearity refers to the situation in which two or more predictor variables are closely related to one another. We detect from correlation matrix.
4 K-Nearest Neighbors Regression
$$
\hat{f}(x_0)=\frac{1}{k}\sum_{x_i \in N_0}y_i
$$
where $y_i$s are k nearest neighbors points
5 Exercises
5.1 Auto Data
$$mpg = \beta_0 + \beta_1 \times horsepower$$
End of explanation |
2,370 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Extras
This covers additional useful material that we may or may not have time to go over in the course.
Generators
Consider the following code that computes the sum of squared numbers up to N.
Step1: The code works and is all great, but it has one flaw
Step2: At this you may wonder, doesn't range() return a list? The short answer is no, but the details are complicated.
Synthesis
Python is often used to process text files, some of which may be quite large. Typically a single row in a text file isn't large, however. The following type of pattern permits one to cleanly read in a file much larger than what would fit memory one line at a time.
Step3: Lambdas
One of the paradigms Python supports are lambdas
square = lambda x
Step4: A typical use case for lambda might be in accessing members of an object in a generic function.
For instance the sort-function takes in a keyword parameter key. It is trivial to do simple operations, like invert a value etc.
Step5: Lambda has many other uses but those are left as a thought exercise.
Write a mean function that takes in a list and computes a mean of the values accessed by function key that is given as a parameter to the function.
The default key is best left as a function that returns it's parameter, i.e.
``lambda x | Python Code:
def squared_numbers(n):
return [x*x for x in range(n)]
def sum_squares(n):
return sum(squared_numbers(n+1))
sum_squares(20000000)
Explanation: Extras
This covers additional useful material that we may or may not have time to go over in the course.
Generators
Consider the following code that computes the sum of squared numbers up to N.
End of explanation
def squared_numbers_alternate(n):
for x in range(n):
yield x*x
def sum_squares_alternate(n):
return sum(squared_numbers_alternate(n+1))
sum_squares(20000000)
Explanation: The code works and is all great, but it has one flaw: it creates a list of all the numbers from 1 to N in memory. If N were large, we would use a lot of extra memory, which might lead to the system swapping or running out of memory.
In this case it is not necessary to create the entire list in memory. The sum function iterates over it's input and only needs the cumulative sum and the next value at a time.
The Python keyword yieldis used to achieve this. Using yield in a statement automatically makes that statement a generator expression. A generator expression can be iterated over like a list, but it only creates new values as they are needed.
End of explanation
import os
print(os.getcwd())
def grep(fileobject, pattern):
for index, line in enumerate(fileobject):
if pattern in line:
# start indexing from 1 for humans
# remove the white space at the end
yield index+1, line.strip()
def process_file(input_, pattern):
with open(input_, "r") as file_:
for idx, line in grep(file_, pattern):
print("line {} matches: {}".format(idx, line))
print("done searching")
process_file("../data/grep.txt", "test")
Explanation: At this you may wonder, doesn't range() return a list? The short answer is no, but the details are complicated.
Synthesis
Python is often used to process text files, some of which may be quite large. Typically a single row in a text file isn't large, however. The following type of pattern permits one to cleanly read in a file much larger than what would fit memory one line at a time.
End of explanation
square = lambda x: x*x
print(square(4))
(lambda x: x-1).__call__(1)
Explanation: Lambdas
One of the paradigms Python supports are lambdas
square = lambda x: x*x
Here the result of the lambda statement, a function object is assigned to the variable square. The statement lambda xdenotes that this lambda statement takes in one parameter, x. The : x*xsay that the return value of the lambda statement is x*x.
It is equivalent to
def square(x):
return x*x
The beauty of lambda statements is that they don't need to be assigned, but can rather created on the fly.
End of explanation
my_list = [
("apple", 5),
("banana", 3),
("pear", 10)
]
my_list.sort(key= lambda x: x[1]) #sort by the number
my_list
Explanation: A typical use case for lambda might be in accessing members of an object in a generic function.
For instance the sort-function takes in a keyword parameter key. It is trivial to do simple operations, like invert a value etc.
End of explanation
def mean(...):
pass
Explanation: Lambda has many other uses but those are left as a thought exercise.
Write a mean function that takes in a list and computes a mean of the values accessed by function key that is given as a parameter to the function.
The default key is best left as a function that returns it's parameter, i.e.
``lambda x: x```
.
End of explanation |
2,371 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Template for test
Step1: Controlling for Random Negatve vs Sans Random in Imbalanced Techniques using S, T, and Y Phosphorylation.
Included is N Phosphorylation however no benchmarks are available, yet.
Training data is from phospho.elm and benchmarks are from dbptm.
Step2: Y Phosphorylation
Step3: T Phosphorylation | Python Code:
from pred import Predictor
from pred import sequence_vector
from pred import chemical_vector
Explanation: Template for test
End of explanation
par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"]
for i in par:
print("y", i)
y = Predictor()
y.load_data(file="Data/Training/clean_s_filtered.csv")
y.process_data(vector_function="sequence", amino_acid="S", imbalance_function=i, random_data=0)
y.supervised_training("mlp_adam")
y.benchmark("Data/Benchmarks/phos.csv", "S")
del y
print("x", i)
x = Predictor()
x.load_data(file="Data/Training/clean_s_filtered.csv")
x.process_data(vector_function="sequence", amino_acid="S", imbalance_function=i, random_data=1)
x.supervised_training("mlp_adam")
x.benchmark("Data/Benchmarks/phos.csv", "S")
del x
Explanation: Controlling for Random Negatve vs Sans Random in Imbalanced Techniques using S, T, and Y Phosphorylation.
Included is N Phosphorylation however no benchmarks are available, yet.
Training data is from phospho.elm and benchmarks are from dbptm.
End of explanation
par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"]
for i in par:
print("y", i)
y = Predictor()
y.load_data(file="Data/Training/clean_Y_filtered.csv")
y.process_data(vector_function="sequence", amino_acid="Y", imbalance_function=i, random_data=0)
y.supervised_training("mlp_adam")
y.benchmark("Data/Benchmarks/phos.csv", "Y")
del y
print("x", i)
x = Predictor()
x.load_data(file="Data/Training/clean_Y_filtered.csv")
x.process_data(vector_function="sequence", amino_acid="Y", imbalance_function=i, random_data=1)
x.supervised_training("mlp_adam")
x.benchmark("Data/Benchmarks/phos.csv", "Y")
del x
Explanation: Y Phosphorylation
End of explanation
par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"]
for i in par:
print("y", i)
y = Predictor()
y.load_data(file="Data/Training/clean_t_filtered.csv")
y.process_data(vector_function="sequence", amino_acid="T", imbalance_function=i, random_data=0)
y.supervised_training("mlp_adam")
y.benchmark("Data/Benchmarks/phos.csv", "T")
del y
print("x", i)
x = Predictor()
x.load_data(file="Data/Training/clean_t_filtered.csv")
x.process_data(vector_function="sequence", amino_acid="T", imbalance_function=i, random_data=1)
x.supervised_training("mlp_adam")
x.benchmark("Data/Benchmarks/phos.csv", "T")
del x
Explanation: T Phosphorylation
End of explanation |
2,372 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Co-Occurring Tag Analysis
Analysing how tags co-occur across various Parliamentary publications. The idea behind this is to see whether there are naturally occurring groupings of topic tags by virtue of their co-occurence when used to tag different classes of Parlimanetary publication.
The data is provided as a set of Linked Data triples exported as Turtle (.ttl) data files. The data represents, among other things, Parlimentary resources (such as early day motions or other proceedings records) and subject/topic labels they are tagged with.
The data allows us to generate a graph that associates tags with resources, and from that a graph that directly associates tags with other tags by virtue of their commonly tagging the same resource or set of resources.
Step1: Utils
Import a library that lets us work with the data files
Step2: Simple utility to load all the .ttl files in a particular directory into a graph
Step3: Tools for running queries over a graph and either printing the result or putting it into a pandas dataframe
Step4: Tools to support the export and display of graphs - networkx package is handy in this respect, eg exporting to GEXF format for use with Gephi. We can also run projections on the graph quite easily.
Step5: Exploring the Data - Terms
Step6: Looks like the prefLabel is what we want
Step7: Exploring the Data - EDMS
Step8: Let's merge the EDM graph data with the terms data.
Step9: Now we can look at the term labels associated with a particular EDM.
Step10: We can also create a table that links topic labels with EDMs.
Step11: From this table, we can a generate a bipartite networkx graph that links topic labels with EDMs.
Step12: We can then project this bipartite graph onto just the topic label nodes - edges will now connect nodes that are linked through one or more common EDMs.
Step13: We can also generate a weighted graph, where edges are weighted relative to how many times topics are linked through different EDMs.
Step14: Predicting Topics
Step15: Exploring the Data - proceedings | Python Code:
#Data files
!ls ../data/dataexport
Explanation: Co-Occurring Tag Analysis
Analysing how tags co-occur across various Parliamentary publications. The idea behind this is to see whether there are naturally occurring groupings of topic tags by virtue of their co-occurence when used to tag different classes of Parlimanetary publication.
The data is provided as a set of Linked Data triples exported as Turtle (.ttl) data files. The data represents, among other things, Parlimentary resources (such as early day motions or other proceedings records) and subject/topic labels they are tagged with.
The data allows us to generate a graph that associates tags with resources, and from that a graph that directly associates tags with other tags by virtue of their commonly tagging the same resource or set of resources.
End of explanation
#Data is provided as Turtle/ttl files - rdflib handles those
#!pip3 install rdflib
from rdflib import Graph
Explanation: Utils
Import a library that lets us work with the data files:
End of explanation
import os
def ttl_graphbuilder(path,g=None,debug=False):
#We can add the triples to an existing graph or create a new one for them
if g is None:
g=Graph()
#Loop through all the files in the directory and then load the ones that have a .ttl suffix
for ttl in [f for f in os.listdir(path) if f.endswith('.ttl')]:
if debug: print(ttl)
g.parse('{}/{}'.format(path,ttl), format='turtle')
return g
Explanation: Simple utility to load all the .ttl files in a particular directory into a graph:
End of explanation
def rdfQuery(graph,q):
ans=graph.query(q)
for row in ans:
for el in row:
print(el,end=" ")
print()
#ish via https://github.com/schemaorg/schemaorg/blob/sdo-callisto/scripts/dashboard.ipynb
import pandas as pd
def sparql2df(graph,q, cast_to_numeric=True):
a=graph.query(q)
c = []
for b in a.bindings:
rowvals=[]
for k in a.vars:
rowvals.append(b[k])
c.append(rowvals)
df = pd.DataFrame(c)
df.columns = [str(v) for v in a.vars]
if cast_to_numeric:
df = df.apply(lambda x: pd.to_numeric(x, errors='ignore'))
return df
Explanation: Tools for running queries over a graph and either printing the result or putting it into a pandas dataframe:
End of explanation
import networkx as nx
Explanation: Tools to support the export and display of graphs - networkx package is handy in this respect, eg exporting to GEXF format for use with Gephi. We can also run projections on the graph quite easily.
End of explanation
path='../data/dataexport/terms'
termgraph=ttl_graphbuilder(path)
#What's in the graph generally?
q='''
SELECT DISTINCT ?x ?y ?z {
?x ?y ?z.
} LIMIT 10
'''
rdfQuery(termgraph,q)
#What does a term have associated with it more specifically?
q='''
SELECT DISTINCT ?y ?z {
<http://data.parliament.uk/terms/95551> ?y ?z.
} LIMIT 10
'''
rdfQuery(termgraph,q)
Explanation: Exploring the Data - Terms
End of explanation
q='''
SELECT DISTINCT ?z ?topic {
?z <http://www.w3.org/2004/02/skos/core#prefLabel> ?topic.
} LIMIT 10
'''
sparql2df(termgraph,q)
Explanation: Looks like the prefLabel is what we want:
End of explanation
path='../data/dataexport/edms'
g=ttl_graphbuilder(path)
#See what's there generally...
q='''
SELECT DISTINCT ?x ?y ?z {
?x ?y ?z.
} LIMIT 10
'''
rdfQuery(g,q)
#Explore a specific EDM
q='''
SELECT DISTINCT ?y ?z {
<http://data.parliament.uk/edms/50457> ?y ?z.
}
'''
rdfQuery(g,q)
Explanation: Exploring the Data - EDMS
End of explanation
path='../data/dataexport/edms'
g=ttl_graphbuilder(path,termgraph)
Explanation: Let's merge the EDM graph data with the terms data.
End of explanation
q='''
SELECT DISTINCT ?t ?z {
<http://data.parliament.uk/edms/50114> <http://data.parliament.uk/schema/parl#topic> ?z.
?z <http://www.w3.org/2004/02/skos/core#prefLabel> ?t.
} LIMIT 10
'''
rdfQuery(g,q)
Explanation: Now we can look at the term labels associated with a particular EDM.
End of explanation
q='''
SELECT DISTINCT ?edms ?topic {
?edms <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://data.parliament.uk/schema/parl#EarlyDayMotion>.
?edms <http://data.parliament.uk/schema/parl#topic> ?z.
?z <http://www.w3.org/2004/02/skos/core#prefLabel> ?topic.
}
'''
g_df=sparql2df(g,q)
g_df.head()
Explanation: We can also create a table that links topic labels with EDMs.
End of explanation
nxg=nx.from_pandas_dataframe(g_df, 'edms', 'topic')
#nx.write_gexf(nxg,'edms.gexf')
Explanation: From this table, we can a generate a bipartite networkx graph that links topic labels with EDMs.
End of explanation
from networkx.algorithms import bipartite
#We can find the sets of names/tags associated with the disjoint sets in the graph
#I think the directedness of the graph means we can be reasonably sure the variable names are correctly ordered?
edms,topic=bipartite.sets(nxg)
#Collapse the bipartite graph to a graph of topic labels connected via a common EDM
topicgraph= bipartite.projected_graph(nxg, topic)
nx.write_gexf(topicgraph,'edms_topics.gexf')
Explanation: We can then project this bipartite graph onto just the topic label nodes - edges will now connect nodes that are linked through one or more common EDMs.
End of explanation
topicgraph_weighted= bipartite.weighted_projected_graph(nxg, topic)
nx.write_gexf(topicgraph_weighted,'edms_topics_weighted.gexf')
Explanation: We can also generate a weighted graph, where edges are weighted relative to how many times topics are linked through different EDMs.
End of explanation
#!pip3 install sklearn
#via https://stackoverflow.com/a/19172087/454773
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
#via https://stackoverflow.com/questions/22219004/grouping-rows-in-list-in-pandas-groupby
g_df['topic']=g_df['topic'].astype(str)
topicsbyedm_df=g_df.groupby('edms')['topic'].apply(list).to_frame().reset_index()
topicsbyedm_df.head()
q='''
SELECT DISTINCT ?edms ?motiontext {
?edms <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://data.parliament.uk/schema/parl#EarlyDayMotion>.
?edms <http://data.parliament.uk/schema/parl#motionText> ?motiontext.
}
'''
m_df=sparql2df(g,q)
m_df=m_df.merge(topicsbyedm_df,on='edms')
m_df.head()
X_train= np.array(m_df['motiontext'][:-100].tolist())
X_test = np.array(m_df['motiontext'][-100:].tolist())
target_names=g_df['topic'].astype(str).tolist()
target_names[:3]
#ytrain= [[target_names.index(i) for i in t] for t in m_df['topic'][:-100] ]
#ytrain[:3]
y_train_text = [ t for t in m_df['topic'][:-100] ]
y_train_text[:3]
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(y_train_text)
classifier = Pipeline([
('vectorizer', CountVectorizer(analyzer='word',stop_words='english')),
('tfidf', TfidfTransformer()),
('clf', OneVsRestClassifier(LinearSVC()))])
classifier.fit(X_train, Y)
predicted = classifier.predict(X_test)
all_labels = mlb.inverse_transform(predicted)
hits=[]
misses=[]
for item, labels in zip(X_test, all_labels):
if labels!=(): hits.append('{0} => {1}'.format(item, ', '.join(labels)))
else: misses.append('{0} => {1}'.format(item, ', '.join(labels)))
print("some hits:\n{}\n\nsome misses:\n{}".format('\n'.join(hits[:3]),'\n'.join(misses[:3])))
labels
Explanation: Predicting Topics
End of explanation
path='../data/dataexport/proceedings'
p=ttl_graphbuilder(path,debug=True)
!ls {path}
!cat {path}/0006D323-D0B5-4E22-A26E-75ABB621F58E.ttl
Explanation: Exploring the Data - proceedings
End of explanation |
2,373 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
More on data structures
Iterable vs. Iterators
Lists are examples of iterable data structures, which means that you can iterate over the actual objects in these data structures.
Step1: generators return their contents 'lazily'. This leaves a minimal memory footprint, at the cost of making the generator nonreusable.
Step2: 'range' is something like a generator, but with special properties because of its intended use case (in 'for' loops or similar structures.
Step3: From the docs (https
Step4: zips produced iterators from pairs
Step5: More on Dicts
The dict data structure shows up all over Python.
Step6: from assignment
Step7: from iterator
Step8: In function definitions
Step9: In function calls
Step10: This allows, for instance, matplotlibs plot function to accept a huge range of different plotting options, or few to none at all. | Python Code:
# iterating over a list by object
x = ['bob', 'sue', 'mary']
for name in x:
print(name.upper() + ' WAS HERE')
# alternatively, you could iterate over position
for i in range(len(x)):
print(x[i].upper() + ' WAS HERE')
dir(x) # ignore the __ methods for now
Explanation: More on data structures
Iterable vs. Iterators
Lists are examples of iterable data structures, which means that you can iterate over the actual objects in these data structures.
End of explanation
y = (x*x for x in [1, 2, 3])
type(y)
dir(y)
y.send??
y[5]
next(y)
y.send(1)
next(y) # run this cell twice - what happens?
Explanation: generators return their contents 'lazily'. This leaves a minimal memory footprint, at the cost of making the generator nonreusable.
End of explanation
z = range(10, 5, -1)
dir(range)
# let's filter that list a little
[x for x in dir(range) if not x.startswith('_')]
z.start
len(z) # __ function - overloaded operator
Explanation: 'range' is something like a generator, but with special properties because of its intended use case (in 'for' loops or similar structures.
End of explanation
for i in z:
print(i)
Explanation: From the docs (https://docs.python.org/3/library/stdtypes.html#typesseq-range): The advantage of the range type over a regular list or tuple is that a range object will always take the same (small) amount of memory, no matter the size of the range it represents (as it only stores the start, stop and step values, calculating individual items and subranges as needed).
Range objects implement the collections.abc.Sequence ABC, and provide features such as containment tests, element index lookup, slicing and support for negative indices (see Sequence Types — list, tuple, range):
End of explanation
GPA = zip(['bob', 'sue', 'mary'], [2.3, 4.0, 3.7])
type(GPA)
dir(GPA)
next(GPA)
next(GPA)[1]
Explanation: zips produced iterators from pairs:
End of explanation
dict?
Explanation: More on Dicts
The dict data structure shows up all over Python.
End of explanation
GPA_2 = dict(bob=2.0, sue=3.4, mary=4.0)
Explanation: from assignment:
End of explanation
names = ['bob', 'mary', 'sue', 'lisa']
gpas = [3.2, 4.0, 3.1, 2.8]
GPA_3 = dict(zip(names, gpas))
GPA_3
Explanation: from iterator:
End of explanation
# explicitly named arguments are also positional
# Anything after * in a function is a positional argument - tuple
# Anything after ** is a named argument
# the latter are unpacked as dicts
def arg_explainer(x, y, *args, **kwargs):
print('-'*30)
print('x is %d, even though you didn\'t specify it, because of its position.' % x)
print('same with y, which is %d.' %y)
if args:
print('-'*30)
print('type(*args) = %s' % type(args))
print('these are the *args arguments: ')
for arg in args:
print(arg)
else:
print('-'*30)
print('no *args today!')
if kwargs:
print('-'*30)
print('type(**kwargs) == %s' % type(kwargs))
for key in kwargs:
print(key, kwargs[key])
else:
print('-'*30)
print('no **kwargs today!')
print('-'*30)
arg_explainer(2, 4, 3, 7, 8, 9, 10, plot=True, sharey=True, rotate=False)
Explanation: In function definitions:
End of explanation
my_kwargs = {'plot': False, 'sharey': True}
arg_explainer(1, 2, **my_kwargs)
Explanation: In function calls:
End of explanation
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
?plt.plot
x = np.linspace(-5, 5, 100)
y1 = np.sin(x)
y2 = np.cos(x)
plt.plot(x, y1) # all of these arguments are *args
plt.plot(x, y2, color='red', label='just on the cosine, for no reason at all') # starting w/ color, **kwargs
plt.legend(loc='center');
Explanation: This allows, for instance, matplotlibs plot function to accept a huge range of different plotting options, or few to none at all.
End of explanation |
2,374 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This is a Jupyter notebook for David Dobrinskiy's HSE Thesis
How Venture Capital Affects Startups' Success
Step1: Let us look at the dynamics of total US VC investment
Step3: Deals and investments are in alternating rows of frame, let's separate them
Step5: Plot data from MoneyTree report
http
Step6: WSJ Unicorns
Step7: Most funded IPO-reaching US startups
Step8: Facebook is an extreme outlier in venture capital, let's exclude it from our analysis | Python Code:
# You should be running python3
import sys
print(sys.version)
import pandas as pd # http://pandas.pydata.org/
import numpy as np # http://numpy.org/
import statsmodels.api as sm # http://statsmodels.sourceforge.net/stable/index.html
import statsmodels.formula.api as smf
import statsmodels
print("Pandas Version: {}".format(pd.__version__)) # pandas version
print("StatsModels Version: {}".format(statsmodels.__version__)) # StatsModels version
Explanation: This is a Jupyter notebook for David Dobrinskiy's HSE Thesis
How Venture Capital Affects Startups' Success
End of explanation
# load the pwc dataset from azure
from azureml import Workspace
ws = Workspace()
ds = ws.datasets['pwc_moneytree.csv']
frame = ds.to_dataframe()
frame.head()
del frame['Grand Total']
frame.columns = ['year', 'type', 'q1', 'q2', 'q3', 'q4']
frame['year'] = frame['year'].fillna(method='ffill')
frame.head()
Explanation: Let us look at the dynamics of total US VC investment
End of explanation
deals_df = frame.iloc[0::2]
investments_df = frame.iloc[1::2]
# once separated, 'type' field is identical within each df
# let's delete it
del deals_df['type']
del investments_df['type']
deals_df.head()
investments_df.head()
def unstack_to_series(df):
Takes q1-q4 in a dataframe and converts it to a series
input: a dataframe containing ['q1', 'q2', 'q3', 'q4']
ouput: a pandas series
quarters = ['q1', 'q2', 'q3', 'q4']
d = dict()
for i, row in df.iterrows():
for q in quarters:
key = str(int(row['year'])) + q
d[key] = row[q]
# print(key, q, row[q])
return pd.Series(d)
deals = unstack_to_series(deals_df ).dropna()
investments = unstack_to_series(investments_df).dropna()
def string_to_int(money_string):
numerals = [c if c.isnumeric() else '' for c in money_string]
return int(''.join(numerals))
# convert deals from string to integers
deals = deals.apply(string_to_int)
deals.tail()
# investment in billions USD
# converts to integers - which is ok, since data is in dollars
investments_b = investments.apply(string_to_int)
# in python3 division automatically converts numbers to floats, we don't loose precicion
investments_b = investments_b / 10**9
# round data to 2 decimals
investments_b = investments_b.apply(round, ndigits=2)
investments_b.tail()
Explanation: Deals and investments are in alternating rows of frame, let's separate them
End of explanation
import matplotlib.pyplot as plt # http://matplotlib.org/
import matplotlib.patches as mpatches
import matplotlib.ticker as ticker
%matplotlib inline
# change matplotlib inline display size
# import matplotlib.pylab as pylab
# pylab.rcParams['figure.figsize'] = (8, 6) # that's default image size for this interactive session
fig, ax1 = plt.subplots()
ax1.set_title("VC historical trend (US Data)")
t = range(len(investments_b)) # need to substitute tickers for years later
width = t[1]-t[0]
y1 = investments_b
# create filled step chart for investment amount
ax1.bar(t, y1, width=width, facecolor='0.80', edgecolor='', label = 'Investment ($ Bln.)')
ax1.set_ylabel('Investment ($ Bln.)')
# set up xlabels with years
years = [str(year)[:-2] for year in deals.index][::4] # get years without quarter
ax1.set_xticks(t[::4]) # set 1 tick per year
ax1.set_xticklabels(years, rotation=50) # set tick names
ax1.set_xlabel('Year') # name X axis
# format Y1 tickers to $ billions
formatter = ticker.FormatStrFormatter('$%1.0f Bil.')
ax1.yaxis.set_major_formatter(formatter)
for tick in ax1.yaxis.get_major_ticks():
tick.label1On = False
tick.label2On = True
# create second Y2 axis for Num of Deals
ax2 = ax1.twinx()
y2 = deals
ax2.plot(t, y2, color = 'k', ls = '-', label = 'Num. of Deals')
ax2.set_ylabel('Num. of Deals')
# add annotation bubbles
ax2.annotate('1997-2000 dot-com bubble', xy=(23, 2100), xytext=(6, 1800),
bbox=dict(boxstyle="round4", fc="w"),
arrowprops=dict(arrowstyle="-|>",
connectionstyle="arc3,rad=0.2",
fc="w"),
)
ax2.annotate('2007-08 Financial Crisis', xy=(57, 800), xytext=(40, 1300),
bbox=dict(boxstyle="round4", fc="w"),
arrowprops=dict(arrowstyle="-|>",
connectionstyle="arc3,rad=-0.2",
fc="w"),
)
# add legend
ax1.legend(loc="best")
ax2.legend(bbox_to_anchor=(0.95, 0.88))
fig.tight_layout() # solves cropping problems when saving png
fig.savefig('vc_trend_3.png', dpi=250)
plt.show()
def tex(df):
Print dataframe contents in latex-ready format
for line in df.to_latex().split('\n'):
print(line)
ds = ws.datasets['ipo_mna.csv']
frame = ds.to_dataframe()
frame.tail()
frame = frame.iloc[:-2]
frame = frame.set_index('q')
frame
Explanation: Plot data from MoneyTree report
http://www.pwcmoneytree.com
End of explanation
ds = ws.datasets['wsj_unicorns.csv']
frame = ds.to_dataframe()
frame.tail()
Explanation: WSJ Unicorns
End of explanation
# data from Founder Collective
# http://www.foundercollective.com/
ds = ws.datasets['most_funded_ipo.csv']
frame = ds.to_dataframe()
most_funded = frame.copy()
most_funded.tail()
from datetime import datetime
most_funded['Firm age'] = datetime.now().year - most_funded['Founded']
most_funded['Years to IPO'] = most_funded['IPO Year'] - most_funded['Founded']
# extract all funding rounds
# R1, R2, ... are funding rounds (Raising VC)
most_funded.iloc[:,2:22:2].tail()
# [axis = 1] to sum by row instead of by-column
most_funded['VC'] = most_funded.iloc[:,2:22:2].sum(axis=1)
# VC data is in MILLIONS of $
most_funded['IPO Raise'].head(3)
# convert IPO string to MILLIONS of $
converter = lambda x: round(int((x.replace(',',''))[1:])/10**6, 2)
most_funded['IPO Raise'] = most_funded['IPO Raise' ].apply(converter)
most_funded['Current Market Cap'] = most_funded['Current Market Cap '].apply(converter)
del most_funded['Current Market Cap ']
most_funded['IPO Raise'].head(3)
# MILLIONS of $
most_funded['VC and IPO'] = most_funded['VC'] + most_funded['IPO Raise']
# Price in ordinary $
most_funded['$ Price change'] = most_funded['Current Share Price'] - most_funded['IPO Share Price']
most_funded['% Price change'] = round(most_funded['$ Price change'] / most_funded['IPO Share Price'], 2)
Explanation: Most funded IPO-reaching US startups
End of explanation
mask = most_funded['Firm'] == 'Facebook'
most_funded[mask]
# removing Facebook
most_funded = most_funded[~mask]
# look at all the columns
[print(c) for c in most_funded.columns]
None
cols = most_funded.columns[:2].append(most_funded.columns[22:])
cols
# remove individual funding rounds - we'll only analyze aggregates
most_funded = most_funded[cols]
from matplotlib.ticker import FuncFormatter
x = most_funded['Firm']
y = sorted(most_funded['VC'], reverse=True)
def millions(x, pos):
'The two args are the value and tick position'
return '$%1.0fM' % (x)
formatter = FuncFormatter(millions)
fig, ax = plt.subplots(figsize=(6,4), dpi=200)
ax.yaxis.set_major_formatter(formatter)
#plt.figure(figsize=(6,4), dpi=200)
# Create a new subplot from a grid of 1x1
# plt.subplot(111)
plt.title("Total VC raised for unicorns")
plt.bar(range(len(x)+2), [0,0]+y, width = 1, facecolor='0.80', edgecolor='k', linewidth=0.3)
plt.ylabel('VC raised per firm\n(before IPO)')
# plt.set_xticks(x) # set 1 tick per year
plt.xlabel('Firms')
plt.xticks([])
plt.show()
cols = ['Firm', 'Sector', 'VC', 'Current Market Cap']
df = most_funded[cols]
df.set_index('Firm', inplace = True)
df.head(2)
tmp = df.groupby('Sector').sum().applymap(int)
tmp.index += ' Total'
tmp.sort_index(ascending=False, inplace = True)
tmp
tmp2 = df.groupby('Sector').mean().applymap(int)
tmp2.index += ' Average'
tmp2.sort_index(ascending=False, inplace = True)
tmp2
tmp.append(tmp2).applymap(lambda x: "${:,}".format(x))
tex(tmp.append(tmp2).applymap(lambda x: "${:,}".format(x)))
most_funded['Mult'] = (most_funded['Current Market Cap'] / most_funded['VC']).replace([np.inf, -np.inf], np.nan)
most_funded.head()
tex(most_funded.iloc[:,list(range(8))+list(range(11,20))].head().T)
most_funded.head()
most_funded['Current Market Cap']
least_20 = most_funded.dropna().sort_values('VC')[1:21]
least_20 = least_20[['VC', 'Current Market Cap','Mult']].mean()
least_20
most_20 = most_funded.dropna().sort_values('VC')[-20:]
most_20 = most_20[['VC', 'Current Market Cap','Mult']].mean()
most_20
pd.DataFrame([most_20, least_20], index=['most_20', 'least_20']).applymap(lambda x: round(x, 2))
tex(pd.DataFrame([most_20, least_20], index=['most_20', 'least_20']).applymap(lambda x: round(x, 2)))
cols = ['Sector', 'VC', '% Price change', 'Firm age', 'Years to IPO', 'Current Market Cap', 'Mult']
df = most_funded[cols]
df.columns = ['Sector', 'VC', 'Growth', 'Age', 'yearsIPO', 'marketCAP', 'Mult']
df.head(2)
res = smf.ols(formula='Growth ~ VC + yearsIPO + C(Sector)', data=df).fit()
print(res.summary())
res = smf.ols(formula='Growth ~ VC + Age + yearsIPO + C(Sector)', data=df).fit()
print(res.summary())
print(res.summary().as_latex())
res = smf.ols(formula='Mult ~ VC + yearsIPO + C(Sector)', data=df).fit()
print(res.summary())
Explanation: Facebook is an extreme outlier in venture capital, let's exclude it from our analysis
End of explanation |
2,375 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Activate logging for Gensim, so we can see that everything is working correctly. Gensim will, for example, complain if no C compiler is installed to let you know that Word2Vec will be awfully slow.
Step1: Import NLTK to seperate a document into sentences and import Gensim to actually train the Word2Vec models. | Python Code:
import re
import nltk
import os.path as path
from random import shuffle
from gensim.models import Word2Vec
Explanation: Activate logging for Gensim, so we can see that everything is working correctly. Gensim will, for example, complain if no C compiler is installed to let you know that Word2Vec will be awfully slow.
End of explanation
pattern = re.compile(r'[\W\d]')
try:
nltk.data.find("tokenizers/punkt")
except LookupError:
punkt = nltk.download('punkt')
def init_tokenizer(lang):
model = 'tokenizers/punkt/{}.pickle'.format(lang.lower())
return nltk.data.load(model)
def sentences(doc):
tokenizer = init_tokenizer("german")
sent = tokenizer.tokenize(doc.strip())
return sent
def token(sentence):
letters = pattern.sub(" ", sentence)
words = letters.lower().split()
words = [word for word in words if len(word) > 1]
return words
class FileCorpus:
def __init__(self, files, encoding='UTF-8'):
self.files = files
self.encoding = encoding
def __iter__(self):
for file in self.files:
for doc in open(file, encoding=self.encoding):
yield doc
def sentences(self):
documents = [sentences(document) for document in self]
t = [token(sentence) for document in documents for sentence in document]
return t
def build_model(name, coll):
sentences = FileCorpus(coll).sentences()
shuffle(sentences)
model = Word2Vec(sentences, workers=4, iter=10, size=100, window=2, sg=1, hs=1)
model.save(path.join('models','{}.w2v'.format(name)))
return model
npd = build_model('NPD', [path.join('data', file)
for file in ['NPD.txt', 'NPD_MV.txt', 'NPD_Sachsen.txt']])
spd = build_model('SPD', [path.join('data', file)
for file in ['SPD_Inland.txt', 'SPD_International.txt', 'SPD_Parteileben.txt']])
cdu = build_model('CDU', [path.join('data', file)
for file in ['CDU.txt', 'CDU_EU.txt', 'CDU_Fraktion.txt']])
fdp = build_model('FDP', [path.join('data', file)
for file in ['FDP.txt', 'FDP_Fraktion.txt']])
grüne = build_model('GRÜNE', [path.join('data', file)
for file in ['Grüne.txt', 'Grüne_Fraktion.txt']])
linke = build_model('LINKE', [path.join('data', file)
for file in ['Linke.txt', 'Linke_PR.txt', 'Linke_Fraktion.txt']])
npd.most_similar('flüchtlinge')
spd.most_similar('flüchtlinge')
cdu.most_similar('flüchtlinge')
fdp.most_similar('flüchtlinge')
grüne.most_similar('flüchtlinge')
linke.most_similar('flüchtlinge')
Explanation: Import NLTK to seperate a document into sentences and import Gensim to actually train the Word2Vec models.
End of explanation |
2,376 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Algorithms Exercise 3
Imports
Step2: Character counting and entropy
Write a function char_probs that takes a string and computes the probabilities of each character in the string
Step4: The entropy is a quantiative measure of the disorder of a probability distribution. It is used extensively in Physics, Statistics, Machine Learning, Computer Science and Information Science. Given a set of probabilities $P_i$, the entropy is defined as
Step5: Use IPython's interact function to create a user interface that allows you to type a string into a text box and see the entropy of the character probabilities of the string. | Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact
Explanation: Algorithms Exercise 3
Imports
End of explanation
def char_probs(s):
Find the probabilities of the unique characters in the string s.
Parameters
----------
s : str
A string of characters.
Returns
-------
probs : dict
A dictionary whose keys are the unique characters in s and whose values
are the probabilities of those characters.
alp={'a':0,'b':0,'c':0,'d':0,'e':0,'f':0,'g':0,'h':0,'i':0,'j':0,'k':0,'l':0,'m':0,'n':0,'o':0,'p':0,'q':0,'r':0,'s':0,'t':0,'u':0,'v':0,'w':0,'x':0,'y':0,'z':0}
a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z=0
alph=[a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z]
for let in s:
if let=='q':
q+=1
if let=='w':
w+=1
if let=='e':
e+=1
if let=='r':
r+=1
if let=='t':
t+=1
if let=='y':
y+=1
if let=='u':
u+=1
if let=='i':
i+=1
if let=='o':
o+=1
if let=='p':
p+=1
if let=='a':
a+=1
if let=='s':
s+=1
if let=='d':
d+=1
if let=='f':
f+=1
if let=='g':
g+=1
if let=='h':
h+=1
if let=='j':
j+=1
if let=='k':
k+=1
if let=='l':
l+=1
if let=='z':
z+=1
if let=='x':
x+=1
if let=='c':
c+=1
if let=='v':
v+=1
if let=='b':
b+=1
if let=='n':
n+=1
if let=='m':
m+=1
Prob=[]
L=len(s)
for let in alph:
prob=let/L
Prob.append(prob)
return Prob
test1 = char_probs('aaaa')
assert np.allclose(test1['a'], 1.0)
test2 = char_probs('aabb')
assert np.allclose(test2['a'], 0.5)
assert np.allclose(test2['b'], 0.5)
test3 = char_probs('abcd')
assert np.allclose(test3['a'], 0.25)
assert np.allclose(test3['b'], 0.25)
assert np.allclose(test3['c'], 0.25)
assert np.allclose(test3['d'], 0.25)
Explanation: Character counting and entropy
Write a function char_probs that takes a string and computes the probabilities of each character in the string:
First do a character count and store the result in a dictionary.
Then divide each character counts by the total number of character to compute the normalized probabilties.
Return the dictionary of characters (keys) and probabilities (values).
End of explanation
def entropy(d):
Compute the entropy of a dict d whose values are probabilities.
# YOUR CODE HERE
raise NotImplementedError()
assert np.allclose(entropy({'a': 0.5, 'b': 0.5}), 1.0)
assert np.allclose(entropy({'a': 1.0}), 0.0)
Explanation: The entropy is a quantiative measure of the disorder of a probability distribution. It is used extensively in Physics, Statistics, Machine Learning, Computer Science and Information Science. Given a set of probabilities $P_i$, the entropy is defined as:
$$H = - \Sigma_i P_i \log_2(P_i)$$
In this expression $\log_2$ is the base 2 log (np.log2), which is commonly used in information science. In Physics the natural log is often used in the definition of entropy.
Write a funtion entropy that computes the entropy of a probability distribution. The probability distribution will be passed as a Python dict: the values in the dict will be the probabilities.
To compute the entropy, you should:
First convert the values (probabilities) of the dict to a Numpy array of probabilities.
Then use other Numpy functions (np.log2, etc.) to compute the entropy.
Don't use any for or while loops in your code.
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
assert True # use this for grading the pi digits histogram
Explanation: Use IPython's interact function to create a user interface that allows you to type a string into a text box and see the entropy of the character probabilities of the string.
End of explanation |
2,377 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Modular neural nets
In the previous exercise, we started to build modules/general layers for implementing large neural networks. In this exercise, we will expand on this by implementing a convolutional layer, max pooling layer and a dropout layer.
For each layer we will implement forward and backward functions. The forward function will receive data, weights, and other parameters, and will return both an output and a cache object that stores data needed for the backward pass. The backward function will recieve upstream derivatives and the cache object, and will return gradients with respect to the data and all of the weights. This will allow us to write code that looks like this
Step2: Dropout layer
Step3: Dropout layer
Step4: Convolution layer
Step6: Aside
Step7: Convolution layer
Step8: Max pooling layer
Step9: Max pooling layer
Step10: Fast layers
Making convolution and pooling layers fast can be challenging. To spare you the pain, we've provided fast implementations of the forward and backward passes for convolution and pooling layers in the file cs231n/fast_layers.py.
The fast convolution implementation depends on a Cython extension; to compile it you need to run the following from the cs231n directory
Step11: Sandwich layers
There are a couple common layer "sandwiches" that frequently appear in ConvNets. For example convolutional layers are frequently followed by ReLU and pooling, and affine layers are frequently followed by ReLU. To make it more convenient to use these common patterns, we have defined several convenience layers in the file cs231n/layer_utils.py. Lets grad-check them to make sure that they work correctly | Python Code:
# As usual, a bit of setup
import numpy as np
import matplotlib.pyplot as plt
from cs231n.gradient_check import eval_numerical_gradient_array, eval_numerical_gradient
from cs231n.layers import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
returns relative error
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
Explanation: Modular neural nets
In the previous exercise, we started to build modules/general layers for implementing large neural networks. In this exercise, we will expand on this by implementing a convolutional layer, max pooling layer and a dropout layer.
For each layer we will implement forward and backward functions. The forward function will receive data, weights, and other parameters, and will return both an output and a cache object that stores data needed for the backward pass. The backward function will recieve upstream derivatives and the cache object, and will return gradients with respect to the data and all of the weights. This will allow us to write code that looks like this:
```python
def two_layer_net(X, W1, b1, W2, b2, reg):
# Forward pass; compute scores
s1, fc1_cache = affine_forward(X, W1, b1)
a1, relu_cache = relu_forward(s1)
scores, fc2_cache = affine_forward(a1, W2, b2)
# Loss functions return data loss and gradients on scores
data_loss, dscores = svm_loss(scores, y)
# Compute backward pass
da1, dW2, db2 = affine_backward(dscores, fc2_cache)
ds1 = relu_backward(da1, relu_cache)
dX, dW1, db1 = affine_backward(ds1, fc1_cache)
# A real network would add regularization here
# Return loss and gradients
return loss, dW1, db1, dW2, db2
```
End of explanation
# Check the dropout forward pass
x = np.random.randn(100, 100)
dropout_param_train = {'p': 0.25, 'mode': 'train'}
dropout_param_test = {'p': 0.25, 'mode': 'test'}
out_train, _ = dropout_forward(x, dropout_param_train)
out_test, _ = dropout_forward(x, dropout_param_test)
# Test dropout training mode; about 25% of the elements should be nonzero
print np.mean(out_train != 0) # expected to be ~0.25
# Test dropout test mode; all of the elements should be nonzero
print np.mean(out_test != 0) # expected to be = 1
Explanation: Dropout layer: forward
Open the file cs231n/layers.py and implement the dropout_forward function. You should implement inverted dropout rather than regular dropout. We can check the forward pass by looking at the statistics of the outputs in train and test modes.
End of explanation
from cs231n.gradient_check import eval_numerical_gradient_array
# Check the dropout backward pass
x = np.random.randn(5, 4)
dout = np.random.randn(*x.shape)
dropout_param = {'p': 0.8, 'mode': 'train', 'seed': 123}
dx_num = eval_numerical_gradient_array(lambda x: dropout_forward(x, dropout_param)[0], x, dout)
_, cache = dropout_forward(x, dropout_param)
dx = dropout_backward(dout, cache)
# The error should be around 1e-12
print 'Testing dropout_backward function:'
print 'dx error: ', rel_error(dx_num, dx)
Explanation: Dropout layer: backward
Open the file cs231n/layers.py and implement the dropout_backward function. We can check the backward pass using numerical gradient checking.
End of explanation
x_shape = (2, 3, 4, 4)
w_shape = (3, 3, 4, 4)
x = np.linspace(-0.1, 0.5, num=np.prod(x_shape)).reshape(x_shape)
w = np.linspace(-0.2, 0.3, num=np.prod(w_shape)).reshape(w_shape)
b = np.linspace(-0.1, 0.2, num=3)
conv_param = {'stride': 2, 'pad': 1}
out, _ = conv_forward_naive(x, w, b, conv_param)
correct_out = np.array([[[[[-0.08759809, -0.10987781],
[-0.18387192, -0.2109216 ]],
[[ 0.21027089, 0.21661097],
[ 0.22847626, 0.23004637]],
[[ 0.50813986, 0.54309974],
[ 0.64082444, 0.67101435]]],
[[[-0.98053589, -1.03143541],
[-1.19128892, -1.24695841]],
[[ 0.69108355, 0.66880383],
[ 0.59480972, 0.56776003]],
[[ 2.36270298, 2.36904306],
[ 2.38090835, 2.38247847]]]]])
# Compare your output to ours; difference should be around 1e-8
print 'Testing conv_forward_naive'
print 'difference: ', rel_error(out, correct_out)
Explanation: Convolution layer: forward naive
We are now ready to implement the forward pass for a convolutional layer. Implement the function conv_forward_naive in the file cs231n/layers.py.
You don't have to worry too much about efficiency at this point; just write the code in whatever way you find most clear.
You can test your implementation by running the following:
End of explanation
from scipy.misc import imread, imresize
kitten, puppy = imread('kitten.jpg'), imread('puppy.jpg')
# kitten is wide, and puppy is already square
d = kitten.shape[1] - kitten.shape[0]
kitten_cropped = kitten[:, d/2:-d/2, :]
img_size = 200 # Make this smaller if it runs too slow
x = np.zeros((2, 3, img_size, img_size))
x[0, :, :, :] = imresize(puppy, (img_size, img_size)).transpose((2, 0, 1))
x[1, :, :, :] = imresize(kitten_cropped, (img_size, img_size)).transpose((2, 0, 1))
# Set up a convolutional weights holding 2 filters, each 3x3
w = np.zeros((2, 3, 3, 3))
# The first filter converts the image to grayscale.
# Set up the red, green, and blue channels of the filter.
w[0, 0, :, :] = [[0, 0, 0], [0, 0.3, 0], [0, 0, 0]]
w[0, 1, :, :] = [[0, 0, 0], [0, 0.6, 0], [0, 0, 0]]
w[0, 2, :, :] = [[0, 0, 0], [0, 0.1, 0], [0, 0, 0]]
# Second filter detects horizontal edges in the blue channel.
w[1, 2, :, :] = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]
# Vector of biases. We don't need any bias for the grayscale
# filter, but for the edge detection filter we want to add 128
# to each output so that nothing is negative.
b = np.array([0, 128])
# Compute the result of convolving each input in x with each filter in w,
# offsetting by b, and storing the results in out.
out, _ = conv_forward_naive(x, w, b, {'stride': 1, 'pad': 1})
def imshow_noax(img, normalize=True):
Tiny helper to show images as uint8 and remove axis labels
if normalize:
img_max, img_min = np.max(img), np.min(img)
img = 255.0 * (img - img_min) / (img_max - img_min)
plt.imshow(img.astype('uint8'))
plt.gca().axis('off')
# Show the original images and the results of the conv operation
plt.subplot(2, 3, 1)
imshow_noax(puppy, normalize=False)
plt.title('Original image')
plt.subplot(2, 3, 2)
imshow_noax(out[0, 0])
plt.title('Grayscale')
plt.subplot(2, 3, 3)
imshow_noax(out[0, 1])
plt.title('Edges')
plt.subplot(2, 3, 4)
imshow_noax(kitten_cropped, normalize=False)
plt.subplot(2, 3, 5)
imshow_noax(out[1, 0])
plt.subplot(2, 3, 6)
imshow_noax(out[1, 1])
plt.show()
Explanation: Aside: Image processing via convolutions
As fun way to both check your implementation and gain a better understanding of the type of operation that convolutional layers can perform, we will set up an input containing two images and manually set up filters that perform common image processing operations (grayscale conversion and edge detection). The convolution forward pass will apply these operations to each of the input images. We can then visualize the results as a sanity check.
End of explanation
x = np.random.randn(4, 3, 5, 5)
w = np.random.randn(2, 3, 3, 3)
b = np.random.randn(2,)
dout = np.random.randn(4, 2, 5, 5)
conv_param = {'stride': 1, 'pad': 1}
dx_num = eval_numerical_gradient_array(lambda x: conv_forward_naive(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_forward_naive(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_forward_naive(x, w, b, conv_param)[0], b, dout)
out, cache = conv_forward_naive(x, w, b, conv_param)
dx, dw, db = conv_backward_naive(dout, cache)
# Your errors should be around 1e-9'
print 'Testing conv_backward_naive function'
print 'dx error: ', rel_error(dx, dx_num)
print 'dw error: ', rel_error(dw, dw_num)
print 'db error: ', rel_error(db, db_num)
Explanation: Convolution layer: backward naive
Next you need to implement the function conv_backward_naive in the file cs231n/layers.py. As usual, we will check your implementation with numeric gradient checking.
End of explanation
x_shape = (2, 3, 4, 4)
x = np.linspace(-0.3, 0.4, num=np.prod(x_shape)).reshape(x_shape)
pool_param = {'pool_width': 2, 'pool_height': 2, 'stride': 2}
out, _ = max_pool_forward_naive(x, pool_param)
correct_out = np.array([[[[-0.26315789, -0.24842105],
[-0.20421053, -0.18947368]],
[[-0.14526316, -0.13052632],
[-0.08631579, -0.07157895]],
[[-0.02736842, -0.01263158],
[ 0.03157895, 0.04631579]]],
[[[ 0.09052632, 0.10526316],
[ 0.14947368, 0.16421053]],
[[ 0.20842105, 0.22315789],
[ 0.26736842, 0.28210526]],
[[ 0.32631579, 0.34105263],
[ 0.38526316, 0.4 ]]]])
# Compare your output with ours. Difference should be around 1e-8.
print 'Testing max_pool_forward_naive function:'
print 'difference: ', rel_error(out, correct_out)
Explanation: Max pooling layer: forward naive
The last layer we need for a basic convolutional neural network is the max pooling layer. First implement the forward pass in the function max_pool_forward_naive in the file cs231n/layers.py.
End of explanation
x = np.random.randn(3, 2, 8, 8)
dout = np.random.randn(3, 2, 4, 4)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
dx_num = eval_numerical_gradient_array(lambda x: max_pool_forward_naive(x, pool_param)[0], x, dout)
out, cache = max_pool_forward_naive(x, pool_param)
dx = max_pool_backward_naive(dout, cache)
# Your error should be around 1e-12
print 'Testing max_pool_backward_naive function:'
print 'dx error: ', rel_error(dx, dx_num)
Explanation: Max pooling layer: backward naive
Implement the backward pass for a max pooling layer in the function max_pool_backward_naive in the file cs231n/layers.py. As always we check the correctness of the backward pass using numerical gradient checking.
End of explanation
from cs231n.fast_layers import conv_forward_fast, conv_backward_fast
from time import time
x = np.random.randn(100, 3, 31, 31)
w = np.random.randn(25, 3, 3, 3)
b = np.random.randn(25,)
dout = np.random.randn(100, 25, 16, 16)
conv_param = {'stride': 2, 'pad': 1}
t0 = time()
out_naive, cache_naive = conv_forward_naive(x, w, b, conv_param)
t1 = time()
out_fast, cache_fast = conv_forward_fast(x, w, b, conv_param)
t2 = time()
print 'Testing conv_forward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'Fast: %fs' % (t2 - t1)
print 'Speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'Difference: ', rel_error(out_naive, out_fast)
t0 = time()
dx_naive, dw_naive, db_naive = conv_backward_naive(dout, cache_naive)
t1 = time()
dx_fast, dw_fast, db_fast = conv_backward_fast(dout, cache_fast)
t2 = time()
print '\nTesting conv_backward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'Fast: %fs' % (t2 - t1)
print 'Speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'dx difference: ', rel_error(dx_naive, dx_fast)
print 'dw difference: ', rel_error(dw_naive, dw_fast)
print 'db difference: ', rel_error(db_naive, db_fast)
from cs231n.fast_layers import max_pool_forward_fast, max_pool_backward_fast
x = np.random.randn(100, 3, 32, 32)
dout = np.random.randn(100, 3, 16, 16)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
t0 = time()
out_naive, cache_naive = max_pool_forward_naive(x, pool_param)
t1 = time()
out_fast, cache_fast = max_pool_forward_fast(x, pool_param)
t2 = time()
print 'Testing pool_forward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'fast: %fs' % (t2 - t1)
print 'speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'difference: ', rel_error(out_naive, out_fast)
t0 = time()
dx_naive = max_pool_backward_naive(dout, cache_naive)
t1 = time()
dx_fast = max_pool_backward_fast(dout, cache_fast)
t2 = time()
print '\nTesting pool_backward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'dx difference: ', rel_error(dx_naive, dx_fast)
Explanation: Fast layers
Making convolution and pooling layers fast can be challenging. To spare you the pain, we've provided fast implementations of the forward and backward passes for convolution and pooling layers in the file cs231n/fast_layers.py.
The fast convolution implementation depends on a Cython extension; to compile it you need to run the following from the cs231n directory:
bash
python setup.py build_ext --inplace
The API for the fast versions of the convolution and pooling layers is exactly the same as the naive versions that you implemented above: the forward pass receives data, weights, and parameters and produces outputs and a cache object; the backward pass recieves upstream derivatives and the cache object and produces gradients with respect to the data and weights.
NOTE: The fast implementation for pooling will only perform optimally if the pooling regions are non-overlapping and tile the input. If these conditions are not met then the fast pooling implementation will not be much faster than the naive implementation.
You can compare the performance of the naive and fast versions of these layers by running the following:
End of explanation
from cs231n.layer_utils import conv_relu_pool_forward, conv_relu_pool_backward
x = np.random.randn(2, 3, 16, 16)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
out, cache = conv_relu_pool_forward(x, w, b, conv_param, pool_param)
dx, dw, db = conv_relu_pool_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], b, dout)
print 'Testing conv_relu_pool_forward:'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)
from cs231n.layer_utils import conv_relu_forward, conv_relu_backward
x = np.random.randn(2, 3, 8, 8)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}
out, cache = conv_relu_forward(x, w, b, conv_param)
dx, dw, db = conv_relu_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: conv_relu_forward(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_forward(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_forward(x, w, b, conv_param)[0], b, dout)
print 'Testing conv_relu_forward:'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)
from cs231n.layer_utils import affine_relu_forward, affine_relu_backward
x = np.random.randn(2, 3, 4)
w = np.random.randn(12, 10)
b = np.random.randn(10)
dout = np.random.randn(2, 10)
out, cache = affine_relu_forward(x, w, b)
dx, dw, db = affine_relu_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: affine_relu_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_relu_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_relu_forward(x, w, b)[0], b, dout)
print 'Testing affine_relu_forward:'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)
Explanation: Sandwich layers
There are a couple common layer "sandwiches" that frequently appear in ConvNets. For example convolutional layers are frequently followed by ReLU and pooling, and affine layers are frequently followed by ReLU. To make it more convenient to use these common patterns, we have defined several convenience layers in the file cs231n/layer_utils.py. Lets grad-check them to make sure that they work correctly:
End of explanation |
2,378 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pima Indian Diabetes Prediction
### Update History
2021-04-15 Added bypass of imputation of Num of Pregnancies field, switched to using only transform for test data, and added code to load data from client system when running in Colab.
Import some basic libraries.
* Pandas - provided data frames
* matplotlib.pyplot - plotting support
Use Magic %matplotlib to display graphics inline instead of in a popup window.
Step1: Loading and Reviewing the Data
Step2: Definition of features
From the metadata on the data source we have the following definition of the features.
| Feature | Description | Comments |
|--------------|-------------|--------|
| num_preg | number of pregnancies | 0 is valid
| glucose_conc | Plasma glucose concentration a 2 hours in an oral glucose tolerance test |
| diastolic_bp | Diastolic blood pressure (mm Hg) |
| thickness | Triceps skin fold thickness (mm) |
|insulin | 2-Hour serum insulin (mu U/ml) |
| bmi | Body mass index (weight in kg/(height in m)^2) |
| diab_pred | Diabetes pedigree function |
| Age (years) | Age (years)|
| skin | ???? | What is this? |
| diabetes | Class variable (1=True, 0=False) | Why is our data boolean (True/False)? |
Check for null values
Step4: Correlated Feature Check
Helper function that displays correlation by color. Red is most correlated, Blue least.
Step5: The skin and thickness columns are correlated 1 to 1. Dropping the skin column
Step6: Check for additional correlations
Step7: The correlations look good. There appear to be no coorelated columns.
Mold Data
Data Types
Inspect data types to see if there are any issues. Data should be numeric.
Step8: Change diabetes from boolean to integer, True=1, False=0
Step9: Verify that the diabetes data type has been changed.
Step10: Check for null values
Step11: No obvious null values.
Check class distribution
Rare events are hard to predict
Step12: Good distribution of true and false cases. No special work needed.
Spliting the data
70% for training, 30% for testing
Step13: We check to ensure we have the the desired 70% train, 30% test split of the data
Step14: Verifying predicted value was split correctly
Step15: Post-split Data Preparation
Hidden Missing Values
Step16: Are these 0 values possible?
How many rows have have unexpected 0 values?
Step17: Impute with the mean
Step18: Training Initial Algorithm - Naive Bayes
Step19: Performance on Training Data
Step20: Performance on Testing Data
Step21: Metrics
Step22: Random Forest
Step23: Predict Training Data
Step24: Predict Test Data
Step25: Logistic Regression
Step26: Setting regularization parameter
Step27: Logisitic regression with class_weight='balanced'
Step28: LogisticRegressionCV
Step29: Predict on Test data
Step30: Using your trained Model
Save trained model to file
Step31: Load trained model from file
Step32: Test Prediction on data
Once the model is loaded we can use it to predict on some data. In this case the data file contains a few rows from the original Pima CSV file.
Step33: The truncated file contained 4 rows from the original CSV.
Data is the same is in same format as the original CSV file's data. Therefore, just like the original data, we need to transform it before we can make predictions on the data.
Note
Step34: We need to drop the diabetes column since that is what we are predicting.
Store data without the column with the prefix X as we did with the X_train and X_test to indicate that it contains only the columns we are prediction.
Step35: Data has 0 in places it should not.
Just like test or test datasets we will use imputation to fix this.
Step36: At this point our data is ready to be used for prediction.
Predict diabetes with the prediction data. Returns 1 if True, 0 if false | Python Code:
import pandas as pd # pandas is a dataframe library
import matplotlib.pyplot as plt # matplotlib.pyplot plots data
%matplotlib inline
Explanation: Pima Indian Diabetes Prediction
### Update History
2021-04-15 Added bypass of imputation of Num of Pregnancies field, switched to using only transform for test data, and added code to load data from client system when running in Colab.
Import some basic libraries.
* Pandas - provided data frames
* matplotlib.pyplot - plotting support
Use Magic %matplotlib to display graphics inline instead of in a popup window.
End of explanation
# COLAB VERSION ONLY - load file from user's computer
#from google.colab import files
#uploaded = files.upload()
# once uploaded, read the files contents into the dataframe
#import io
#df = pd.read_csv(io.BytesIO(uploaded['pima-data.csv']))
# regular version pointing to file on users system
df = pd.read_csv("./data/pima-data.csv")
df.shape
df.head(5)
df.tail(5)
Explanation: Loading and Reviewing the Data
End of explanation
df.isnull().values.any()
Explanation: Definition of features
From the metadata on the data source we have the following definition of the features.
| Feature | Description | Comments |
|--------------|-------------|--------|
| num_preg | number of pregnancies | 0 is valid
| glucose_conc | Plasma glucose concentration a 2 hours in an oral glucose tolerance test |
| diastolic_bp | Diastolic blood pressure (mm Hg) |
| thickness | Triceps skin fold thickness (mm) |
|insulin | 2-Hour serum insulin (mu U/ml) |
| bmi | Body mass index (weight in kg/(height in m)^2) |
| diab_pred | Diabetes pedigree function |
| Age (years) | Age (years)|
| skin | ???? | What is this? |
| diabetes | Class variable (1=True, 0=False) | Why is our data boolean (True/False)? |
Check for null values
End of explanation
def plot_corr(df, size=11):
Function plots a graphical correlation matrix for each pair of columns in the dataframe.
Input:
df: pandas DataFrame
size: vertical and horizontal size of the plot
Displays:
matrix of correlation between columns. Blue-cyan-yellow-red-darkred => less to more correlated
0 ------------------> 1
Expect a darkred line running from top left to bottom right
corr = df.corr() # data frame correlation function
fig, ax = plt.subplots(figsize=(size, size))
ax.matshow(corr) # color code the rectangles by correlation value
plt.xticks(range(len(corr.columns)), corr.columns) # draw x tick marks
plt.yticks(range(len(corr.columns)), corr.columns) # draw y tick marks
plot_corr(df)
df.corr()
df.head(5)
Explanation: Correlated Feature Check
Helper function that displays correlation by color. Red is most correlated, Blue least.
End of explanation
del df['skin']
df.head(5)
Explanation: The skin and thickness columns are correlated 1 to 1. Dropping the skin column
End of explanation
plot_corr(df)
Explanation: Check for additional correlations
End of explanation
df.head(5)
Explanation: The correlations look good. There appear to be no coorelated columns.
Mold Data
Data Types
Inspect data types to see if there are any issues. Data should be numeric.
End of explanation
diabetes_map = {True : 1, False : 0}
df['diabetes'] = df['diabetes'].map(diabetes_map)
Explanation: Change diabetes from boolean to integer, True=1, False=0
End of explanation
df.head(5)
Explanation: Verify that the diabetes data type has been changed.
End of explanation
df.isnull().values.any()
Explanation: Check for null values
End of explanation
num_obs = len(df)
num_true = len(df.loc[df['diabetes'] == 1])
num_false = len(df.loc[df['diabetes'] == 0])
print("Number of True cases: {0} ({1:2.2f}%)".format(num_true, (num_true/num_obs) * 100))
print("Number of False cases: {0} ({1:2.2f}%)".format(num_false, (num_false/num_obs) * 100))
Explanation: No obvious null values.
Check class distribution
Rare events are hard to predict
End of explanation
#from sklearn.cross_validation import train_test_split
from sklearn.model_selection import train_test_split
feature_col_names = ['num_preg', 'glucose_conc', 'diastolic_bp', 'thickness', 'insulin', 'bmi', 'diab_pred', 'age']
predicted_class_names = ['diabetes']
X = df[feature_col_names] # predictor feature columns (8 X m)
y = df[predicted_class_names] # predicted class (1=true, 0=false) column (1 X m)
split_test_size = 0.30
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=split_test_size, random_state=42)
# test_size = 0.3 is 30%, 42 is the answer to everything
Explanation: Good distribution of true and false cases. No special work needed.
Spliting the data
70% for training, 30% for testing
End of explanation
print("{0:0.2f}% in training set".format((len(X_train)/len(df.index)) * 100))
print("{0:0.2f}% in test set".format((len(X_test)/len(df.index)) * 100))
Explanation: We check to ensure we have the the desired 70% train, 30% test split of the data
End of explanation
print("Original True : {0} ({1:0.2f}%)".format(len(df.loc[df['diabetes'] == 1]), (len(df.loc[df['diabetes'] == 1])/len(df.index)) * 100.0))
print("Original False : {0} ({1:0.2f}%)".format(len(df.loc[df['diabetes'] == 0]), (len(df.loc[df['diabetes'] == 0])/len(df.index)) * 100.0))
print("")
print("Training True : {0} ({1:0.2f}%)".format(len(y_train[y_train[:] == 1]), (len(y_train[y_train[:] == 1])/len(y_train) * 100.0)))
print("Training False : {0} ({1:0.2f}%)".format(len(y_train[y_train[:] == 0]), (len(y_train[y_train[:] == 0])/len(y_train) * 100.0)))
print("")
print("Test True : {0} ({1:0.2f}%)".format(len(y_test[y_test[:] == 1]), (len(y_test[y_test[:] == 1])/len(y_test) * 100.0)))
print("Test False : {0} ({1:0.2f}%)".format(len(y_test[y_test[:] == 0]), (len(y_test[y_test[:] == 0])/len(y_test) * 100.0)))
Explanation: Verifying predicted value was split correctly
End of explanation
df.head()
Explanation: Post-split Data Preparation
Hidden Missing Values
End of explanation
print("# rows in dataframe {0}".format(len(df)))
print("# rows missing glucose_conc: {0}".format(len(df.loc[df['glucose_conc'] == 0])))
print("# rows missing diastolic_bp: {0}".format(len(df.loc[df['diastolic_bp'] == 0])))
print("# rows missing thickness: {0}".format(len(df.loc[df['thickness'] == 0])))
print("# rows missing insulin: {0}".format(len(df.loc[df['insulin'] == 0])))
print("# rows missing bmi: {0}".format(len(df.loc[df['bmi'] == 0])))
print("# rows missing diab_pred: {0}".format(len(df.loc[df['diab_pred'] == 0])))
print("# rows missing age: {0}".format(len(df.loc[df['age'] == 0])))
Explanation: Are these 0 values possible?
How many rows have have unexpected 0 values?
End of explanation
# NEED CALLOUT MENTION CHANGE TO SIMPLEIMPUTER
from sklearn.impute import SimpleImputer
#Impute with mean all 0 readings
fill_0 = SimpleImputer(missing_values=0, strategy="mean")
# Notice the missing_values=0 will be replaced by mean. However, the num_preg can have a value of 0.
# To prevent replacing the 0 num_preg with the mean we need to skip imputing the 'num_preg' column
cols_not_num_preg = X_train.columns.difference(['num_preg']) # all columns but the num_preg column
pd.options.mode.chained_assignment = None # Supress warning message on transformed assignment
# impute the training data
X_train[cols_not_num_preg] = fill_0.fit_transform(X_train[cols_not_num_preg])
# impute the test data
X_test[cols_not_num_preg] = fill_0.transform(X_test[cols_not_num_preg])
Explanation: Impute with the mean
End of explanation
from sklearn.naive_bayes import GaussianNB
# create Gaussian Naive Bayes model object and train it with the data
nb_model = GaussianNB()
nb_model.fit(X_train, y_train.values.flatten())
Explanation: Training Initial Algorithm - Naive Bayes
End of explanation
# predict values using the training data
nb_predict_train = nb_model.predict(X_train)
# import the performance metrics library
from sklearn import metrics
# Accuracy
print("Accuracy: {0:.4f}".format(metrics.accuracy_score(y_train, nb_predict_train)))
print()
Explanation: Performance on Training Data
End of explanation
# predict values using the testing data
nb_predict_test = nb_model.predict(X_test)
from sklearn import metrics
# training metrics
print("nb_predict_test", nb_predict_test)
print ("y_test", y_test)
print("Accuracy: {0:.4f}".format(metrics.accuracy_score(y_test, nb_predict_test)))
Explanation: Performance on Testing Data
End of explanation
print("Confusion Matrix")
print("{0}".format(metrics.confusion_matrix(y_test, nb_predict_test)))
print("")
print("Classification Report")
print(metrics.classification_report(y_test, nb_predict_test))
Explanation: Metrics
End of explanation
from sklearn.ensemble import RandomForestClassifier
rf_model = RandomForestClassifier(random_state=42, n_estimators=10) # Create random forest object
rf_model.fit(X_train, y_train.values.flatten())
Explanation: Random Forest
End of explanation
rf_predict_train = rf_model.predict(X_train)
# training metrics
print("Accuracy: {0:.4f}".format(metrics.accuracy_score(y_train, rf_predict_train)))
Explanation: Predict Training Data
End of explanation
rf_predict_test = rf_model.predict(X_test)
# training metrics
print("Accuracy: {0:.4f}".format(metrics.accuracy_score(y_test, rf_predict_test)))
print(metrics.confusion_matrix(y_test, rf_predict_test) )
print("")
print("Classification Report")
print(metrics.classification_report(y_test, rf_predict_test))
Explanation: Predict Test Data
End of explanation
from sklearn.linear_model import LogisticRegression
lr_model =LogisticRegression(C=0.7, random_state=42, solver='liblinear', max_iter=10000)
lr_model.fit(X_train, y_train.values.flatten()) #.ravel())
lr_predict_test = lr_model.predict(X_test)
# training metrics
print("Accuracy: {0:.4f}".format(metrics.accuracy_score(y_test, lr_predict_test)))
print(metrics.confusion_matrix(y_test, lr_predict_test) )
print("")
print("Classification Report")
print(metrics.classification_report(y_test, lr_predict_test))
Explanation: Logistic Regression
End of explanation
C_start = 0.1
C_end = 5
C_inc = 0.1
C_values, recall_scores = [], []
C_val = C_start
best_recall_score = 0
while (C_val < C_end):
C_values.append(C_val)
lr_model_loop = LogisticRegression(C=C_val, random_state=42, solver='liblinear')
lr_model_loop.fit(X_train, y_train.values.flatten()) #.ravel())
lr_predict_loop_test = lr_model_loop.predict(X_test)
recall_score = metrics.recall_score(y_test, lr_predict_loop_test)
recall_scores.append(recall_score)
if (recall_score > best_recall_score):
best_recall_score = recall_score
best_lr_predict_test = lr_predict_loop_test
C_val = C_val + C_inc
best_score_C_val = C_values[recall_scores.index(best_recall_score)]
print("1st max value of {0:.3f} occured at C={1:.3f}".format(best_recall_score, best_score_C_val))
%matplotlib inline
plt.plot(C_values, recall_scores, "-")
plt.xlabel("C value")
plt.ylabel("recall score")
Explanation: Setting regularization parameter
End of explanation
C_start = 0.1
C_end = 5
C_inc = 0.1
C_values, recall_scores = [], []
C_val = C_start
best_recall_score = 0
while (C_val < C_end):
C_values.append(C_val)
lr_model_loop = LogisticRegression(C=C_val, class_weight="balanced", random_state=42, solver='liblinear', max_iter=10000)
lr_model_loop.fit(X_train, y_train.values.flatten())
lr_predict_loop_test = lr_model_loop.predict(X_test)
recall_score = metrics.recall_score(y_test, lr_predict_loop_test)
recall_scores.append(recall_score)
if (recall_score > best_recall_score):
best_recall_score = recall_score
best_lr_predict_test = lr_predict_loop_test
C_val = C_val + C_inc
best_score_C_val = C_values[recall_scores.index(best_recall_score)]
print("1st max value of {0:.3f} occured at C={1:.3f}".format(best_recall_score, best_score_C_val))
%matplotlib inline
plt.plot(C_values, recall_scores, "-")
plt.xlabel("C value")
plt.ylabel("recall score")
from sklearn.linear_model import LogisticRegression
lr_model =LogisticRegression( class_weight="balanced", C=best_score_C_val, random_state=42, solver='liblinear')
lr_model.fit(X_train, y_train.values.flatten())
lr_predict_test = lr_model.predict(X_test)
# training metrics
print("Accuracy: {0:.4f}".format(metrics.accuracy_score(y_test, lr_predict_test)))
print(metrics.confusion_matrix(y_test, lr_predict_test) )
print("")
print("Classification Report")
print(metrics.classification_report(y_test, lr_predict_test))
print(metrics.recall_score(y_test, lr_predict_test))
Explanation: Logisitic regression with class_weight='balanced'
End of explanation
from sklearn.linear_model import LogisticRegressionCV
lr_cv_model = LogisticRegressionCV(n_jobs=-1, random_state=42, Cs=3, cv=10, refit=False, class_weight="balanced", max_iter=500) # set number of jobs to -1 which uses all cores to parallelize
lr_cv_model.fit(X_train, y_train.values.flatten())
Explanation: LogisticRegressionCV
End of explanation
lr_cv_predict_test = lr_cv_model.predict(X_test)
# training metrics
print("Accuracy: {0:.4f}".format(metrics.accuracy_score(y_test, lr_cv_predict_test)))
print(metrics.confusion_matrix(y_test, lr_cv_predict_test) )
print("")
print("Classification Report")
print(metrics.classification_report(y_test, lr_cv_predict_test))
Explanation: Predict on Test data
End of explanation
import joblib
joblib.dump(lr_cv_model, "./data/pima-trained-model.pkl")
Explanation: Using your trained Model
Save trained model to file
End of explanation
lr_cv_model = joblib.load("./data/pima-trained-model.pkl")
Explanation: Load trained model from file
End of explanation
# get data from truncated pima data file
df_predict = pd.read_csv("./data/pima-data-trunc.csv")
print(df_predict.shape)
df_predict
Explanation: Test Prediction on data
Once the model is loaded we can use it to predict on some data. In this case the data file contains a few rows from the original Pima CSV file.
End of explanation
del df_predict['skin']
df_predict
Explanation: The truncated file contained 4 rows from the original CSV.
Data is the same is in same format as the original CSV file's data. Therefore, just like the original data, we need to transform it before we can make predictions on the data.
Note: If the data had been previously "cleaned up" this would not be necessary.
We do this by executed the same transformations as we did to the original data
Start by dropping the "skin" which is the same as thickness, with different units.
End of explanation
X_predict = df_predict
del X_predict['diabetes']
Explanation: We need to drop the diabetes column since that is what we are predicting.
Store data without the column with the prefix X as we did with the X_train and X_test to indicate that it contains only the columns we are prediction.
End of explanation
#Impute with mean all 0 readings
from sklearn.impute import SimpleImputer
fill_0 = SimpleImputer(missing_values=0, strategy="mean")
pd.options.mode.chained_assignment = None
X_predict_cols_not_num_preg = X_predict.columns.difference(['num_preg']) # do not impute num_preg column
X_predict[X_predict_cols_not_num_preg] = fill_0.fit_transform(X_predict[X_predict_cols_not_num_preg])
X_predict
Explanation: Data has 0 in places it should not.
Just like test or test datasets we will use imputation to fix this.
End of explanation
lr_cv_model.predict(X_predict)
Explanation: At this point our data is ready to be used for prediction.
Predict diabetes with the prediction data. Returns 1 if True, 0 if false
End of explanation |
2,379 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ATM 623
Step1: <a id='section1'></a>
1. Recap of the global energy budget
Let's look again at the observations
Step2: Let's now deal with the shortwave (solar) side of the energy budget.
Absorbed Shortwave Radiation (ASR) and Planetary Albedo
Let's define a few terms.
Global mean insolation
From the observations, the area-averaged incoming solar radiation or insolation is 341.3 W m$^{-2}$.
Let's denote this quantity by $Q$.
Step3: Planetary albedo
Some of the incoming radiation is not absorbed at all but simply reflected back to space. Let's call this quantity $F_{reflected}$
From observations we have
Step4: The planetary albedo is the fraction of $Q$ that is reflected.
We will denote the planetary albedo by $\alpha$.
From the observations
Step5: That is, about 30% of the incoming radiation is reflected back to space.
Absorbed Shortwave Radiation
The Absorbed Shortwave Radiation or ASR is the part of the incoming sunlight that is not reflected back to space, i.e. that part that is absorbed somewhere within the Earth system.
Mathematically we write
$$ \text{ASR} = Q - F_{reflected} = (1-\alpha) Q $$
From the observations
Step6: As we noted last time, this number is just slightly greater than the observed OLR of 238.5 W m$^{-2}$.
<a id='section3'></a>
3. Equilibrium temperature
This is one of the central concepts in climate modeling.
The Earth system is in energy balance when energy in = energy out, i.e. when
$$ \text{ASR} = \text{OLR} $$
We want to know
Step7: And this equilibrium temperature is just slightly warmer than 288 K. Why?
A climate change scenario
Suppose that, due to global warming (changes in atmospheric composition and subsequent changes in cloudiness)
Step8: Most climate models are more complicated mathematically, and solving directly for the equilibrium temperature will not be possible!
Instead, we will be able to use the model to calculate the terms in the energy budget (ASR and OLR).
Python exercise
Write Python functions to calculate ASR and OLR for arbitrary parameter values.
Verify the following
Step9: Solving the energy balance model
This is a first-order Ordinary Differential Equation (ODE) for $T_s$ as a function of time. It is also our very first climate model.
To solve it (i.e. see how $T_s$ evolves from some specified initial condition) we have two choices
Step10: What happened? Why?
Try another timestep
Step11: Warmed up again, but by a smaller amount.
But this is tedious typing. Time to define a function to make things easier and more reliable
Step12: Try it out with an arbitrary temperature
Step13: Notice that our function calls other functions and variables we have already defined.
Python fact 10
Step14: What did we just do?
Created an array of zeros
set the initial temperature to 288 K
repeated our time step 20 times.
Stored the results of each time step into the array.
Python fact 11
Step15: Note how the temperature adjusts smoothly toward the equilibrium temperature, that is, the temperature at which
ASR = OLR.
If the planetary energy budget is out of balance, the temperature must change so that the OLR gets closer to the ASR!
The adjustment is actually an exponential decay process
Step16: This is actually our first estimate of what is often called the Planck feedback. It is the tendency for a warm surface to cool by increased longwave radiation to space.
It may also be refered to as the "no-feedback" climate response parameter. As we will see, $\lambda_0$ quantifies the sensitivity of the climate system in the absence of any actual feedback processes.
Solve the linear ODE
Now define
$$ t^* = \frac{C}{\lambda_0} $$
This is a positive constant with dimensions of time (seconds). With these definitions the temperature evolves according to
$$ \frac{d T_s^\prime}{d t} = - \frac{T_s^\prime}{t^*}$$
This is one of the simplest ODEs. Hopefully it looks familiar to most of you. It is the equation for an exponential decay process.
We can easily solve for the temperature evolution by integrating from an initial condition $T_s^\prime(0)$
Step17: This is a rather fast timescale relative to other processes that can affect the planetary energy budget.
But notice that the climate feedback parameter $\lambda$ is smaller, the timescale gets longer. We will come back to this later.
<a id='section8'></a>
8. Summary and take-away messages
We looked at the flows of energy in and out of the Earth system.
These are determined by radiation at the top of the Earth's atmosphere.
Any imbalance between shortwave absorption (ASR) and longwave emission (OLR) drives a change in temperature
Using this idea, we built a climate model!
This Zero-Dimensional Energy Balance Model solves for the global, annual mean surface temperature $T_s$
Two key assumptions | Python Code:
# Ensure compatibility with Python 2 and 3
from __future__ import print_function, division
Explanation: ATM 623: Climate Modeling
Brian E. J. Rose, University at Albany
Lecture 2: The zero-dimensional energy balance model
Warning: content out of date and not maintained
You really should be looking at The Climate Laboratory book by Brian Rose, where all the same content (and more!) is kept up to date.
Here you are likely to find broken links and broken code.
About these notes:
This document uses the interactive Jupyter notebook format. The notes can be accessed in several different ways:
The interactive notebooks are hosted on github at https://github.com/brian-rose/ClimateModeling_courseware
The latest versions can be viewed as static web pages rendered on nbviewer
A complete snapshot of the notes as of May 2017 (end of spring semester) are available on Brian's website.
Also here is a legacy version from 2015.
Many of these notes make use of the climlab package, available at https://github.com/brian-rose/climlab
Contents
Recap of global energy budget
Tuning radiative fluxes to the observations
Equilibrium temperature
A time-dependent Energy Balance Model
Representing time derivatives on a computer
Numerical solution of the Energy Balance Model
Analytical solution of the Energy Balance Model: e-folding time and feedback parameter
Summary and take-away messages
End of explanation
OLRobserved = 238.5 # in W/m2
sigma = 5.67E-8 # S-B constant
Tsobserved = 288. # global average surface temperature
tau = OLRobserved / sigma / Tsobserved**4 # solve for tuned value of transmissivity
print(tau)
Explanation: <a id='section1'></a>
1. Recap of the global energy budget
Let's look again at the observations:
<a id='section1'></a>
2. Tuning radiative fluxes to the observations
Recap of our simple greenhouse model
Last class we introduced a very simple model for the OLR or Outgoing Longwave Radiation to space:
$$ \text{OLR} = \tau \sigma T_s^4 $$
where $\tau$ is the transmissivity of the atmosphere, a number less than 1 that represents the greenhouse effect of Earth's atmosphere.
We also tuned this model to the observations by choosing $ \tau \approx 0.61$.
More precisely:
End of explanation
Q = 341.3 # the insolation
Explanation: Let's now deal with the shortwave (solar) side of the energy budget.
Absorbed Shortwave Radiation (ASR) and Planetary Albedo
Let's define a few terms.
Global mean insolation
From the observations, the area-averaged incoming solar radiation or insolation is 341.3 W m$^{-2}$.
Let's denote this quantity by $Q$.
End of explanation
Freflected = 101.9 # reflected shortwave flux in W/m2
Explanation: Planetary albedo
Some of the incoming radiation is not absorbed at all but simply reflected back to space. Let's call this quantity $F_{reflected}$
From observations we have:
End of explanation
alpha = Freflected / Q
print(alpha)
Explanation: The planetary albedo is the fraction of $Q$ that is reflected.
We will denote the planetary albedo by $\alpha$.
From the observations:
End of explanation
ASRobserved = Q - Freflected
print(ASRobserved)
Explanation: That is, about 30% of the incoming radiation is reflected back to space.
Absorbed Shortwave Radiation
The Absorbed Shortwave Radiation or ASR is the part of the incoming sunlight that is not reflected back to space, i.e. that part that is absorbed somewhere within the Earth system.
Mathematically we write
$$ \text{ASR} = Q - F_{reflected} = (1-\alpha) Q $$
From the observations:
End of explanation
# define a reusable function!
def equilibrium_temperature(alpha,Q,tau):
return ((1-alpha)*Q/(tau*sigma))**(1/4)
Teq_observed = equilibrium_temperature(alpha,Q,tau)
print(Teq_observed)
Explanation: As we noted last time, this number is just slightly greater than the observed OLR of 238.5 W m$^{-2}$.
<a id='section3'></a>
3. Equilibrium temperature
This is one of the central concepts in climate modeling.
The Earth system is in energy balance when energy in = energy out, i.e. when
$$ \text{ASR} = \text{OLR} $$
We want to know:
What surface temperature do we need to have this balance?
By how much would the temperature change in response to other changes in Earth system?
Changes in greenhouse gases
Changes in cloudiness
etc.
With our simple greenhouse model, we can get an exact solution for the equilibrium temperature.
First, write down our statement of energy balance:
$$ (1-\alpha) Q = \tau \sigma T_s^4 $$
Rearrange to solve for $T_s$:
$$ T_s^4 = \frac{(1-\alpha) Q}{\tau \sigma} $$
and take the fourth root, denoting our equilibrium temperature as $T_{eq}$:
$$ T_{eq} = \left( \frac{(1-\alpha) Q}{\tau \sigma} \right)^\frac{1}{4} $$
Plugging the observed values back in, we compute:
End of explanation
Teq_new = equilibrium_temperature(0.32,Q,0.57)
# an example of formatted print output, limiting to two or one decimal places
print('The new equilibrium temperature is {:.2f} K.'.format(Teq_new))
print('The equilibrium temperature increased by about {:.1f} K.'.format(Teq_new-Teq_observed))
Explanation: And this equilibrium temperature is just slightly warmer than 288 K. Why?
A climate change scenario
Suppose that, due to global warming (changes in atmospheric composition and subsequent changes in cloudiness):
The longwave transmissitivity decreases to $\tau = 0.57$
The planetary albedo increases to $\alpha = 0.32$
What is the new equilibrium temperature?
For this very simple model, we can work out the answer exactly:
End of explanation
c_w = 4E3 # Specific heat of water in J/kg/K
rho_w = 1E3 # Density of water in kg/m3
H = 100. # Depth of water in m
C = c_w * rho_w * H # Heat capacity of the model
print('The effective heat capacity is {:.1e} J/m2/K'.format(C))
Explanation: Most climate models are more complicated mathematically, and solving directly for the equilibrium temperature will not be possible!
Instead, we will be able to use the model to calculate the terms in the energy budget (ASR and OLR).
Python exercise
Write Python functions to calculate ASR and OLR for arbitrary parameter values.
Verify the following:
With the new parameter values but the old temperature T = 288 K, is ASR greater or lesser than OLR?
Is the Earth gaining or losing energy?
How does your answer change if T = 295 K (or any other temperature greater than 291 K)?
<a id='section4'></a>
4. A time-dependent Energy Balance Model
The above exercise shows us that if some properties of the climate system change in such a way that the equilibrium temperature goes up, then the Earth system receives more energy from the sun than it is losing to space. The system is no longer in energy balance.
The temperature must then increase to get back into balance. The increase will not happen all at once! It will take time for energy to accumulate in the climate system. We want to model this time-dependent adjustment of the system.
In fact almost all climate models are time-dependent, meaning the model calculates time derivatives (rates of change) of climate variables.
An energy balance equation
We will write the total energy budget of the Earth system as
$$ \frac{dE}{dt} = (1-\alpha) Q - OLR $$
Note: This is a generically true statement. We have just defined some terms, and made the (very good) assumption that the only significant energy sources are radiative exchanges with space.
This equation is the starting point for EVERY CLIMATE MODEL.
But so far, we don’t actually have a MODEL. We just have a statement of a budget. To use this budget to make a model, we need to relate terms in the budget to state variables of the atmosphere-ocean system.
For now, the state variable we are most interested in is temperature – because it is directly connected to the physics of each term above.
If we now suppose that
$$ E = C T_s $$
where $T_s$ is the global mean surface temperature, and $C$ is a constant – the effective heat capacity of the atmosphere- ocean column.
then our budget equation becomes:
$$ C \frac{dT_s}{dt} = \text{ASR} - \text{OLR} $$
where
$C$ is the heat capacity of Earth system, in units of J m$^{-2}$ K$^{-1}$.
$\frac{dT}{dt}$ is the rate of change of global average surface temperature.
By adopting this equation, we are assuming that the energy content of the Earth system (atmosphere, ocean, ice, etc.) is proportional to surface temperature.
Important things to think about:
Why is this a sensible assumption?
What determines the heat capacity $C$?
What are some limitations of this assumption?
For our purposes here we are going to use a value of C equivalent to heating 100 meters of water:
$$C = c_w \rho_w H$$
where
$c_w = 4 \times 10^3$ J kg$^{-1}$ $^\circ$C$^{-1}$ is the specific heat of water,
$\rho_w = 10^3$ kg m$^{-3}$ is the density of water, and
$H$ is an effective depth of water that is heated or cooled.
End of explanation
dt = 60. * 60. * 24. * 365. # one year expressed in seconds
# Try a single timestep, assuming we have working functions for ASR and OLR
T1 = 288.
T2 = T1 + dt / C * ( ASR(alpha=0.32) - OLR(T1, tau=0.57) )
print(T2)
Explanation: Solving the energy balance model
This is a first-order Ordinary Differential Equation (ODE) for $T_s$ as a function of time. It is also our very first climate model.
To solve it (i.e. see how $T_s$ evolves from some specified initial condition) we have two choices:
Solve it analytically
Solve it numerically
Option 1 (analytical) will usually not be possible because the equations will typically be too complex and non-linear. This is why computers are our best friends in the world of climate modeling.
HOWEVER it is often useful and instructive to simplify a model down to something that is analytically solvable when possible. Why? Two reasons:
Analysis will often yield a deeper understanding of the behavior of the system
Gives us a benchmark against which to test the results of our numerical solutions.
<a id='section5'></a>
5. Representing time derivatives on a computer
Recall that the derivative is the instantaneous rate of change. It is defined as
$$ \frac{dT}{dt} = \lim_{\Delta t\rightarrow 0} \frac{\Delta T}{\Delta t}$$
On the computer there is no such thing as an instantaneous change.
We are always dealing with discrete quantities.
So we approximate the derivative with $\Delta T/ \Delta t$.
So long as we take the time interval $\Delta t$ "small enough", the approximation is valid and useful.
(The meaning of "small enough" varies widely in practice. Let's not talk about it now)
So we write our model as
$$ C \frac{\Delta T}{\Delta t} \approx \text{ASR} - \text{OLR}$$
where $\Delta T$ is the change in temperature predicted by our model over a short time interval $\Delta t$.
We can now use this to make a prediction:
Given a current temperature $T_1$ at time $t_1$, what is the temperature $T_2$ at a future time $t_2$?
We can write
$$ \Delta T = T_2-T_1 $$
$$ \Delta t = t_2-t_1 $$
and so our model says
$$ C \frac{T_2-T_1}{\Delta t} = \text{ASR} - \text{OLR} $$
Which we can rearrange to solve for the future temperature:
$$ T_2 = T_1 + \frac{\Delta t}{C} \left( \text{ASR} - \text{OLR}(T_1) \right) $$
We now have a formula with which to make our prediction!
Notice that we have written the OLR as a function of temperature. We will use the current temperature $T_1$ to compute the OLR, and use that OLR to determine the future temperature.
<a id='section6'></a>
6. Numerical solution of the Energy Balance Model
The quantity $\Delta t$ is called a timestep. It is the smallest time interval represented in our model.
Here we're going to use a timestep of 1 year:
End of explanation
T1 = T2
T2 = T1 + dt / C * ( ASR(alpha=0.32) - OLR(T1, tau=0.57) )
print(T2)
Explanation: What happened? Why?
Try another timestep
End of explanation
def step_forward(T):
return T + dt / C * ( ASR(alpha=0.32) - OLR(T, tau=0.57) )
Explanation: Warmed up again, but by a smaller amount.
But this is tedious typing. Time to define a function to make things easier and more reliable:
End of explanation
step_forward(300.)
Explanation: Try it out with an arbitrary temperature:
End of explanation
import numpy as np
numsteps = 20
Tsteps = np.zeros(numsteps+1)
Years = np.zeros(numsteps+1)
Tsteps[0] = 288.
for n in range(numsteps):
Years[n+1] = n+1
Tsteps[n+1] = step_forward( Tsteps[n] )
print(Tsteps)
Explanation: Notice that our function calls other functions and variables we have already defined.
Python fact 10: Functions can access variables and other functions defined outside of the function.
This is both very useful and occasionally confusing.
Now let's really harness the power of the computer by making a loop (and storing values in arrays):
End of explanation
# a special instruction for the Jupyter notebook
# Display all plots inline in the notebook
%matplotlib inline
# import the plotting package
import matplotlib.pyplot as plt
plt.plot( Years, Tsteps )
plt.xlabel('Years')
plt.ylabel('Global mean temperature (K)');
Explanation: What did we just do?
Created an array of zeros
set the initial temperature to 288 K
repeated our time step 20 times.
Stored the results of each time step into the array.
Python fact 11: the for statement executes a statement (or series of statements) a specified number of times (a loop!)
Python fact 12: Use square bracket [ ] to refer to elements of an array or list. Use round parentheses ( ) for function arguments.
Plotting the result
Now let's draw a picture of our result!
End of explanation
lambda_0 = 4 * sigma * tau * Teq_observed**3
# This is an example of formatted text output in Python
print( 'lambda_0 = {:.2f} W m-2 K-1'.format(lambda_0) )
Explanation: Note how the temperature adjusts smoothly toward the equilibrium temperature, that is, the temperature at which
ASR = OLR.
If the planetary energy budget is out of balance, the temperature must change so that the OLR gets closer to the ASR!
The adjustment is actually an exponential decay process: The rate of adjustment slows as the temperature approaches equilibrium.
The temperature gets very very close to equilibrium but never reaches it exactly.
Python fact 13: We can easily make simple graphs with the function plt.plot(x,y), where x and y are arrays of the same size. But we must import it first.
This is actually not native Python, but uses a special graphics library called matplotlib.
Just about all of our notebooks will start with this:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
<a id='section7'></a>
7. Analytical solution of the Energy Balance Model: e-folding time and feedback parameter
Equilibrium solutions
We've already seen that the equilibrium solution of the model is
$$ T_{eq} = \left( \frac{(1-\alpha) Q}{\tau \sigma} \right)^\frac{1}{4} $$
and tuned the model parameter based on this relationship.
We are going to linearize the equation for small perturbations away from this equilibrium.
Let $T_s = T_{eq} + T_s^\prime$ and restrict our solution to $T_s^\prime << T_{eq}$.
Note this this is not a big restriction! For example, a 10 degree warming or cooling is just $\pm$3.4% of the absolute equilibrium temperature.
Linearizing the governing equation
Now use a first-order Taylor series expansion to write
$$ \text{OLR} = \tau \sigma T_s^4 $$
$$OLR = \tau \sigma T_s^4 = \tau \sigma \left( T_{eq} + T_s^\prime \right)^4 \approx \tau \sigma \left( T_{eq}^4 + 4 T_{eq}^3 T_s^\prime \right) $$
and the budget for the perturbation temperature thus becomes
$$C \frac{d T_s^\prime}{d t} = -\lambda_0 T_s^\prime$$
where we define
$$\lambda_0 = 4 \tau \sigma T_{eq}^3 $$
Putting in our observational values, we get
End of explanation
tstar = C / lambda_0 # Calculated value of relaxation time constant
seconds_per_year = 60.*60.*24.*365.
print( 'The e-folding time is {:1.2e} seconds or about {:1.0f} years.'.format(tstar, tstar / seconds_per_year))
Explanation: This is actually our first estimate of what is often called the Planck feedback. It is the tendency for a warm surface to cool by increased longwave radiation to space.
It may also be refered to as the "no-feedback" climate response parameter. As we will see, $\lambda_0$ quantifies the sensitivity of the climate system in the absence of any actual feedback processes.
Solve the linear ODE
Now define
$$ t^* = \frac{C}{\lambda_0} $$
This is a positive constant with dimensions of time (seconds). With these definitions the temperature evolves according to
$$ \frac{d T_s^\prime}{d t} = - \frac{T_s^\prime}{t^*}$$
This is one of the simplest ODEs. Hopefully it looks familiar to most of you. It is the equation for an exponential decay process.
We can easily solve for the temperature evolution by integrating from an initial condition $T_s^\prime(0)$:
$$ \int_{T_s^\prime(0)}^{T_s^\prime(t)} \frac{d T_s^\prime}{T_s^\prime} = -\int_0^t \frac{dt}{t^*}$$
$$\ln \bigg( \frac{T_s^\prime(t)}{T_s^\prime(0)} \bigg) = -\frac{t}{t^*}$$
$$T_s^\prime(t) = T_s^\prime(0) \exp \bigg(-\frac{t}{t^*} \bigg)$$
I hope that the mathematics is straightforward for everyone in this class. If not, go through it carefully and make sure you understand each step.
e-folding time for relaxation of global mean temperature
Our model says that surface temperature will relax toward its equilibrium value over a characteristic time scale $t^$. This is an e-folding time* – the time it takes for the perturbation to decay by a factor $1/e = 0.37$
What should this timescale be for the climate system?
To estimate $t^*$ we need a value for the effective heat capacity $C$.
Our "quick and dirty" estimate above used 100 meters of water to set this heat capacity.
What is the right choice for water depth $H$?
That turns out to be an interesting and subtle question. It depends very much on the timescale of the problem
days?
years?
decades?
millenia?
We will revisit this question later in the course. For now, let’s just continue assuming $H = 100$ m (a bit deeper than the typical depth of the surface mixed layer in the oceans).
Now calculate the e-folding time for the surface temperature:
End of explanation
%load_ext version_information
%version_information numpy, matplotlib
Explanation: This is a rather fast timescale relative to other processes that can affect the planetary energy budget.
But notice that the climate feedback parameter $\lambda$ is smaller, the timescale gets longer. We will come back to this later.
<a id='section8'></a>
8. Summary and take-away messages
We looked at the flows of energy in and out of the Earth system.
These are determined by radiation at the top of the Earth's atmosphere.
Any imbalance between shortwave absorption (ASR) and longwave emission (OLR) drives a change in temperature
Using this idea, we built a climate model!
This Zero-Dimensional Energy Balance Model solves for the global, annual mean surface temperature $T_s$
Two key assumptions:
Energy content of the Earth system varies proportionally to $T_s$
The OLR increases as $\tau \sigma T_s^4$ (our simple greenhouse model)
Earth (or any planet) has a well-defined equilibrium temperature at which ASR = OLR, because of the temperature dependence of the outgoing longwave radiation.
The system will tend to relax toward its equilibrium temperature on an $e$-folding timescale that depends on
(1) radiative feedback processes, and
(2) effective heat capacity.
In our estimate, this e-folding time is relatively short. In the absence of other processes that can either increase the heat capacity or lower (in absolute value) the feedback parameter, the Earth would never be very far out of energy balance.
We will quantify this statement more as the term progresses.
<div class="alert alert-success">
[Back to ATM 623 notebook home](../index.ipynb)
</div>
Version information
End of explanation |
2,380 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial - Transformers
An example of how to incorporate the transfomers library from HuggingFace with fastai
Step1: In this tutorial, we will see how we can use the fastai library to fine-tune a pretrained transformer model from the transformers library by HuggingFace. We will use the mid-level API to gather the data. Even if this tutorial is self contained, it might help to check the imagenette tutorial to have a second look on the mid-level API (with a gentle introduction using the higher level APIs) in computer vision.
Importing a transformers pretrained model
First things first, we will need to install the transformers library. If you haven't done it yet, install the library
Step2: We can use several versions of this GPT2 model, look at the transformers documentation for more details. Here we will use the basic version (that already takes a lot of space in memory!) You can change the model used by changing the content of pretrained_weights (if it's not a GPT2 model, you'll need to change the classes used for the model and the tokenizer of course).
Step3: Before we move on to the fine-tuning part, let's have a look at this tokenizer and this model. The tokenizers in HuggingFace usually do the tokenization and the numericalization in one step (we ignore the padding warning for now)
Step4: Like fastai Transforms, the tokenizer has a decode method to give you back a text from ids
Step5: The model can be used to generate predictions (it is pretrained). It has a generate method that expects a batch of prompt, so we feed it our ids and add one batch dimension (there is a padding warning we can ignore as well)
Step6: The predictions, by default, are of length 20
Step7: We can use the decode method (that prefers a numpy array to a tensor)
Step8: Bridging the gap with fastai
Now let's see how we can use fastai to fine-tune this model on wikitext-2, using all the training utilities (learning rate finder, 1cycle policy etc...). First, we import all the text utilities
Step9: Preparing the data
Then we download the dataset (if not present), it comes as two csv files
Step10: Let's have a look at what those csv files look like
Step11: We gather all texts in one numpy array (since it will be easier to use this way with fastai)
Step12: To process this data to train a model, we need to build a Transform that will be applied lazily. In this case we could do the pre-processing once and for all and only use the transform for decoding (we will see how just after), but the fast tokenizer from HuggingFace is, as its name indicates, fast, so it doesn't really impact performance to do it this way.
In a fastai Transform you can define
Step13: Two comments on the code above
Step14: We specify dl_type=LMDataLoader for when we will convert this TfmdLists to DataLoaders
Step15: They look the same but only because they begin and end the same way. We can see the shapes are different
Step16: And we can have a look at both decodes using show_at
Step17: The fastai library expects the data to be assembled in a DataLoaders object (something that has a training and validation dataloader). We can get one by using the dataloaders method. We just have to specify a batch size and a sequence length. We'll train with sequences of size 256 (GPT2 used sequence length 1024, but not everyone has enough GPU RAM for that)
Step18: Note that you may have to reduce the batch size depending on your GPU RAM.
In fastai, as soon as we have a DataLoaders, we can use show_batch to have a look at the data (here texts for inputs, and the same text shifted by one token to the right for validation)
Step19: Another way to gather the data is to preprocess the texts once and for all and only use the transform to decode the tensors to texts
Step20: Now we change the previous Tokenizer like this
Step21: In the <code>encodes</code> method, we still account for the case where we get something that's not already tokenized, just in case we were to build a dataset with new texts using this transform.
Step22: And we can check it still works properly for showing purposes
Step23: Fine-tuning the model
The HuggingFace model will return a tuple in outputs, with the actual predictions and some additional activations (should we want to use them in some regularization scheme). To work inside the fastai training loop, we will need to drop those using a Callback
Step24: Of course we could make this a bit more complex and add some penalty to the loss using the other part of the tuple of predictions, like the RNNRegularizer.
Now, we are ready to create our Learner, which is a fastai object grouping data, model and loss function and handles model training or inference. Since we are in a language model setting, we pass perplexity as a metric, and we need to use the callback we just defined. Lastly, we use mixed precision to save every bit of memory we can (and if you have a modern GPU, it will also make training faster)
Step25: We can check how good the model is without any fine-tuning step (spoiler alert, it's pretty good!)
Step26: This lists the validation loss and metrics (so 26.6 as perplexity is kind of amazing).
Now that we have a Learner we can use all the fastai training loop capabilities
Step27: The learning rate finder curve suggests picking something between 1e-4 and 1e-3.
Step28: Now with just one epoch of fine-tuning and not much regularization, our model did not really improve since it was already amazing. To have a look at some generated texts, let's take a prompt that looks like a wikipedia article
Step29: Article seems to begin with new line and the title between = signs, so we will mimic that
Step30: The prompt needs to be tokenized and numericalized, so we use the same function as before to do this, before we use the generate method of the model. | Python Code:
#|all_slow
Explanation: Tutorial - Transformers
An example of how to incorporate the transfomers library from HuggingFace with fastai
End of explanation
from transformers import GPT2LMHeadModel, GPT2TokenizerFast
Explanation: In this tutorial, we will see how we can use the fastai library to fine-tune a pretrained transformer model from the transformers library by HuggingFace. We will use the mid-level API to gather the data. Even if this tutorial is self contained, it might help to check the imagenette tutorial to have a second look on the mid-level API (with a gentle introduction using the higher level APIs) in computer vision.
Importing a transformers pretrained model
First things first, we will need to install the transformers library. If you haven't done it yet, install the library:
!pip install -Uq transformers
Then let's import what will need: we will fine-tune the GPT2 pretrained model and fine-tune on wikitext-2 here. For this, we need the GPT2LMHeadModel (since we want a language model) and the GPT2Tokenizer to prepare the data.
End of explanation
pretrained_weights = 'gpt2'
tokenizer = GPT2TokenizerFast.from_pretrained(pretrained_weights)
model = GPT2LMHeadModel.from_pretrained(pretrained_weights)
Explanation: We can use several versions of this GPT2 model, look at the transformers documentation for more details. Here we will use the basic version (that already takes a lot of space in memory!) You can change the model used by changing the content of pretrained_weights (if it's not a GPT2 model, you'll need to change the classes used for the model and the tokenizer of course).
End of explanation
ids = tokenizer.encode('This is an example of text, and')
ids
Explanation: Before we move on to the fine-tuning part, let's have a look at this tokenizer and this model. The tokenizers in HuggingFace usually do the tokenization and the numericalization in one step (we ignore the padding warning for now):
End of explanation
tokenizer.decode(ids)
Explanation: Like fastai Transforms, the tokenizer has a decode method to give you back a text from ids:
End of explanation
import torch
t = torch.LongTensor(ids)[None]
preds = model.generate(t)
Explanation: The model can be used to generate predictions (it is pretrained). It has a generate method that expects a batch of prompt, so we feed it our ids and add one batch dimension (there is a padding warning we can ignore as well):
End of explanation
preds.shape,preds[0]
Explanation: The predictions, by default, are of length 20:
End of explanation
tokenizer.decode(preds[0].numpy())
Explanation: We can use the decode method (that prefers a numpy array to a tensor):
End of explanation
from fastai.text.all import *
Explanation: Bridging the gap with fastai
Now let's see how we can use fastai to fine-tune this model on wikitext-2, using all the training utilities (learning rate finder, 1cycle policy etc...). First, we import all the text utilities:
End of explanation
path = untar_data(URLs.WIKITEXT_TINY)
path.ls()
Explanation: Preparing the data
Then we download the dataset (if not present), it comes as two csv files:
End of explanation
df_train = pd.read_csv(path/'train.csv', header=None)
df_valid = pd.read_csv(path/'test.csv', header=None)
df_train.head()
Explanation: Let's have a look at what those csv files look like:
End of explanation
all_texts = np.concatenate([df_train[0].values, df_valid[0].values])
Explanation: We gather all texts in one numpy array (since it will be easier to use this way with fastai):
End of explanation
class TransformersTokenizer(Transform):
def __init__(self, tokenizer): self.tokenizer = tokenizer
def encodes(self, x):
toks = self.tokenizer.tokenize(x)
return tensor(self.tokenizer.convert_tokens_to_ids(toks))
def decodes(self, x): return TitledStr(self.tokenizer.decode(x.cpu().numpy()))
Explanation: To process this data to train a model, we need to build a Transform that will be applied lazily. In this case we could do the pre-processing once and for all and only use the transform for decoding (we will see how just after), but the fast tokenizer from HuggingFace is, as its name indicates, fast, so it doesn't really impact performance to do it this way.
In a fastai Transform you can define:
- an <code>encodes</code> method that is applied when you call the transform (a bit like the forward method in a nn.Module)
- a <code>decodes</code> method that is applied when you call the decode method of the transform, if you need to decode anything for showing purposes (like converting ids to a text here)
- a <code>setups</code> method that sets some inner state of the Transform (not needed here so we skip it)
End of explanation
splits = [range_of(df_train), list(range(len(df_train), len(all_texts)))]
tls = TfmdLists(all_texts, TransformersTokenizer(tokenizer), splits=splits, dl_type=LMDataLoader)
Explanation: Two comments on the code above:
- in <code>encodes</code> we don't use the tokenizer.encode method since it does some additional preprocessing for the model after tokenizing and numericalizing (the part throwing a warning before). Here we don't need any post-processing so it's fine to skip it.
- in <code>decodes</code> we return a TitledStr object and not just a plain string. That's a fastai class that adds a show method to the string, which will allow us to use all the fastai show methods.
You can then group your data with this Transform using a TfmdLists. It has an s in its name because it contains the training and validation set. We indicate the indices of the training set and the validation set with splits (here all the first indices until len(df_train) and then all the remaining indices):
End of explanation
tls.train[0],tls.valid[0]
Explanation: We specify dl_type=LMDataLoader for when we will convert this TfmdLists to DataLoaders: we will use an LMDataLoader since we have a language modeling problem, not the usual fastai TfmdDL.
In a TfmdLists you can access the elements of the training or validation set quite easily:
End of explanation
tls.tfms(tls.train.items[0]).shape, tls.tfms(tls.valid.items[0]).shape
Explanation: They look the same but only because they begin and end the same way. We can see the shapes are different:
End of explanation
show_at(tls.train, 0)
show_at(tls.valid, 0)
Explanation: And we can have a look at both decodes using show_at:
End of explanation
bs,sl = 4,256
dls = tls.dataloaders(bs=bs, seq_len=sl)
Explanation: The fastai library expects the data to be assembled in a DataLoaders object (something that has a training and validation dataloader). We can get one by using the dataloaders method. We just have to specify a batch size and a sequence length. We'll train with sequences of size 256 (GPT2 used sequence length 1024, but not everyone has enough GPU RAM for that):
End of explanation
dls.show_batch(max_n=2)
Explanation: Note that you may have to reduce the batch size depending on your GPU RAM.
In fastai, as soon as we have a DataLoaders, we can use show_batch to have a look at the data (here texts for inputs, and the same text shifted by one token to the right for validation):
End of explanation
def tokenize(text):
toks = tokenizer.tokenize(text)
return tensor(tokenizer.convert_tokens_to_ids(toks))
tokenized = [tokenize(t) for t in progress_bar(all_texts)]
Explanation: Another way to gather the data is to preprocess the texts once and for all and only use the transform to decode the tensors to texts:
End of explanation
class TransformersTokenizer(Transform):
def __init__(self, tokenizer): self.tokenizer = tokenizer
def encodes(self, x):
return x if isinstance(x, Tensor) else tokenize(x)
def decodes(self, x): return TitledStr(self.tokenizer.decode(x.cpu().numpy()))
Explanation: Now we change the previous Tokenizer like this:
End of explanation
tls = TfmdLists(tokenized, TransformersTokenizer(tokenizer), splits=splits, dl_type=LMDataLoader)
dls = tls.dataloaders(bs=bs, seq_len=sl)
Explanation: In the <code>encodes</code> method, we still account for the case where we get something that's not already tokenized, just in case we were to build a dataset with new texts using this transform.
End of explanation
dls.show_batch(max_n=2)
Explanation: And we can check it still works properly for showing purposes:
End of explanation
class DropOutput(Callback):
def after_pred(self): self.learn.pred = self.pred[0]
Explanation: Fine-tuning the model
The HuggingFace model will return a tuple in outputs, with the actual predictions and some additional activations (should we want to use them in some regularization scheme). To work inside the fastai training loop, we will need to drop those using a Callback: we use those to alter the behavior of the training loop.
Here we need to write the event after_pred and replace self.learn.pred (which contains the predictions that will be passed to the loss function) by just its first element. In callbacks, there is a shortcut that lets you access any of the underlying Learner attributes so we can write self.pred[0] instead of self.learn.pred[0]. That shortcut only works for read access, not write, so we have to write self.learn.pred on the right side (otherwise we would set a pred attribute in the Callback).
End of explanation
learn = Learner(dls, model, loss_func=CrossEntropyLossFlat(), cbs=[DropOutput], metrics=Perplexity()).to_fp16()
Explanation: Of course we could make this a bit more complex and add some penalty to the loss using the other part of the tuple of predictions, like the RNNRegularizer.
Now, we are ready to create our Learner, which is a fastai object grouping data, model and loss function and handles model training or inference. Since we are in a language model setting, we pass perplexity as a metric, and we need to use the callback we just defined. Lastly, we use mixed precision to save every bit of memory we can (and if you have a modern GPU, it will also make training faster):
End of explanation
learn.validate()
Explanation: We can check how good the model is without any fine-tuning step (spoiler alert, it's pretty good!)
End of explanation
learn.lr_find()
Explanation: This lists the validation loss and metrics (so 26.6 as perplexity is kind of amazing).
Now that we have a Learner we can use all the fastai training loop capabilities: learning rate finder, training with 1cycle etc...
End of explanation
learn.fit_one_cycle(1, 1e-4)
Explanation: The learning rate finder curve suggests picking something between 1e-4 and 1e-3.
End of explanation
df_valid.head(1)
Explanation: Now with just one epoch of fine-tuning and not much regularization, our model did not really improve since it was already amazing. To have a look at some generated texts, let's take a prompt that looks like a wikipedia article:
End of explanation
prompt = "\n = Unicorn = \n \n A unicorn is a magical creature with a rainbow tail and a horn"
Explanation: Article seems to begin with new line and the title between = signs, so we will mimic that:
End of explanation
prompt_ids = tokenizer.encode(prompt)
inp = tensor(prompt_ids)[None].cuda()
inp.shape
preds = learn.model.generate(inp, max_length=40, num_beams=5, temperature=1.5)
tokenizer.decode(preds[0].cpu().numpy())
Explanation: The prompt needs to be tokenized and numericalized, so we use the same function as before to do this, before we use the generate method of the model.
End of explanation |
2,381 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Kudryavtsev Model
Link to this notebook
Step1: Part 1
We will run the Kudryatsev model for conditions in Barrow, Alaska in a very cold year, 1964. The mean annaul temperature for 1964 was -15.21C, the amplitude over that year was 18.51C. It was close to normal snow year, meaning the average snow thickness over this winter was 0.22m.
Adapt the settings in the Ku model for Barrow 1964. Make sure you request an output file. Save the simulation settings and submit your simulation. Download the model results and open them in Panoply.
Step2: Q1.1
Step3: Q2.1
Step4: Q3.1 | Python Code:
# Load standard Python modules
import numpy as np
import matplotlib.pyplot as plt
# Load PyMT model(s)
import pymt.models
ku = pymt.models.Ku()
Explanation: Kudryavtsev Model
Link to this notebook: https://github.com/csdms/pymt/blob/master/docs/demos/ku.ipynb
Install command: $ conda install notebook pymt_permamodel
Download local copy of notebook:
$ curl -O https://raw.githubusercontent.com/csdms/pymt/master/docs/demos/ku.ipynb
Introduction to Permafrost Processes - Lesson 2 Kudryavtsev Model
This lab has been designed and developed by Irina Overeem and Mark Piper, CSDMS, University of Colorado, CO
with assistance of Kang Wang, Scott Stewart at CSDMS, University of Colorado, CO, and Elchin Jafarov, at Los Alamos National Labs, NM.
These labs are developed with support from NSF Grant 1503559, ‘Towards a Tiered Permafrost Modeling Cyberinfrastructure’
Classroom organization
This lab is the second in a series of introduction to permafrost process modeling, designed for inexperienced users. In this first lesson, we explore the Air Frost Number model and learn to use the CSDMS Python Modeling Toolkit (PyMT). We implemented a basic configuration of the Air Frost Number (as formulated by Nelson and Outcalt in 1987). This series of labs is designed for inexperienced modelers to gain some experience with running a numerical model, changing model inputs, and analyzing model output. Specifically, this first lab looks at what controls permafrost occurrence and compares the occurrence of permafrost in Russia.
Basic theory on the Air Frost Number is presented in Frost Number Model Lecture 1.
This lab is the second in a series of introduction to permafrost process modeling, designed for inexperienced users. In this second lesson, we explore the Kudryavstev model and learn to use the CSDMS Python Modeling Toolkit (PyMT). We implemented the Kudryavstev model (as formulated in Anisimov et al.1997). It is dubbed the Ku-model. This series of labs is designed for inexperienced modelers to gain some experience with running a numerical model, changing model inputs, and analyzing model output. Specifically, this lab looks at what controls soil temperature and active layer thickness and compares model output with observed longterm data collected at permafrost active layer thickness monitoring sites in Fairbanks and Barrow, Alaska.
Basic theory on the Kudryavstev model is presented in Kudryavtsev Model Lecture 2
This lab will likely take ~ 1,5 hours to complete in the classroom. This time assumes you are unfamiiar with the PyMT and need to learn setting parameters, saving runs, downloading data and looking at output (otherwise it will be much faster).
We will use netcdf files for output, this is a standard output from all CSDMS models. If you have no experience with visualizing these files, Panoply software will be helpful. Find instructions on how to use this software.
Learning objectives
Skills
familiarize with a basic configuration of the Kudryavstev Model for 1D (a single location).
hands-on experience with visualizing NetCDF time series with Panoply.
data to model comparisons and how to think about uncertainty in data and model output.
Topical learning objectives:
what are controls on permafrost soil temperature
what is a steady-state model
what are important parameters for calculating active layer thickness
active layer thickness evolution with climate warming in two locations in Alaska
References and More information
Anisimov, O. A., Shiklomanov, N. I., & Nelson, F. E. (1997). Global warming and active-layer thickness: results from transient general circulation models. Global and Planetary Change, 15(3-4), 61-77. DOI:10.1016/S0921-8181(97)00009-X
Sazonova, T.S., Romanovsky, V.E., 2003. A model for regional-scale estimation of temporal and spatial variability of active layer thickness and mean nnaual ground emperatures. Permafrost and periglacial processes 14, 125-139. DOI: 10.1002/ppp.449
Zhang, T., 2005. Influence of the seasonal snow cover on the ground thermal regime: an overview. Review of Geophysics, 43, RG4002.
The Kudryavtsev Model
The Kudryavtsev et al. (1974), or Ku model, presents an
approximate solution of the Stefan problem. The model provides a
steady-state solution under the assumption of sinusoidal air
temperature forcing. It considers snow, vegetation, and soil layers
as thermal damping to variation of air temperature. The layer of
soil is considered to be a homogeneous column with different thermal
properties in the frozen and thawed states. The main outputs are
annual maximum frozen/thaw depth and mean annual temperature at the
top of permafrost (or at the base of the active layer). It can be
applied over a wide variety of climatic conditions.
End of explanation
config_file, run_folder = ku.setup(T_air=-15.21, A_air=18.51)
ku.initialize(config_file, run_folder)
ku.update()
ku.output_var_names
ku.get_value('soil__active_layer_thickness')
Explanation: Part 1
We will run the Kudryatsev model for conditions in Barrow, Alaska in a very cold year, 1964. The mean annaul temperature for 1964 was -15.21C, the amplitude over that year was 18.51C. It was close to normal snow year, meaning the average snow thickness over this winter was 0.22m.
Adapt the settings in the Ku model for Barrow 1964. Make sure you request an output file. Save the simulation settings and submit your simulation. Download the model results and open them in Panoply.
End of explanation
args = ku.setup(h_snow=0.)
ku.initialize(*args)
ku.update()
ku.get_value('soil__active_layer_thickness')
args = ku.setup(h_snow=0.4)
ku.initialize(*args)
ku.update()
ku.get_value('soil__active_layer_thickness')
Explanation: Q1.1: What was the active layer thickness the model predicted?
Sketch a soil profile for winter conditions versus August conditions, indicate where the frozen-unfrozen boundary is in each two cases.
Q1.2: How do you think snow affects the active layer thickness predictions?
Part 2
Run the Kudryatsev model with a range of snow conditions (0 m as the one extreme, and in extremely snowy years, the mean snow thickness over the winter is 0.4m in Barrow). Set these two simulations, run them and dowload the files.
End of explanation
args = ku.setup(vwc_H2O=0.2)
ku.initialize(*args)
ku.update()
ku.get_value('soil__active_layer_thickness')
args = ku.setup(vwc_H2O=0.6)
ku.initialize(*args)
ku.update()
ku.get_value('soil__active_layer_thickness')
Explanation: Q2.1: What happens if there is no snow at all (0 m)?
Q2.2: What is the active layer thickness prediction for a very snowy year?
Part 3
Run the Kudryatsev model with a range of soil water contents. What happens if there is 20% more, and 20% less soil water content?
End of explanation
import pandas
data = pandas.read_csv("https://raw.githubusercontent.com/mcflugen/pymt_ku/master/data/Barrow_1961-2015.csv")
data
maat = data["atmosphere_bottom_air__temperature"]
tamp = data["atmosphere_bottom_air__temperature_amplitude"]
snow_depth = data["snowpack__depth"]
ku = pymt.models.Ku()
args = ku.setup(end_year=2050)
ku.initialize(*args)
n_steps = int((ku.end_time - ku.time) / ku.time_step)
thickness = np.empty(n_steps)
for i in range(n_steps):
ku.set_value("atmosphere_bottom_air__temperature", maat.values[i])
ku.set_value("atmosphere_bottom_air__temperature_amplitude", tamp.values[i])
ku.set_value("snowpack__depth", snow_depth.values[i])
ku.update()
thickness[i] = ku.get_value('soil__active_layer_thickness')
plt.plot(thickness) # This should be the same as the above but it's NOT! But now it is. BOOM!
Explanation: Q3.1: Is this selected range of 20% realistic for soils in permafrost regions?
Q3.2: From the theory presented in the associated lecture notes, how do you think soil water content in summer affects the soil temperature?
Part 4
Posted here are time-series for climate conditions for both Barrow and Fairbanks, Alaska. Time-series are annual values and run from 1961-2015, the data include mean annual temperature (MAAT), temperature amplitude (TAMP) and winter-average snow depth (SD).
These are text files, so you can plot them in your own favorite software or programming language.
Choose which case you want to run, you will now run a 55 year simulation.
End of explanation |
2,382 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Real-time music auto-tagging
In this tutorial, we use Essentia's TensorFlow integration to perform auto-tagging in real-time.
Additionally, this serves as an example of TensorFlow inference in streaming mode and can be easily adapted to work offline.
Setup
To install Essentia with TensorFlow support, refer to the Setup section of our previous Music auto-tagging, classification, and embedding extraction tutorial for instructions.
Additionally, we rely on the pysoundcard package to capture the audio loopback of the system and feed Essentia in real-time. This way we can easily test our models with any music coming from our local player or browser.
Step1: Let's download MusiCNN, one of our auto-tagging models. This and more models are available from the Essentia models' site.
Step2: Then we import the required packages and Essentia algorithms.
In this case, we use the TensorFlow functionalities in streaming mode.
Step3: Define the analysis parameters.
To make this demo work in real-time, we tweaked some of the analysis parameters of MusiCNN.
While it was trained on patches of size 187 (\~3 seconds) we set patch_size to 64 (\~1 second) to increase the prediction rate.
You can experiment with the patch_size and display_size parameters to modify the prediction rate to your taste.
Step4: Instantiate the algorithms. With this, we create a network similar to the one used inside TensorflowPredictMusiCNN, the wrapper algorithm presented in the previous tutorial. However, by instantiating the algorithms separately we gain additional control required for real-time usage.
Step5: Connect the algorithms. We also store the mel-spectrograms in the Pool for visualization purposes.
Step6: Create a callback function that will be called every time the audio buffer is ready to process.
Step7: Initialize the plots and start processing the loopback stream. | Python Code:
!pip -q install pysoundcard
Explanation: Real-time music auto-tagging
In this tutorial, we use Essentia's TensorFlow integration to perform auto-tagging in real-time.
Additionally, this serves as an example of TensorFlow inference in streaming mode and can be easily adapted to work offline.
Setup
To install Essentia with TensorFlow support, refer to the Setup section of our previous Music auto-tagging, classification, and embedding extraction tutorial for instructions.
Additionally, we rely on the pysoundcard package to capture the audio loopback of the system and feed Essentia in real-time. This way we can easily test our models with any music coming from our local player or browser.
End of explanation
!wget -q https://essentia.upf.edu/models/autotagging/msd/msd-musicnn-1.pb
!wget -q https://essentia.upf.edu/models/autotagging/msd/msd-musicnn-1.json
Explanation: Let's download MusiCNN, one of our auto-tagging models. This and more models are available from the Essentia models' site.
End of explanation
import json
from essentia.streaming import (
VectorInput,
FrameCutter,
TensorflowInputMusiCNN,
VectorRealToTensor,
TensorToPool,
TensorflowPredict,
PoolToTensor,
TensorToVectorReal
)
from essentia import Pool, run, reset
from IPython import display
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import softmax
import soundcard as sc
%matplotlib nbagg
Explanation: Then we import the required packages and Essentia algorithms.
In this case, we use the TensorFlow functionalities in streaming mode.
End of explanation
with open('msd-musicnn-1.json', 'r') as json_file:
metadata = json.load(json_file)
model_file = 'msd-musicnn-1.pb'
input_layer = metadata['schema']['inputs'][0]['name']
output_layer = metadata['schema']['outputs'][0]['name']
classes = metadata['classes']
n_classes = len(classes)
# Analysis parameters.
sample_rate = 16000
frame_size = 512
hop_size = 256
n_bands = 96
patch_size = 64
display_size = 10
buffer_size = patch_size * hop_size
Explanation: Define the analysis parameters.
To make this demo work in real-time, we tweaked some of the analysis parameters of MusiCNN.
While it was trained on patches of size 187 (\~3 seconds) we set patch_size to 64 (\~1 second) to increase the prediction rate.
You can experiment with the patch_size and display_size parameters to modify the prediction rate to your taste.
End of explanation
buffer = np.zeros(buffer_size, dtype='float32')
vimp = VectorInput(buffer)
fc = FrameCutter(frameSize=frame_size, hopSize=hop_size)
tim = TensorflowInputMusiCNN()
vtt = VectorRealToTensor(shape=[1, 1, patch_size, n_bands],
lastPatchMode='discard')
ttp = TensorToPool(namespace=input_layer)
tfp = TensorflowPredict(graphFilename=model_file,
inputs=[input_layer],
outputs=[output_layer])
ptt = PoolToTensor(namespace=output_layer)
ttv = TensorToVectorReal()
pool = Pool()
Explanation: Instantiate the algorithms. With this, we create a network similar to the one used inside TensorflowPredictMusiCNN, the wrapper algorithm presented in the previous tutorial. However, by instantiating the algorithms separately we gain additional control required for real-time usage.
End of explanation
vimp.data >> fc.signal
fc.frame >> tim.frame
tim.bands >> vtt.frame
tim.bands >> (pool, 'melbands')
vtt.tensor >> ttp.tensor
ttp.pool >> tfp.poolIn
tfp.poolOut >> ptt.pool
ptt.tensor >> ttv.tensor
ttv.frame >> (pool, output_layer)
Explanation: Connect the algorithms. We also store the mel-spectrograms in the Pool for visualization purposes.
End of explanation
def callback(data):
buffer[:] = data.flatten()
# Generate predictions.
reset(vimp)
run(vimp)
# Update the mel-spectrograms and activations buffers.
mel_buffer[:] = np.roll(mel_buffer, -patch_size)
mel_buffer[:, -patch_size:] = pool['melbands'][-patch_size:, :].T
img_mel.set_data(mel_buffer)
act_buffer[:] = np.roll(act_buffer, -1)
act_buffer[:, -1] = softmax(20 * pool[output_layer][-1, :].T)
img_act.set_data(act_buffer)
f.canvas.draw()
Explanation: Create a callback function that will be called every time the audio buffer is ready to process.
End of explanation
mel_buffer = np.zeros([n_bands, patch_size * display_size])
act_buffer = np.zeros([n_classes, display_size])
pool.clear()
f, ax = plt.subplots(1, 2, figsize=[9.6, 7])
f.canvas.draw()
ax[0].set_title('Mel Spectrogram')
img_mel = ax[0].imshow(mel_buffer, aspect='auto',
origin='lower', vmin=0, vmax=6)
ax[0].set_xticks([])
ax[1].set_title('Activations')
img_act = ax[1].matshow(act_buffer, aspect='0.5', vmin=0, vmax=1)
ax[1].set_xticks([])
ax[1].yaxis.set_ticks_position('right')
plt.yticks(np.arange(n_classes), classes, fontsize=6)
# Capture and process the speakers loopback.
with sc.all_microphones(include_loopback=True)[0].recorder(samplerate=sample_rate) as mic:
while True:
callback(mic.record(numframes=buffer_size).mean(axis=1))
Explanation: Initialize the plots and start processing the loopback stream.
End of explanation |
2,383 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python for Probability, Statistics, and Machine Learning
Step1: Conditional Expectation and Mean Square Error
In this section, we work through a detailed example using conditional
expectation and optimization methods. Suppose we have two fair six-sided die
($X$ and $Y$) and we want to measure the sum of the two variables as $Z=X+Y$.
Further, let's suppose that given $Z$, we want the best estimate of $X$ in the
mean-squared-sense. Thus, we want to minimize the following
Step2: With all that setup we can now use basic calculus to minimize the
objective function $J$,
Step3: Programming Tip.
Sympy has a stats module that can do some basic work with expressions
involving probability densities and expectations. The above code uses its E
function to compute the expectation.
This says that $z/2$ is the MSE estimate of $X$ given $Z$ which means
geometrically (interpreting the MSE as a squared distance weighted by the
probability mass function) that $z/2$ is as close to $x$ as we are going to
get for a given $z$.
Let's look at the same problem using the conditional expectation operator $
\mathbb{E}(\cdot|z) $ and apply it to our definition of $Z$. Then
$$
\mathbb{E}(z|z)=\mathbb{E}(x+y|z)=\mathbb{E}(x|z)+\mathbb{E}(y|z)=z
$$
using the linearity of the expectation. Now, since by the
symmetry of the problem (i.e., two identical die), we have
$$
\mathbb{E}(x|z)=\mathbb{E}(y|z)
$$
we can plug this in and solve
$$
2 \mathbb{E}(x|z)=z
$$
which once again gives,
$$
\mathbb{E}(x|z) =\frac{z}{2}
$$
Step4: <!-- dom
Step5: Programming Tip.
The stats.sample(x, S.Eq(z,7)) function call samples the x variable subject
to a condition on the z variable. In other words, it generates random samples
of x die, given that the sum of the outcomes of that die and the y die add
up to z==7.
Please run the above code repeatedly in the Jupyter/IPython
Notebook corresponding to this section until you have convinced
yourself that the $\mathbb{E}(x|z)$ gives the lower MSE every
time. To push this reasoning, let's consider the case where the
die is so biased so that the outcome of 6 is ten times more
probable than any of the other outcomes. That is,
$$
\mathbb{P}(6) = 2/3
$$
whereas $\mathbb{P}(1)=\mathbb{P}(2)=\ldots=\mathbb{P}(5)=1/15$.
We can explore this using Sympy as in the following
Step6: As before, we construct the sum of the two dice, and plot the
corresponding probability mass function in Figure. As compared with Figure, the probability mass has been shifted
away from the smaller numbers.
Step7: <!-- dom
Step8: Now that we have $\mathbb{E}(x|z=7) = 5$, we can generate
samples as before and see if this gives the minimum MSE. | Python Code:
import numpy as np
np.random.seed(12345)
Explanation: Python for Probability, Statistics, and Machine Learning
End of explanation
import sympy as S
from sympy.stats import density, E, Die
x=Die('D1',6) # 1st six sided die
y=Die('D2',6) # 2nd six sides die
a=S.symbols('a')
z = x+y # sum of 1st and 2nd die
J = E((x-a*(x+y))**2) # expectation
print S.simplify(J)
Explanation: Conditional Expectation and Mean Square Error
In this section, we work through a detailed example using conditional
expectation and optimization methods. Suppose we have two fair six-sided die
($X$ and $Y$) and we want to measure the sum of the two variables as $Z=X+Y$.
Further, let's suppose that given $Z$, we want the best estimate of $X$ in the
mean-squared-sense. Thus, we want to minimize the following:
$$
J(\alpha) = \sum ( x - \alpha z )^2 \mathbb{P}(x,z)
$$
where $\mathbb{P}$ is the probability mass function for this problem.
The idea is that when we have solved this problem, we will have a function of
$Z$ that is going to be the minimum MSE estimate of $X$. We can substitute in
for $Z$ in $J$ and get:
$$
J(\alpha) = \sum ( x - \alpha (x+y) )^2 \mathbb{P}(x,y)
$$
Let's work out the steps in Sympy in the following:
End of explanation
sol,=S.solve(S.diff(J,a),a) # using calculus to minimize
print sol # solution is 1/2
Explanation: With all that setup we can now use basic calculus to minimize the
objective function $J$,
End of explanation
%matplotlib inline
from numpy import arange,array
from matplotlib.pylab import subplots, cm
from sympy import Integer
fig,ax = subplots()
v = arange(1,7) + arange(1,7)[:,None]
foo=lambda i: density(z)[Integer(i)].evalf() # some tweaks to get a float out
Zmass=array(map(foo,v.flat),dtype=float).reshape(6,6)
pc=ax.pcolor(arange(1,8),arange(1,8),Zmass,cmap=cm.gray)
_=ax.set_xticks([(i+0.5) for i in range(1,7)])
_=ax.set_xticklabels([str(i) for i in range(1,7)])
_=ax.set_yticks([(i+0.5) for i in range(1,7)])
_=ax.set_yticklabels([str(i) for i in range(1,7)])
for i in range(1,7):
for j in range(1,7):
_=ax.text(i+.5,j+.5,str(i+j),fontsize=18,fontweight='bold',color='goldenrod')
_=ax.set_title(r'Probability Mass for $Z$',fontsize=18)
_=ax.set_xlabel('$X$ values',fontsize=18)
_=ax.set_ylabel('$Y$ values',fontsize=18);
cb=fig.colorbar(pc)
_=cb.ax.set_title(r'Probability',fontsize=12)
#fig.savefig('fig-probability/Conditional_expectation_MSE_001.png')
Explanation: Programming Tip.
Sympy has a stats module that can do some basic work with expressions
involving probability densities and expectations. The above code uses its E
function to compute the expectation.
This says that $z/2$ is the MSE estimate of $X$ given $Z$ which means
geometrically (interpreting the MSE as a squared distance weighted by the
probability mass function) that $z/2$ is as close to $x$ as we are going to
get for a given $z$.
Let's look at the same problem using the conditional expectation operator $
\mathbb{E}(\cdot|z) $ and apply it to our definition of $Z$. Then
$$
\mathbb{E}(z|z)=\mathbb{E}(x+y|z)=\mathbb{E}(x|z)+\mathbb{E}(y|z)=z
$$
using the linearity of the expectation. Now, since by the
symmetry of the problem (i.e., two identical die), we have
$$
\mathbb{E}(x|z)=\mathbb{E}(y|z)
$$
we can plug this in and solve
$$
2 \mathbb{E}(x|z)=z
$$
which once again gives,
$$
\mathbb{E}(x|z) =\frac{z}{2}
$$
End of explanation
import numpy as np
from sympy import stats
# Eq constrains Z
samples_z7 = lambda : stats.sample(x, S.Eq(z,7))
#using 6 as an estimate
mn= np.mean([(6-samples_z7())**2 for i in range(100)])
#7/2 is the MSE estimate
mn0= np.mean([(7/2.-samples_z7())**2 for i in range(100)])
print 'MSE=%3.2f using 6 vs MSE=%3.2f using 7/2 ' % (mn,mn0)
Explanation: <!-- dom:FIGURE: [fig-probability/Conditional_expectation_MSE_001.png, width=500 frac=0.85] The values of $Z$ are in yellow with the corresponding values for $X$ and $Y$ on the axes. The gray scale colors indicate the underlying joint probability density. <div id="fig:Conditional_expectation_MSE_001"></div> -->
<!-- begin figure -->
<div id="fig:Conditional_expectation_MSE_001"></div>
<p>The values of $Z$ are in yellow with the corresponding values for $X$ and $Y$ on the axes. The gray scale colors indicate the underlying joint probability density.</p>
<img src="fig-probability/Conditional_expectation_MSE_001.png" width=500>
<!-- end figure -->
which is equal to the estimate we just found by minimizing the MSE.
Let's explore this further with Figure. Figure shows the values of $Z$ in yellow with
the corresponding values for $X$ and $Y$ on the axes. Suppose $z=2$, then the
closest $X$ to this is $X=1$, which is what $\mathbb{E}(x|z)=z/2=1$ gives. What
happens when $Z=7$? In this case, this value is spread out diagonally along the
$X$ axis so if $X=1$, then $Z$ is 6 units away, if $X=2$, then $Z$ is 5 units
away and so on.
Now, back to the original question, if we had $Z=7$ and we wanted
to get as close as we could to this using $X$, then why not choose
$X=6$ which is only one unit away from $Z$? The problem with doing
that is $X=6$ only occurs 1/6 of the time, so we are not likely to
get it right the other 5/6 of the time. So, 1/6 of the time we are
one unit away but 5/6 of the time we are much more than one unit
away. This means that the MSE score is going to be worse. Since
each value of $X$ from 1 to 6 is equally likely, to play it safe,
we choose $7/2$ as the estimate, which is what the conditional
expectation suggests.
We can check this claim with samples using Sympy below:
End of explanation
# here 6 is ten times more probable than any other outcome
x=stats.FiniteRV('D3',{1:1/15., 2:1/15.,
3:1/15., 4:1/15.,
5:1/15., 6:2/3.})
Explanation: Programming Tip.
The stats.sample(x, S.Eq(z,7)) function call samples the x variable subject
to a condition on the z variable. In other words, it generates random samples
of x die, given that the sum of the outcomes of that die and the y die add
up to z==7.
Please run the above code repeatedly in the Jupyter/IPython
Notebook corresponding to this section until you have convinced
yourself that the $\mathbb{E}(x|z)$ gives the lower MSE every
time. To push this reasoning, let's consider the case where the
die is so biased so that the outcome of 6 is ten times more
probable than any of the other outcomes. That is,
$$
\mathbb{P}(6) = 2/3
$$
whereas $\mathbb{P}(1)=\mathbb{P}(2)=\ldots=\mathbb{P}(5)=1/15$.
We can explore this using Sympy as in the following:
End of explanation
z = x + y
foo=lambda i: density(z)[S.Integer(i)].evalf() # some tweaks to get a float out
v = np.arange(1,7) + np.arange(1,7)[:,None]
Zmass=np.array(map(foo,v.flat),dtype=float).reshape(6,6)
from matplotlib.pylab import subplots, cm
fig,ax=subplots()
pc=ax.pcolor(np.arange(1,8),np.arange(1,8),Zmass,cmap=cm.gray)
_=ax.set_xticks([(i+0.5) for i in range(1,7)])
_=ax.set_xticklabels([str(i) for i in range(1,7)])
_=ax.set_yticks([(i+0.5) for i in range(1,7)])
_=ax.set_yticklabels([str(i) for i in range(1,7)])
for i in range(1,7):
for j in range(1,7):
_=ax.text(i+.5,j+.5,str(i+j),fontsize=18,fontweight='bold',color='goldenrod')
_=ax.set_title(r'Probability Mass for $Z$; Nonuniform case',fontsize=16)
_=ax.set_xlabel('$X$ values',fontsize=18)
_=ax.set_ylabel('$Y$ values',fontsize=18);
cb=fig.colorbar(pc)
_=cb.ax.set_title(r'Probability',fontsize=12)
#fig.savefig('fig-probability/Conditional_expectation_MSE_002.png')
Explanation: As before, we construct the sum of the two dice, and plot the
corresponding probability mass function in Figure. As compared with Figure, the probability mass has been shifted
away from the smaller numbers.
End of explanation
E(x, S.Eq(z,7)) # conditional expectation E(x|z=7)
Explanation: <!-- dom:FIGURE: [fig-probability/Conditional_expectation_MSE_002.png, width=500 frac=0.85] The values of $Z$ are in yellow with the corresponding values for $X$ and $Y$ on the axes. <div id="fig:Conditional_expectation_MSE_002"></div> -->
<!-- begin figure -->
<div id="fig:Conditional_expectation_MSE_002"></div>
<p>The values of $Z$ are in yellow with the corresponding values for $X$ and $Y$ on the axes.</p>
<img src="fig-probability/Conditional_expectation_MSE_002.png" width=500>
<!-- end figure -->
Let's see what the conditional expectation says about how we can estimate $X$
from $Z$.
End of explanation
samples_z7 = lambda : stats.sample(x, S.Eq(z,7))
#using 6 as an estimate
mn= np.mean([(6-samples_z7())**2 for i in range(100)])
#5 is the MSE estimate
mn0= np.mean([(5-samples_z7())**2 for i in range(100)])
print 'MSE=%3.2f using 6 vs MSE=%3.2f using 5 ' % (mn,mn0)
Explanation: Now that we have $\mathbb{E}(x|z=7) = 5$, we can generate
samples as before and see if this gives the minimum MSE.
End of explanation |
2,384 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fibonacci Stretch
Step1: You can also jump to Part 6 for more audio examples.
Part 1 - Representing rhythm as symbolic data
1.1 Rhythms as arrays
The main musical element we're going to play with here is rhythm (in particular, rhythmic ratios and their similarities). The base rhythm that we're going to focus on is the tresillo ("three"-side) of the son clave pattern, which sounds like this
Step2: ...and looks something like this in Western music notation
Step3: Briefly
Step4: Note that both the music notation and the array are symbolic representations of the rhythm; the rhythm is abstracted so that there is no information about tempo, dynamics, timbre, or other musical information. All we have is the temporal relationship between each note in the sequence (as well as the base assumption that the notes are evenly spaced).
Let's hear (and visualize) an example of how this rhythm sounds in more concrete terms
Step5: 1.2 Rhythmic properties
To work with rhythm in an analytical fashion, we'll need to define some properties of a given rhythmic sequence.
Let's define pulses as the number of onsets in a sequence (i.e. the number of 1s as opposed to 0s), and steps as the total number of elements in the sequence
Step6: We can listen to the pulses and steps together
Step7: You can follow along with the printed array and hear that every 1 corresponds to a pulse, and every 0 to a step.
In addition, let's define pulse lengths as the number of steps that each pulse lasts
Step8: Note that the tresillo rhythm's pulse lengths all fall along the Fibonacci sequence. This allows us do some pretty fun things, as we'll see in a bit. But first let's take a step back.
Part 2 - Fibonacci rhythms
2.1 Fibonacci numbers
The Fibonacci sequence is a particular sequence in which each value is the sum of the two preceding values. We can define a function in Python that gives us the nth Fibonacci number
Step9: And the first 20 numbers in the sequence are
Step10: The Fibonacci sequence is closely linked to the golden ratio in many ways, including the fact that as we go up the sequence, the ratio between successive numbers gets closer and closer to the golden ratio. (If you're interested, Vijay Iyer's article Strength in numbers
Step11: We can also use the golden ratio to find the index of a Fibonacci number
Step12: 2.2 Using Fibonacci numbers to manipulate rhythms
Recall our tresillo rhythm
Step13: We might classify it as a Fibonacci rhythm, since every one of its pulse lengths is a Fibonacci number. If we wanted to expand that rhythm along the Fibonacci sequence, what would that look like?
An intuitive (and, as it turns out, musically satisfying) method would be to take every pulse length and simply replace it with the Fibonacci number that follows it. So in our example, the 3s become 5s, and the 2 becomes 3.
Step14: We'll also want to be able to contract rhythms along the Fibonacci sequence (i.e. choose numbers in decreasing order instead of increasing order), as well as specify how many Fibonacci numbers away we want to end up.
We can generalize this expansion and contraction into a single function that can scale pulse lengths
Step15: Of course, once we have these scaled pulse lengths, we'll want to be able to convert them back into rhythms, in our original array format
Step16: This is exactly the kind of rhythmic expansion and contraction that the Vijay Iyer Trio explore in their renditions of "Mystic Brew" and "Human Nature (Trio Extension)".
Next up, let's begin working with some actual audio!
Part 3 - Mapping rhythm to audio
Part of the beauty of working with rhythms in a symbolic fashion is that once we set things up, we can apply them to any existing audio track.
To properly map the relationship between a rhythmic sequence and an audio representation of a piece of music, we'll have to do some feature extraction, that is, teasing out specific attributes of the music by analyzing the audio signal.
Our goal is to create a musically meaningful relationship between our symbolic rhythmic data and the audio track we want to manipulate.
3.1 Estimating tempo
First we'll load up our source audio file. For this example we'll work with Michael Jackson's "Human Nature", off of his 1982 album Thriller
Step17: An important feature we want to extract from the audio is tempo (i.e. the time interval between steps). Let's estimate that using the librosa.beat.tempo method (which requires us to first detect onsets, or [])
Step18: <div style="color
Step19: And let's listen to our extracted beats with the original audio track
Step20: 3.3 From beats to measures
In order to map our tresillo rhythm to the audio in a musically meaningful way, we'll need to group beats into measures. From listening to the above example we can hear that every beat corresponds to a quarter note; thus, we'll set beats_per_measure to 4
Step21: Using beats_per_measure we can calculate the times for the start of each measure
Step22: Note that we're working in samples now, as this is the unit that the audio data is actually stored in; when we loaded up the audio track, we essentially read in a large array of samples. The sample rate, which we defined as sr, tells us how many samples there are per second.
Thus, it's a simple matter to convert samples to times whenever we need to
Step23: We can visualize, and listen to, the measure and beat markers along with the original waveform
Step24: 3.4 Putting it all together
Step25: For this example, we want the rhythm to last an entire measure as well, so we'll set steps_per_measure to be the number of steps in the rhythm (in this case, 8)
Step26: With these markers in place, we can now overlay the tresillo rhythm onto each measure and listen to the result
Step27: The clicks for measures, pulses, and steps, overlap with each other at certain points. While you can hear this based on the fact that each click is at a different frequency, it can be hard to tell visually in the above figure. We can make this more apparent by plotting each set of clicks with a different color.
In the below figure, each measure is denoted by a large <span style="color
Step28: You can hear that the tresillo rhythm's pulses line up with the harmonic rhythm of "Human Nature"; generally, we want to pick rhythms and audio tracks that have at least some kind of musical relationship.
(We could actually try to estimate rhythmic patterns based on onsets and tempo, but that's for another time.)
Part 4 - Time-stretching audio
Now that we've put the symbolic rhythm and source audio together, we're ready to begin manipulating the audio and doing some actual stretching!
4.1 Target rhythms
First, we'll define the target rhythm that we want the audio to be mapped to
Step29: 4.2 Pulse ratios
Given an original rhythm and target rhythm, we can compute their pulse ratios, that is, the ratio between each of their pulses
Step30: 4.3 Modifying measures by time-stretching
Since we're treating our symbolic rhythms as having the duration of one measure, it makes sense to start by modifying a single measure.
Basically what we want to do is
Step31: You'll notice that in the part where we choose stretch methods, there's a function called euclidean_stretch that we haven't defined. We'll get to that in just a second! For now, let's just keep that in the back of our heads, and not worry about it too much, so that we can hear what our modification method sounds like when applied to the first measure of "Human Nature"
Step32: It doesn't sound like there's much difference between the stretched version and the original, does it?
4.4 Modifying an entire track by naively time-stretching each pulse
To get a better sense, let's apply the modification to the entire audio track
Step33: Listening to the whole track, only perceptible difference is that the last two beats of each measure are slightly faster. If we look at the pulse ratios again
Step34: ... we can see that this makes sense, as we're time-stretching the first two pulses by the same amount, and then time-stretching the last pulse by a different amount.
(Note that while we're expanding our original rhythm along the Fibonacci sequence, this actually corresponds to a contraction when time-stretching. This is because we want to maintain the original tempo, so we're trying to fit more steps into the same timespan.)
4.5 Overlaying target rhythm clicks
We can get some more insight if we sonify the target rhythm's clicks and overlay it onto our modified track
Step35: This gets to the heart of the problem
Step36: Looking at the first pulses of the original rhythm and target rhythm, we want to turn
[1 0 0]
into
[1 0 0 0 0].
To accomplish this, we'll turn to the concept of Euclidean rhythms.
5.2 Generating Euclidean rhythms using Bjorklund's algorithm
A Euclidean rhythm is a type of rhythm that can be generated based upon the Euclidean algorithm for calculating the greatest common divisor of two numbers.
Step37: The concept of Euclidean rhythms was first introduced by Godfried Toussaint in his 2004 paper The Euclidean Algorithm Generates Traditional Musical Rhythms.
The algorithm for generating these rhythms is actually Bjorklund's algorithm, first described by E. Bjorklund in his 2003 paper The Theory of Rep-Rate Pattern Generation in the SNS Timing System, which deals with neutron accelerators in nuclear physics. Here we use Brian House's Python implementation of Bjorklund's algorithm; you can find the source code on GitHub.
It turns out that our tresillo rhythm is an example of a Euclidean rhythm. We can generate it by plugging in the number of pulses and steps into Bjorklund's algorithm
Step38: 5.3 Using Euclidean rhythms to subdivide pulses
Say we want to stretch a pulse [1 0 0] so that it resembles another pulse [1 0 0 0 0]
Step39: We want to know how much to stretch each subdivision. To do this, we'll convert these single pulses into rhythms of their own. First, we'll treat each step in the original pulse as an onset
Step40: And as mentioned before, we'll use Bjorklund's algorithm to generate the target pulse's rhythm. The trick here is to use the number of steps in the original pulse as the number of pulses for the target pulse rhythm (hence the conversion to onsets earlier)
Step41: You might have noticed that this rhythm is exactly the same as the rhythm produced by contracting the tresillo rhythm along the Fibonacci sequence by a factor of 1
Step42: And it's true that there is some significant overlap between Euclidean rhythms and Fibonacci rhythms. The advantage of working with Euclidean rhythms here is that they work with any number of pulses and steps, not just ones that are Fibonacci numbers.
To summarize
Step43: The resulting pulse ratios are
Step44: ... which doesn't intuitively look like it would produce something any different from what we tried before. However, we might perceive a greater difference because
Step45: Let's take a listen to how it sounds
Step46: Much better! With clicks
Step47: As you can hear, the modified track's rhythm is in line with the clicks, and sounds noticeably different from the original song. This is a pretty good place to end up!
Part 6 - Fibonacci stretch
Step48: Now we can simply feed the function a path to an audio file (as well as any parameters we want to customize).
This is the exact method that's applied to the sneak peek at the final result up top. The only difference is that we use a 90-second excerpt rather than our original 30-second one
Step49: And indeed we get the exact same result.
6.2 Examples
Step50: As mentioned in part 2.2, we can contract rhythms as well using negative numbers as our stretch_factor. Let's try that with "Chan Chan" by the Buena Vista Social Club
Step51: (Note that although we do end up with a perceptible difference (the song now sounds like it's in 7/8), it should actually sound like it's in 5/8, since [1 0 0 1 0 0 1 0] is getting compressed to [1 0 1 0 1]. This is an implementation detail with the Euclidean stretch method that I need to fix.)
6.3 Examples
Step52: We can define both a custom target rhythm as well. In addition, neither original_rhythm nor target_rhythm have to be Fibonacci rhythms for the stretch algorithm to work (although with this implementation they do both have to have the same number of pulses).
Let's try that out with the same verse, going from an original rhythm with 8 steps (i.e. in 4/4 meter) to a target rhythm with 10 steps (i.e. in 5/4 meter)
Step53: As another example, we can give a swing feel to the first movement of Mozart's "Eine kleine Nachtmusik" (K. 525), as performed by A Far Cry
Step54: It works pretty decently until around 0
Step55: 6.4 Examples | Python Code:
import IPython.display as ipd
ipd.Audio("../data/out_humannature_90s_stretched.mp3", rate=44100)
Explanation: Fibonacci Stretch: An Exploration Through Code
by David Su
This notebook and its associated code are also available on GitHub.
Contents
Introduction
A sneak peek at the final result
Part 1 - Representing rhythm as symbolic data
1.1 Rhythms as arrays
1.2 Rhythmic properties
Part 2 - Fibonacci rhythms
2.1 Fibonacci numbers
2.2 Using Fibonacci numbers to manipulate rhythms
Part 3 - Mapping rhythm to audio
3.1 Estimating tempo
3.2 From tempo to beats
3.3 From beats to measures
3.4 Putting it all together: mapping symbolic rhythms to audio signals
Part 4 - Time-stretching audio
4.1 Target rhythms
4.2 Pulse ratios
4.3 Modifying measures by time-stretching
4.4 Modifying an entire track by naively time-stretching each pulse
4.5 Overlaying target rhythm clicks
Part 5- Euclidean stretch
5.1 Subdividing pulses
5.2 Generating Euclidean rhythms using Bjorklund's algorithm
5.3 Using Euclidean rhythms to subdivide pulses
5.4 The Euclidean stretch algorithm
Part 6 - Fibonacci stretch: implementation and examples
6.1 Implementation
6.2 Examples: customizing stretch factors
6.3 Examples: customizing orginal and target rhythms
6.4 Examples: customizing input beats per measure
Part 7 - Final thoughts
7.1 Fibonacci stretch as a creative tool
7.2 Rhythm perception
7.3 Implementation improvements
7.4 Future directions
Introduction
The goal of this notebook is to investigate and explore a method of time-stretching an existing audio track such that its rhythmic pulses become expanded or contracted along the Fibonacci sequence, using Euclidean rhythms as the basis for modification. For lack of a better term, let's call it Fibonacci stretch.
Inspiration for this came initially from Vijay Iyer's article on Fibonacci numbers and musical rhythm as well as his trio's renditions of "Mystic Brew" and "Human Nature"; we'll use original version of the latter as an example throughout. We'll also touch upon Godfried Toussaint's work on Euclidean rhythms and Bjorklund's algorithm, both of which are intimately related.
A sneak peek at the final result
This is where we'll end up:
End of explanation
ipd.Audio("../data/tresillo_rhythm.mp3", rate=44100)
Explanation: You can also jump to Part 6 for more audio examples.
Part 1 - Representing rhythm as symbolic data
1.1 Rhythms as arrays
The main musical element we're going to play with here is rhythm (in particular, rhythmic ratios and their similarities). The base rhythm that we're going to focus on is the tresillo ("three"-side) of the son clave pattern, which sounds like this:
End of explanation
%matplotlib inline
import math # Standard library imports
import IPython.display as ipd, librosa, librosa.display, numpy as np, matplotlib.pyplot as plt # External libraries
import pardir; pardir.pardir() # Allow imports from parent directory
import bjorklund # Fork of Brian House's implementation of Bjorklund's algorithm https://github.com/brianhouse/bjorklund
import fibonaccistretch # Functions pertaining specifically to Fibonacci stretch; much of what we'll use here
Explanation: ...and looks something like this in Western music notation:
We can convert that into a sequence of bits, with each 1 representing an onset, and 0 representing a rest (similar to the way a sequencer works). Doing so yields this:
[1 0 0 1 0 0 1 0]
...which we can conveniently store as a list in Python. Actually, this is a good time to start diving directly into code. First, let's import all the Python libraries we need:
End of explanation
tresillo_rhythm = np.array([1, 0, 0, 1, 0, 0, 1, 0])
print(tresillo_rhythm)
Explanation: Briefly: we're using IPython.display to do audio playback, librosa for the bulk of the audio processing and manipulation (namely time-stretching), numpy to represent data, and matplotlib to plot the data.
Here's our list of bits encoding the tresillo sequence in Python (we'll use numpy arrays for consistency with later when when we deal with both audio signals and plotting visualizations):
End of explanation
# Generate tresillo clicks
sr = 44100
tresillo_click_interval = 0.25 # in seconds
tresillo_click_times = np.array([i * tresillo_click_interval for i in range(len(tresillo_rhythm))
if tresillo_rhythm[i] != 0])
tresillo_clicks = librosa.clicks(times=tresillo_click_times, click_freq=2000.0, sr=sr) # Generate clicks according to the rhythm
# Plot clicks and click times
plt.figure(figsize=(8, 2))
librosa.display.waveplot(tresillo_clicks, sr=sr)
plt.vlines(tresillo_click_times + 0.005, -1, 1, color="r") # Add tiny offset so the first line shows up
plt.xticks(np.arange(0, 1.75, 0.25))
# Render clicks as audio
ipd.Audio(tresillo_clicks, rate=sr)
Explanation: Note that both the music notation and the array are symbolic representations of the rhythm; the rhythm is abstracted so that there is no information about tempo, dynamics, timbre, or other musical information. All we have is the temporal relationship between each note in the sequence (as well as the base assumption that the notes are evenly spaced).
Let's hear (and visualize) an example of how this rhythm sounds in more concrete terms:
End of explanation
tresillo_num_pulses = np.count_nonzero(tresillo_rhythm)
tresillo_num_steps = len(tresillo_rhythm)
print("The tresillo rhythm has {} pulses and {} steps".format(tresillo_num_pulses, tresillo_num_steps))
Explanation: 1.2 Rhythmic properties
To work with rhythm in an analytical fashion, we'll need to define some properties of a given rhythmic sequence.
Let's define pulses as the number of onsets in a sequence (i.e. the number of 1s as opposed to 0s), and steps as the total number of elements in the sequence:
End of explanation
# Generate the clicks
tresillo_pulse_clicks, tresillo_step_clicks = fibonaccistretch.generate_rhythm_clicks(tresillo_rhythm, tresillo_click_interval)
tresillo_pulse_times, tresillo_step_times = fibonaccistretch.generate_rhythm_times(tresillo_rhythm, tresillo_click_interval)
# Tresillo as an array
print(tresillo_rhythm)
# Tresillo audio, plotted
plt.figure(figsize=(8, 2))
librosa.display.waveplot(tresillo_pulse_clicks + tresillo_step_clicks, sr=sr)
plt.vlines(tresillo_pulse_times + 0.005, -1, 1, color="r")
plt.vlines(tresillo_step_times + 0.005, -0.5, 0.5, color="r")
# Tresillo as audio
ipd.Audio(tresillo_pulse_clicks + tresillo_step_clicks, rate=44100)
Explanation: We can listen to the pulses and steps together:
End of explanation
tresillo_pulse_lengths = fibonaccistretch.calculate_pulse_lengths(tresillo_rhythm)
print("Tresillo pulse lengths: {}".format(tresillo_pulse_lengths))
Explanation: You can follow along with the printed array and hear that every 1 corresponds to a pulse, and every 0 to a step.
In addition, let's define pulse lengths as the number of steps that each pulse lasts:
End of explanation
fibonaccistretch.fibonacci??
Explanation: Note that the tresillo rhythm's pulse lengths all fall along the Fibonacci sequence. This allows us do some pretty fun things, as we'll see in a bit. But first let's take a step back.
Part 2 - Fibonacci rhythms
2.1 Fibonacci numbers
The Fibonacci sequence is a particular sequence in which each value is the sum of the two preceding values. We can define a function in Python that gives us the nth Fibonacci number:
End of explanation
first_twenty_fibs = np.array([fibonaccistretch.fibonacci(n) for n in range(20)])
plt.figure(figsize=(16,1))
plt.scatter(first_twenty_fibs, np.zeros(20), c="r")
plt.axis("off")
print(first_twenty_fibs)
Explanation: And the first 20 numbers in the sequence are:
End of explanation
# Calculate and plot Fibonacci number ratios
phi = (1 + math.sqrt(5)) / 2 # Golden ratio; 1.61803398875...
fibs_ratios = np.array([first_twenty_fibs[i] / float(max(1, first_twenty_fibs[i-1])) for i in range(2,20)])
plt.plot(np.arange(len(fibs_ratios)), fibs_ratios, "r")
# Plot golden ratio as a consant
phis = np.empty(len(fibs_ratios))
phis.fill(phi)
plt.xticks(np.arange(len(fibs_ratios)))
plt.xlabel("Fibonacci index (denotes i for ith Fibonacci number)")
plt.ylabel("Ratio between ith and (i-1)th Fibonacci number")
plt.plot(np.arange(len(phis)), phis, "b", alpha=0.5)
Explanation: The Fibonacci sequence is closely linked to the golden ratio in many ways, including the fact that as we go up the sequence, the ratio between successive numbers gets closer and closer to the golden ratio. (If you're interested, Vijay Iyer's article Strength in numbers: How Fibonacci taught us how to swing goes into this in more depth.)
Below is a plot of Fibonacci number ratios in <span style="color:red">red</span>, and the golden ratio as a constant in <span style="color:blue">blue</span>. You can see how the Fibonacci ratios converge to the golden ratio:
End of explanation
fibonaccistretch.find_fibonacci_index??
fib_n = 21
fib_i = fibonaccistretch.find_fibonacci_index(fib_n)
assert(fibonaccistretch.fibonacci(fib_i) == fib_n)
print("{} is the {}th Fibonacci number".format(fib_n, fib_i))
Explanation: We can also use the golden ratio to find the index of a Fibonacci number:
End of explanation
plt.figure(figsize=(8, 2))
plt.vlines(tresillo_pulse_times + 0.005, -1, 1, color="r")
plt.vlines(tresillo_step_times + 0.005, -0.5, 0.5, color="r", alpha=0.5)
plt.yticks([])
print("Tresillo rhythm sequence: {}".format(tresillo_rhythm))
print("Tresillo pulse lengths: {}".format(tresillo_pulse_lengths))
Explanation: 2.2 Using Fibonacci numbers to manipulate rhythms
Recall our tresillo rhythm:
End of explanation
expanded_pulse_lengths = fibonaccistretch.fibonacci_expand_pulse_lengths(tresillo_pulse_lengths)
print("Expanded tresillo pulse lengths: {}".format(expanded_pulse_lengths))
Explanation: We might classify it as a Fibonacci rhythm, since every one of its pulse lengths is a Fibonacci number. If we wanted to expand that rhythm along the Fibonacci sequence, what would that look like?
An intuitive (and, as it turns out, musically satisfying) method would be to take every pulse length and simply replace it with the Fibonacci number that follows it. So in our example, the 3s become 5s, and the 2 becomes 3.
End of explanation
# Note that `scale_amount` determines the direction and magnitude of the scaling.
# If `scale_amount` > 0, it corresponds to a rhythmic expansion.
# If `scale_amount` < 0, it corresponds to a rhythmic contraction.
# If `scale_amount` == 0, the original scale is maintained and no changes are made.
print("Tresillo pulse lengths: {}".format(tresillo_pulse_lengths))
print("Tresillo pulse lengths expanded by 1: {}".format(fibonaccistretch.fibonacci_scale_pulse_lengths(tresillo_pulse_lengths, scale_amount=1)))
print("Tresillo pulse lengths expanded by 2: {}".format(fibonaccistretch.fibonacci_scale_pulse_lengths(tresillo_pulse_lengths, scale_amount=2)))
print("Tresillo pulse lengths contracted by 1: {}".format(fibonaccistretch.fibonacci_scale_pulse_lengths(tresillo_pulse_lengths, scale_amount=-1)))
Explanation: We'll also want to be able to contract rhythms along the Fibonacci sequence (i.e. choose numbers in decreasing order instead of increasing order), as well as specify how many Fibonacci numbers away we want to end up.
We can generalize this expansion and contraction into a single function that can scale pulse lengths:
End of explanation
# Scale tresillo rhythm by a variety of factors and plot the results
for scale_factor, color in [(0, "r"), (1, "g"), (2, "b"), (-1, "y")]:
scaled_rhythm = fibonaccistretch.fibonacci_scale_rhythm(tresillo_rhythm, scale_factor)
scaled_pulse_indices = np.array([p_i for p_i,x in enumerate(scaled_rhythm) if x > 0 ])
scaled_step_indices = np.array([s_i for s_i in range(len(scaled_rhythm))])
scaled_pulse_ys = np.empty(len(scaled_pulse_indices))
scaled_pulse_ys.fill(0)
scaled_step_ys = np.empty(len(scaled_step_indices))
scaled_step_ys.fill(0)
# plt.figure(figsize=(len([scaled_rhythm])*0.5, 1))
plt.figure(figsize=(8, 1))
if scale_factor > 0:
plt.title("Tresillo rhythm expanded by {}: {}".format(abs(scale_factor), scaled_rhythm), loc="left")
elif scale_factor < 0:
plt.title("Tresillo rhythm contracted by {}: {}".format(abs(scale_factor), scaled_rhythm), loc="left")
else: # scale_factor == 0, which means rhythm is unaltered
plt.title("Tresillo rhythm: {}".format(scaled_rhythm), loc="left")
# plt.scatter(scaled_pulse_indices, scaled_pulse_ys, c=color)
# plt.scatter(scaled_step_indices, scaled_step_ys, c="k", alpha=0.5)
# plt.grid(True)
plt.vlines(scaled_pulse_indices, -1, 1, color=color)
plt.vlines(scaled_step_indices, -0.5, 0.5, color=color, alpha=0.5)
plt.xticks(np.arange(0, plt.xlim()[1], 1))
plt.yticks([])
# plt.xticks(np.linspace(0, 10, 41))
Explanation: Of course, once we have these scaled pulse lengths, we'll want to be able to convert them back into rhythms, in our original array format:
End of explanation
# Load input audio file
filename = "../data/humannature_30s.mp3"
y, sr = librosa.load(filename, sr=sr)
plt.figure(figsize=(16,4))
librosa.display.waveplot(y, sr=sr)
ipd.Audio(y, rate=sr)
Explanation: This is exactly the kind of rhythmic expansion and contraction that the Vijay Iyer Trio explore in their renditions of "Mystic Brew" and "Human Nature (Trio Extension)".
Next up, let's begin working with some actual audio!
Part 3 - Mapping rhythm to audio
Part of the beauty of working with rhythms in a symbolic fashion is that once we set things up, we can apply them to any existing audio track.
To properly map the relationship between a rhythmic sequence and an audio representation of a piece of music, we'll have to do some feature extraction, that is, teasing out specific attributes of the music by analyzing the audio signal.
Our goal is to create a musically meaningful relationship between our symbolic rhythmic data and the audio track we want to manipulate.
3.1 Estimating tempo
First we'll load up our source audio file. For this example we'll work with Michael Jackson's "Human Nature", off of his 1982 album Thriller:
End of explanation
tempo = fibonaccistretch.estimate_tempo(y, sr)
print("Tempo (calculated): {}".format(tempo))
tempo = 93.0 # Hard-coded from prior knowledge
print("Tempo (hard-coded): {}".format(tempo))
Explanation: An important feature we want to extract from the audio is tempo (i.e. the time interval between steps). Let's estimate that using the librosa.beat.tempo method (which requires us to first detect onsets, or []):
End of explanation
beat_times = fibonaccistretch.calculate_beat_times(y, sr, tempo)
print("First 10 beat times (in seconds): {}".format(beat_times[:10]))
Explanation: <div style="color:gray">
(We can see that the tempo we've estimated differs by approximately 1BPM from the tempo that we've hard-coded from prior knowledge.
It's often the case that such automatic feature extraction tools and algorithms require a fair bit of fine-tuning, so we can improve our results by supplying some user-defined parameters, especially when using them out of the box like we are here. The variables `hop_length` and `tempo` are two such parameters in this case.
However, the more parameters we define manually, the less flexible our overall system becomes, so it's a tradeoff between accuracy and robustness.)
</div>
3.2 From tempo to beats
From the tempo we can calculate the times of every beat in the song (assuming the tempo is consistent, which in this case it is):
End of explanation
# Listen to beat clicks (i.e. a metronome)
beat_clicks = librosa.clicks(times=beat_times, sr=sr, length=len(y))
# Plot waveform and beats
plt.figure(figsize=(16,4))
librosa.display.waveplot(y, sr=sr)
plt.vlines(beat_times, -0.25, 0.25, color="r")
ipd.Audio(y + beat_clicks, rate=sr)
Explanation: And let's listen to our extracted beats with the original audio track:
End of explanation
beats_per_measure = 4
Explanation: 3.3 From beats to measures
In order to map our tresillo rhythm to the audio in a musically meaningful way, we'll need to group beats into measures. From listening to the above example we can hear that every beat corresponds to a quarter note; thus, we'll set beats_per_measure to 4:
End of explanation
# Work in samples from here on
beat_samples = librosa.time_to_samples(beat_times, sr=sr)
measure_samples = fibonaccistretch.calculate_measure_samples(y, beat_samples, beats_per_measure)
print("First 10 measure samples: {}".format(measure_samples[:10]))
Explanation: Using beats_per_measure we can calculate the times for the start of each measure:
End of explanation
measure_times = librosa.samples_to_time(measure_samples, sr=sr)
print("First 10 measure times (in seconds): {}".format(measure_times[:10], sr=sr))
Explanation: Note that we're working in samples now, as this is the unit that the audio data is actually stored in; when we loaded up the audio track, we essentially read in a large array of samples. The sample rate, which we defined as sr, tells us how many samples there are per second.
Thus, it's a simple matter to convert samples to times whenever we need to:
End of explanation
# Add clicks, then plot and listen
plt.figure(figsize=(16, 4))
librosa.display.waveplot(y, sr=sr)
plt.vlines(measure_times, -1, 1, color="r")
plt.vlines(beat_times, -0.5, 0.5, color="r")
measure_clicks = librosa.clicks(times=measure_times, sr=sr, click_freq=3000.0, length=len(y))
ipd.Audio(y + measure_clicks + beat_clicks, rate=sr)
Explanation: We can visualize, and listen to, the measure and beat markers along with the original waveform:
End of explanation
print("Tresillo rhythm: {}\n"
"{} pulses, {} steps".format(tresillo_rhythm, tresillo_num_pulses, tresillo_num_steps))
Explanation: 3.4 Putting it all together: mapping symbolic rhythms to audio signals
With our knowledge of the song's tempo, beats, and measures, we can start bringing our symbolic rhythms into audio-land. Again, let's work with our trusty tresillo rhythm:
End of explanation
steps_per_measure = tresillo_num_steps
steps_per_measure
Explanation: For this example, we want the rhythm to last an entire measure as well, so we'll set steps_per_measure to be the number of steps in the rhythm (in this case, 8):
End of explanation
fibonaccistretch.overlay_rhythm_onto_audio(tresillo_rhythm, y, measure_samples, sr=sr)
Explanation: With these markers in place, we can now overlay the tresillo rhythm onto each measure and listen to the result:
End of explanation
fibonaccistretch.overlay_rhythm_onto_audio(tresillo_rhythm, y, measure_samples, sr=sr, click_colors={"measure": "r",
"pulse": "g",
"step": "b"})
Explanation: The clicks for measures, pulses, and steps, overlap with each other at certain points. While you can hear this based on the fact that each click is at a different frequency, it can be hard to tell visually in the above figure. We can make this more apparent by plotting each set of clicks with a different color.
In the below figure, each measure is denoted by a large <span style="color:red">red</span> line, each pulse by a medium <span style="color:green">green</span> line, and each step by a small <span style="color:blue">blue</span> line.
End of explanation
original_rhythm = tresillo_rhythm
target_rhythm = fibonaccistretch.fibonacci_scale_rhythm(original_rhythm, 1) # "Fibonacci scale" original rhythm by a factor of 1
print("Original rhythm: {}\n"
"Target rhythm: {}".format(original_rhythm, target_rhythm))
Explanation: You can hear that the tresillo rhythm's pulses line up with the harmonic rhythm of "Human Nature"; generally, we want to pick rhythms and audio tracks that have at least some kind of musical relationship.
(We could actually try to estimate rhythmic patterns based on onsets and tempo, but that's for another time.)
Part 4 - Time-stretching audio
Now that we've put the symbolic rhythm and source audio together, we're ready to begin manipulating the audio and doing some actual stretching!
4.1 Target rhythms
First, we'll define the target rhythm that we want the audio to be mapped to:
End of explanation
pulse_ratios = fibonaccistretch.calculate_pulse_ratios(original_rhythm, target_rhythm)
print("Pulse ratios: {}".format(pulse_ratios))
Explanation: 4.2 Pulse ratios
Given an original rhythm and target rhythm, we can compute their pulse ratios, that is, the ratio between each of their pulses:
End of explanation
fibonaccistretch.modify_measure??
Explanation: 4.3 Modifying measures by time-stretching
Since we're treating our symbolic rhythms as having the duration of one measure, it makes sense to start by modifying a single measure.
Basically what we want to do is: for each pulse, get the audio chunk that maps to that pulse, and time-stretch it based on our calculated pulse ratios.
Below is an implementation of just that. It's a bit long, but that's mostly due to having to define several properties to do with rhythm and audio. The core idea, of individually stretching the pulses, remains the same:
End of explanation
first_measure_data = y[measure_samples[0]:measure_samples[1]]
first_measure_modified = fibonaccistretch.modify_measure(first_measure_data,
original_rhythm, target_rhythm,
stretch_method="timestretch")
ipd.Audio(first_measure_modified, rate=sr)
Explanation: You'll notice that in the part where we choose stretch methods, there's a function called euclidean_stretch that we haven't defined. We'll get to that in just a second! For now, let's just keep that in the back of our heads, and not worry about it too much, so that we can hear what our modification method sounds like when applied to the first measure of "Human Nature":
End of explanation
# Modify the track using naive time-stretch
y_modified, measure_samples_modified = fibonaccistretch.modify_track(y, measure_samples,
original_rhythm, target_rhythm,
stretch_method="timestretch")
plt.figure(figsize=(16,4))
librosa.display.waveplot(y_modified, sr=sr)
ipd.Audio(y_modified, rate=sr)
Explanation: It doesn't sound like there's much difference between the stretched version and the original, does it?
4.4 Modifying an entire track by naively time-stretching each pulse
To get a better sense, let's apply the modification to the entire audio track:
End of explanation
pulse_ratios = fibonaccistretch.calculate_pulse_ratios(original_rhythm, target_rhythm)
print(pulse_ratios)
Explanation: Listening to the whole track, only perceptible difference is that the last two beats of each measure are slightly faster. If we look at the pulse ratios again:
End of explanation
fibonaccistretch.overlay_rhythm_onto_audio(target_rhythm, y_modified, measure_samples, sr)
Explanation: ... we can see that this makes sense, as we're time-stretching the first two pulses by the same amount, and then time-stretching the last pulse by a different amount.
(Note that while we're expanding our original rhythm along the Fibonacci sequence, this actually corresponds to a contraction when time-stretching. This is because we want to maintain the original tempo, so we're trying to fit more steps into the same timespan.)
4.5 Overlaying target rhythm clicks
We can get some more insight if we sonify the target rhythm's clicks and overlay it onto our modified track:
End of explanation
print("Original rhythm: {}\n"
"Target rhythm: {}".format(original_rhythm, target_rhythm))
Explanation: This gets to the heart of the problem: when we time-stretch an entire pulse this way, we retain the original pulse's internal rhythm, essentially creating a polyrhythm in the target pulse's step (i.e. metrical) structure. Even though we're time-stretching each pulse, we don't hear a difference because everything within the pulse gets time-stretched by the same amount.
Part 5 - Euclidean stretch
Listening to the rendered track in Part 4.5, you can hear that aside from the beginning of each measure and pulse, the musical onsets in the modified track don't really line up with the target rhythm's clicks at all. Thus, without the clicks, we have no way to identify the target rhythm, even though that's what we were using as the basis of our stretch method!
So how do we remedy this?
5.1 Subdividing pulses
We dig deeper. That is, we can treat each pulse as a rhythm of its own, and subdivide it accordingly, since each pulse is comprised of multiple steps after all.
End of explanation
fibonaccistretch.euclid??
gcd = fibonaccistretch.euclid(8, 12)
print("Greatest common divisor of 8 and 12 is {}".format(gcd))
Explanation: Looking at the first pulses of the original rhythm and target rhythm, we want to turn
[1 0 0]
into
[1 0 0 0 0].
To accomplish this, we'll turn to the concept of Euclidean rhythms.
5.2 Generating Euclidean rhythms using Bjorklund's algorithm
A Euclidean rhythm is a type of rhythm that can be generated based upon the Euclidean algorithm for calculating the greatest common divisor of two numbers.
End of explanation
print(np.array(bjorklund.bjorklund(pulses=3, steps=8)))
Explanation: The concept of Euclidean rhythms was first introduced by Godfried Toussaint in his 2004 paper The Euclidean Algorithm Generates Traditional Musical Rhythms.
The algorithm for generating these rhythms is actually Bjorklund's algorithm, first described by E. Bjorklund in his 2003 paper The Theory of Rep-Rate Pattern Generation in the SNS Timing System, which deals with neutron accelerators in nuclear physics. Here we use Brian House's Python implementation of Bjorklund's algorithm; you can find the source code on GitHub.
It turns out that our tresillo rhythm is an example of a Euclidean rhythm. We can generate it by plugging in the number of pulses and steps into Bjorklund's algorithm:
End of explanation
original_pulse = np.array([1,0,0])
target_pulse = np.array([1,0,0,0,0])
Explanation: 5.3 Using Euclidean rhythms to subdivide pulses
Say we want to stretch a pulse [1 0 0] so that it resembles another pulse [1 0 0 0 0]:
End of explanation
original_pulse_rhythm = np.ones(len(original_pulse), dtype="int")
print(original_pulse_rhythm)
Explanation: We want to know how much to stretch each subdivision. To do this, we'll convert these single pulses into rhythms of their own. First, we'll treat each step in the original pulse as an onset:
End of explanation
target_pulse_rhythm = np.array(bjorklund.bjorklund(pulses=len(original_pulse), steps=len(target_pulse)))
print(target_pulse_rhythm)
Explanation: And as mentioned before, we'll use Bjorklund's algorithm to generate the target pulse's rhythm. The trick here is to use the number of steps in the original pulse as the number of pulses for the target pulse rhythm (hence the conversion to onsets earlier):
End of explanation
print(fibonaccistretch.fibonacci_scale_rhythm(tresillo_rhythm, -1))
Explanation: You might have noticed that this rhythm is exactly the same as the rhythm produced by contracting the tresillo rhythm along the Fibonacci sequence by a factor of 1:
End of explanation
print("In order to stretch pulse-to-pulse {} --> {}\n"
"we subdivide and stretch rhythms {} --> {}".format(original_pulse, target_pulse,
original_pulse_rhythm, target_pulse_rhythm))
Explanation: And it's true that there is some significant overlap between Euclidean rhythms and Fibonacci rhythms. The advantage of working with Euclidean rhythms here is that they work with any number of pulses and steps, not just ones that are Fibonacci numbers.
To summarize:
End of explanation
print(fibonaccistretch.calculate_pulse_ratios(original_pulse_rhythm, target_pulse_rhythm))
Explanation: The resulting pulse ratios are:
End of explanation
fibonaccistretch.euclidean_stretch??
Explanation: ... which doesn't intuitively look like it would produce something any different from what we tried before. However, we might perceive a greater difference because:
a) we're working on a more granular temporal level (subdivisions of pulses as opposed to measures), and
b) we're adjusting an equally-spaced rhythm (e.g. [1 1 1]) to one that's not necessarily equally-spaced (e.g. [1 0 1 0 1])
5.4 The Euclidean stretch algorithm
With all this in mind, we can now implement Euclidean stretch:
End of explanation
# Modify the track
y_modified, measure_samples_modified = fibonaccistretch.modify_track(y, measure_samples,
original_rhythm, target_rhythm,
stretch_method="euclidean")
plt.figure(figsize=(16,4))
librosa.display.waveplot(y_modified, sr=sr)
ipd.Audio(y_modified, rate=sr)
Explanation: Let's take a listen to how it sounds:
End of explanation
fibonaccistretch.overlay_rhythm_onto_audio(target_rhythm, y_modified, measure_samples, sr)
Explanation: Much better! With clicks:
End of explanation
fibonaccistretch.fibonacci_stretch_track??
Explanation: As you can hear, the modified track's rhythm is in line with the clicks, and sounds noticeably different from the original song. This is a pretty good place to end up!
Part 6 - Fibonacci stretch: implementation and examples
6.1 Implementation
Here's an end-to-end implementation of Fibonacci stretch. A lot of the default parameters have been set to the ones we've been using in this notebook, although of course you can pass in your own:
End of explanation
# "Human Nature" stretched by a factor of 1 using default parameters
fibonaccistretch.fibonacci_stretch_track("../data/humannature_90s.mp3",
stretch_factor=1,
tempo=93.0)
Explanation: Now we can simply feed the function a path to an audio file (as well as any parameters we want to customize).
This is the exact method that's applied to the sneak peek at the final result up top. The only difference is that we use a 90-second excerpt rather than our original 30-second one:
End of explanation
# "Human Nature" stretched by a factor of 2
fibonaccistretch.fibonacci_stretch_track("../data/humannature_30s.mp3",
tempo=93.0,
stretch_factor=2,
overlay_clicks=True)
Explanation: And indeed we get the exact same result.
6.2 Examples: customizing stretch factors
Now that we have a function to easily stretch tracks, we can begin playing around with some of the parameters.
Here's the 30-second "Human Nature" excerpt again, only this time it's stretched by a factor of 2 instead of 1:
End of explanation
# "Chan Chan" stretched by a factor of -1
fibonaccistretch.fibonacci_stretch_track("../data/chanchan_30s.mp3",
stretch_factor=-1,
tempo=78.5)
Explanation: As mentioned in part 2.2, we can contract rhythms as well using negative numbers as our stretch_factor. Let's try that with "Chan Chan" by the Buena Vista Social Club:
End of explanation
# "I'm the One" stretched by a factor of 1
fibonaccistretch.fibonacci_stretch_track("../data/imtheone_cropped_chance_60s.mp3",
tempo=162,
original_rhythm=np.array([1,0,0,0,0,1,0,0]),
stretch_factor=1)
Explanation: (Note that although we do end up with a perceptible difference (the song now sounds like it's in 7/8), it should actually sound like it's in 5/8, since [1 0 0 1 0 0 1 0] is getting compressed to [1 0 1 0 1]. This is an implementation detail with the Euclidean stretch method that I need to fix.)
6.3 Examples: customizing original and target rhythms
In order to get musically meaningful results we generally want to supply parameters that make musical sense with our input audio (although it can certainly be interesting to try with parameters that don't!). One of the parameters that makes the most difference in results is the rhythm sequence used to represent each measure.
Here's Chance the Rapper's verse from DJ Khaled's "I'm the One", with a custom original_rhythm that matches the bassline of the song:
End of explanation
# "I'm the One" in 5/4
fibonaccistretch.fibonacci_stretch_track("../data/imtheone_cropped_chance_60s.mp3",
tempo=162,
original_rhythm=np.array([1,0,0,0,0,1,0,0]),
target_rhythm=np.array([1,0,0,0,0,1,0,0,0,0]),
overlay_clicks=True)
Explanation: We can define both a custom target rhythm as well. In addition, neither original_rhythm nor target_rhythm have to be Fibonacci rhythms for the stretch algorithm to work (although with this implementation they do both have to have the same number of pulses).
Let's try that out with the same verse, going from an original rhythm with 8 steps (i.e. in 4/4 meter) to a target rhythm with 10 steps (i.e. in 5/4 meter):
End of explanation
# "Eine kleine Nachtmusik" with a swing feel
fibonaccistretch.fibonacci_stretch_track("../data/einekleinenachtmusik_30s.mp3",
tempo=130,
original_rhythm=np.array([1,0,1,1]),
target_rhythm=np.array([1,0,0,1,0,1]))
Explanation: As another example, we can give a swing feel to the first movement of Mozart's "Eine kleine Nachtmusik" (K. 525), as performed by A Far Cry:
End of explanation
# "Chan Chan" in 5/4
fibonaccistretch.fibonacci_stretch_track("../data/chanchan_30s.mp3",
tempo=78.5,
original_rhythm=np.array([1,0,0,1,0,0,0,0]),
target_rhythm=np.array([1,0,0,0,0,1,0,0,0,0])) # Also interesting to try with [1,0,1]
Explanation: It works pretty decently until around 0:09, at which point the assumption of a metronomically consistent tempo breaks down. (This is one of the biggest weaknesses with the current implementation, and is something I definitely hope to work on in the future.)
Let's also hear what "Chan Chan" sounds like in 5/4:
End of explanation
# "Pink + White" stretched by a factor of 1
fibonaccistretch.fibonacci_stretch_track("../data/pinkandwhite_30s.mp3",
beats_per_measure=6,
tempo=160,
# 6/8 to 4/4 using bassline rhythm
original_rhythm=np.array([1,1,1,1,0,0]),
target_rhythm=np.array([1,1,1,0,1,0,0,0]),
# 6/8 to 4/4 using half notes
# original_rhythm=np.array([1,0,0,1,0,0]),
# target_rhythm=np.array([1,0,0,0,1,0,0,0]),
# 6/8 to 10/8 (5/4) using Fibonacci stretch factor of 1
# original_rhythm=np.array([1,0,0,1,0,0]),
# stretch_factor=1,
overlay_clicks=True)
Explanation: 6.4 Examples: customizing input beats per measure
We can also work with source audio in other meters. For example, Frank Ocean's "Pink + White" is in 6/8. Here I've stretched it into 4/4 using the rhythm of the bassline, but you can uncomment the other supplied parameters (or supply your own!) to hear how they sound as well:
End of explanation |
2,385 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<small><i>This notebook was prepared by Donne Martin. Source and license info is on GitHub.</i></small>
Solution Notebook
Problem
Step1: Algorithm
Step2: Unit Test | Python Code:
def permutations(str1, str2):
return sorted(str1) == sorted(str2)
Explanation: <small><i>This notebook was prepared by Donne Martin. Source and license info is on GitHub.</i></small>
Solution Notebook
Problem: Determine if a string is a permutation of another string
Constraints
Test Cases
Algorithm: Compare Sorted Strings
Code: Compare Sorted Strings
Algorithm: Hashmap Lookup
Code: Hashmap Lookup
Unit Test
Constraints
Can we assume the string is ASCII?
Yes
Note: Unicode strings could require special handling depending on your language
Is whitespace important?
Yes
Is this case sensitive? 'Nib', 'bin' is not a match?
Yes
Test Cases
One or more empty strings -> False
'Nib', 'bin' -> False
'act', 'cat' -> True
'a ct', 'ca t' -> True
Algorithm: Compare Sorted Strings
Permutations contain the same strings but in different orders. This approach could be slow for large strings due to sorting.
Sort both strings
If both sorted strings are equal
return True
Else
return False
Complexity:
* Time: O(n log n) from the sort, in general
* Space: O(n)
Code: Compare Sorted Strings
End of explanation
from collections import defaultdict
def unique_counts(string):
dict_chars = defaultdict(int)
for char in string:
dict_chars[char] += 1
return dict_chars
def permutations_alt(str1, str2):
if len(str1) != len(str2):
return False
unique_counts1 = unique_counts(str1)
unique_counts2 = unique_counts(str2)
return unique_counts1 == unique_counts2
Explanation: Algorithm: Hash Map Lookup
We'll keep a hash map (dict) to keep track of characters we encounter.
Steps:
* Scan each character
* For each character in each string:
* If the character does not exist in a hash map, add the character to a hash map
* Else, increment the character's count
* If the hash maps for each string are equal
* Return True
* Else
* Return False
Notes:
* Since the characters are in ASCII, we could potentially use an array of size 128 (or 256 for extended ASCII), where each array index is equivalent to an ASCII value
* Instead of using two hash maps, you could use one hash map and increment character values based on the first string and decrement based on the second string
* You can short circuit if the lengths of each string are not equal, although len() in Python is generally O(1) unlike other languages like C where getting the length of a string is O(n)
Complexity:
* Time: O(n)
* Space: O(n)
Code: Hash Map Lookup
End of explanation
%%writefile test_permutation_solution.py
from nose.tools import assert_equal
class TestPermutation(object):
def test_permutation(self, func):
assert_equal(func('', 'foo'), False)
assert_equal(func('Nib', 'bin'), False)
assert_equal(func('act', 'cat'), True)
assert_equal(func('a ct', 'ca t'), True)
print('Success: test_permutation')
def main():
test = TestPermutation()
test.test_permutation(permutations)
try:
test.test_permutation(permutations_alt)
except NameError:
# Alternate solutions are only defined
# in the solutions file
pass
if __name__ == '__main__':
main()
run -i test_permutation_solution.py
Explanation: Unit Test
End of explanation |
2,386 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
K-Nest-Neighbors
1 Unsupervised Neast Neighbors
It acts as a uniform unterface to three different nestest neighbors algorithms
Step1: We can also use the KDTree and BallTree classes directly to find the nearest neighbors, alternatively. | Python Code:
from sklearn.neighbors import NearestNeighbors
import numpy as np
X = np.array([[-1,-1],[-2, -1],[1,1],[2,1],[3,2]])
nbrs = NearestNeighbors(n_neighbors=2, algorithm='ball_tree').fit(X)
distance, indices = nbrs.kneighbors(X)
indices
distance
Explanation: K-Nest-Neighbors
1 Unsupervised Neast Neighbors
It acts as a uniform unterface to three different nestest neighbors algorithms: BallTree, KDTree, and burte-force algorithm based on routines in sklearn.metrics.pariwise
End of explanation
from sklearn.neighbors import KDTree
import numpy as np
X = np.array([[-1,-1],[-2, -1],[1,1],[2,1],[3,2]])
kdt=KDTree(X, leaf_size=30, metric='euclidean')
kdt.query(X, k=2, return_distance=False)
Explanation: We can also use the KDTree and BallTree classes directly to find the nearest neighbors, alternatively.
End of explanation |
2,387 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Recognize named entities on Twitter with LSTMs
In this assignment, you will use a recurrent neural network to solve Named Entity Recognition (NER) problem. NER is a common task in natural language processing systems. It serves for extraction such entities from the text as persons, organizations, locations, etc. In this task you will experiment to recognize named entities from Twitter.
For example, we want to extract persons' and organizations' names from the text. Than for the input text
Step1: Load the Twitter Named Entity Recognition corpus
We will work with a corpus, which contains twits with NE tags. Every line of a file contains a pair of a token (word/punctuation symbol) and a tag, separated by a whitespace. Different tweets are separated by an empty line.
The function read_data reads a corpus from the file_path and returns two lists
Step2: And now we can load three separate parts of the dataset
Step3: You should always understand what kind of data you deal with. For this purpose, you can print the data running the following cell
Step5: Prepare dictionaries
To train a neural network, we will use two mappings
Step6: After implementing the function build_dict you can make dictionaries for tokens and tags. Special tokens in our case will be
Step7: The next additional functions will help you to create the mapping between tokens and ids for a sentence.
Step9: Generate batches
Neural Networks are usually trained with batches. It means that weight updates of the network are based on several sequences at every single time. The tricky part is that all sequences within a batch need to have the same length. So we will pad them with a special <PAD> token. It is also a good practice to provide RNN with sequence lengths, so it can skip computations for padding parts. We provide the batching function batches_generator readily available for you to save time.
Step10: Build a recurrent neural network
This is the most important part of the assignment. Here we will specify the network architecture based on TensorFlow building blocks. It's fun and easy as a lego constructor! We will create an LSTM network which will produce probability distribution over tags for each token in a sentence. To take into account both right and left contexts of the token, we will use Bi-Directional LSTM (Bi-LSTM). Dense layer will be used on top to perform tag classification.
Step12: First, we need to create placeholders to specify what data we are going to feed into the network during the execution time. For this task we will need the following placeholders
Step14: Now, let us specify the layers of the neural network. First, we need to perform some preparatory steps
Step16: To compute the actual predictions of the neural network, you need to apply softmax to the last layer and find the most probable tags with argmax.
Step18: During training we do not need predictions of the network, but we need a loss function. We will use cross-entropy loss, efficiently implemented in TF as
cross entropy with logits. Note that it should be applied to logits of the model (not to softmax probabilities!). Also note, that we do not want to take into account loss terms coming from <PAD> tokens. So we need to mask them out, before computing mean.
Step20: The last thing to specify is how we want to optimize the loss.
We suggest that you use Adam optimizer with a learning rate from the corresponding placeholder.
You will also need to apply clipping to eliminate exploding gradients. It can be easily done with clip_by_norm function.
Step21: Congratulations! You have specified all the parts of your network. You may have noticed, that we didn't deal with any real data yet, so what you have written is just recipes on how the network should function.
Now we will put them to the constructor of our Bi-LSTM class to use it in the next section.
Step22: Train the network and predict tags
Session.run is a point which initiates computations in the graph that we have defined. To train the network, we need to compute self.train_op, which was declared in perform_optimization. To predict tags, we just need to compute self.predictions. Anyway, we need to feed actual data through the placeholders that we defined before.
Step23: Implement the function predict_for_batch by initializing feed_dict with input x_batch and lengths and running the session for self.predictions.
Step26: We finished with necessary methods of our BiLSTMModel model and almost ready to start experimenting.
Evaluation
To simplify the evaluation process we provide two functions for you
Step27: Run your experiment
Create BiLSTMModel model with the following parameters
Step28: Finally, we are ready to run the training!
Step29: Now let us see full quality reports for the final model on train, validation, and test sets. To give you a hint whether you have implemented everything correctly, you might expect F-score about 40% on the validation set.
The output of the cell below (as well as the output of all the other cells) should be present in the notebook for peer2peer review! | Python Code:
import sys
sys.path.append("..")
from common.download_utils import download_week2_resources
download_week2_resources()
Explanation: Recognize named entities on Twitter with LSTMs
In this assignment, you will use a recurrent neural network to solve Named Entity Recognition (NER) problem. NER is a common task in natural language processing systems. It serves for extraction such entities from the text as persons, organizations, locations, etc. In this task you will experiment to recognize named entities from Twitter.
For example, we want to extract persons' and organizations' names from the text. Than for the input text:
Ian Goodfellow works for Google Brain
a NER model needs to provide the following sequence of tags:
B-PER I-PER O O B-ORG I-ORG
Where B- and I- prefixes stand for the beginning and inside of the entity, while O stands for out of tag or no tag. Markup with the prefix scheme is called BIO markup. This markup is introduced for distinguishing of consequent entities with similar types.
A solution of the task will be based on neural networks, particularly, on Bi-Directional Long Short-Term Memory Networks (Bi-LSTMs).
Libraries
For this task you will need the following libraries:
- Tensorflow — an open-source software library for Machine Intelligence.
- Numpy — a package for scientific computing.
If you have never worked with Tensorflow, you would probably need to read some tutorials during your work on this assignment, e.g. this one could be a good starting point.
Data
The following cell will download all data required for this assignment into the folder week2/data.
End of explanation
def read_data(file_path):
tokens = []
tags = []
tweet_tokens = []
tweet_tags = []
for line in open(file_path, encoding='utf-8'):
line = line.strip()
if not line:
if tweet_tokens:
tokens.append(tweet_tokens)
tags.append(tweet_tags)
tweet_tokens = []
tweet_tags = []
else:
token, tag = line.split()
# Replace all urls with <URL> token
# Replace all users with <USR> token
######################################
######### YOUR CODE HERE #############
######################################
tweet_tokens.append(token)
tweet_tags.append(tag)
return tokens, tags
Explanation: Load the Twitter Named Entity Recognition corpus
We will work with a corpus, which contains twits with NE tags. Every line of a file contains a pair of a token (word/punctuation symbol) and a tag, separated by a whitespace. Different tweets are separated by an empty line.
The function read_data reads a corpus from the file_path and returns two lists: one with tokens and one with the corresponding tags. You need to complete this function by adding a code, which will replace a user's nickname to <USR> token and any URL to <URL> token. You could think that a URL and a nickname are just strings which start with http:// or https:// in case of URLs and a @ symbol for nicknames.
End of explanation
train_tokens, train_tags = read_data('data/train.txt')
validation_tokens, validation_tags = read_data('data/validation.txt')
test_tokens, test_tags = read_data('data/test.txt')
Explanation: And now we can load three separate parts of the dataset:
- train data for training the model;
- validation data for evaluation and hyperparameters tuning;
- test data for final evaluation of the model.
End of explanation
for i in range(3):
for token, tag in zip(train_tokens[i], train_tags[i]):
print('%s\t%s' % (token, tag))
print()
Explanation: You should always understand what kind of data you deal with. For this purpose, you can print the data running the following cell:
End of explanation
from collections import defaultdict
def build_dict(tokens_or_tags, special_tokens):
tokens_or_tags: a list of lists of tokens or tags
special_tokens: some special tokens
# Create a dictionary with default value 0
tok2idx = defaultdict(lambda: 0)
idx2tok = []
# Create mappings from tokens to indices and vice versa
# Add special tokens to dictionaries
# The first special token must have index 0
######################################
######### YOUR CODE HERE #############
######################################
return tok2idx, idx2tok
Explanation: Prepare dictionaries
To train a neural network, we will use two mappings:
- {token}$\to${token id}: address the row in embeddings matrix for the current token;
- {tag}$\to${tag id}: one-hot ground truth probability distribution vectors for computing the loss at the output of the network.
Now you need to implement the function build_dict which will return {token or tag}$\to${index} and vice versa.
End of explanation
special_tokens = ['<UNK>', '<PAD>']
special_tags = ['O']
# Create dictionaries
token2idx, idx2token = build_dict(train_tokens + validation_tokens, special_tokens)
tag2idx, idx2tag = build_dict(train_tags, special_tags)
Explanation: After implementing the function build_dict you can make dictionaries for tokens and tags. Special tokens in our case will be:
- <UNK> token for out of vocabulary tokens;
- <PAD> token for padding sentence to the same length when we create batches of sentences.
End of explanation
def words2idxs(tokens_list):
return [token2idx[word] for word in tokens_list]
def tags2idxs(tags_list):
return [tag2idx[tag] for tag in tags_list]
def idxs2words(idxs):
return [idx2token[idx] for idx in idxs]
def idxs2tags(idxs):
return [idx2tag[idx] for idx in idxs]
Explanation: The next additional functions will help you to create the mapping between tokens and ids for a sentence.
End of explanation
def batches_generator(batch_size, tokens, tags,
shuffle=True, allow_smaller_last_batch=True):
Generates padded batches of tokens and tags.
n_samples = len(tokens)
if shuffle:
order = np.random.permutation(n_samples)
else:
order = np.arange(n_samples)
n_batches = n_samples // batch_size
if allow_smaller_last_batch and n_samples % batch_size:
n_batches += 1
for k in range(n_batches):
batch_start = k * batch_size
batch_end = min((k + 1) * batch_size, n_samples)
current_batch_size = batch_end - batch_start
x_list = []
y_list = []
max_len_token = 0
for idx in order[batch_start: batch_end]:
x_list.append(words2idxs(tokens[idx]))
y_list.append(tags2idxs(tags[idx]))
max_len_token = max(max_len_token, len(tags[idx]))
# Fill in the data into numpy nd-arrays filled with padding indices.
x = np.ones([current_batch_size, max_len_token], dtype=np.int32) * token2idx['<PAD>']
y = np.ones([current_batch_size, max_len_token], dtype=np.int32) * tag2idx['O']
lengths = np.zeros(current_batch_size, dtype=np.int32)
for n in range(current_batch_size):
utt_len = len(x_list[n])
x[n, :utt_len] = x_list[n]
lengths[n] = utt_len
y[n, :utt_len] = y_list[n]
yield x, y, lengths
Explanation: Generate batches
Neural Networks are usually trained with batches. It means that weight updates of the network are based on several sequences at every single time. The tricky part is that all sequences within a batch need to have the same length. So we will pad them with a special <PAD> token. It is also a good practice to provide RNN with sequence lengths, so it can skip computations for padding parts. We provide the batching function batches_generator readily available for you to save time.
End of explanation
import tensorflow as tf
import numpy as np
class BiLSTMModel():
pass
Explanation: Build a recurrent neural network
This is the most important part of the assignment. Here we will specify the network architecture based on TensorFlow building blocks. It's fun and easy as a lego constructor! We will create an LSTM network which will produce probability distribution over tags for each token in a sentence. To take into account both right and left contexts of the token, we will use Bi-Directional LSTM (Bi-LSTM). Dense layer will be used on top to perform tag classification.
End of explanation
def declare_placeholders(self):
Specifies placeholders for the model.
# Placeholders for input and ground truth output.
self.input_batch = tf.placeholder(dtype=tf.int32, shape=[None, None], name='input_batch')
self.ground_truth_tags = ######### YOUR CODE HERE #############
# Placeholder for lengths of the sequences.
self.lengths = tf.placeholder(dtype=tf.int32, shape=[None], name='lengths')
# Placeholder for a dropout keep probability. If we don't feed
# a value for this placeholder, it will be equal to 1.0.
self.dropout_ph = tf.placeholder_with_default(1.0, shape=[])
# Placeholder for a learning rate (float32).
self.learning_rate_ph = ######### YOUR CODE HERE #############
BiLSTMModel.__declare_placeholders = classmethod(declare_placeholders)
Explanation: First, we need to create placeholders to specify what data we are going to feed into the network during the execution time. For this task we will need the following placeholders:
- input_batch — sequences of words (the shape equals to [batch_size, sequence_len]);
- ground_truth_tags — sequences of tags (the shape equals to [batch_size, sequence_len]);
- lengths — lengths of not padded sequences (the shape equals to [batch_size]);
- dropout_ph — dropout keep probability; this placeholder has a predefined value 1;
- learning_rate_ph — learning rate; we need this placeholder because we want to change the value during training.
It could be noticed that we use None in the shapes in the declaration, which means that data of any size can be feeded.
You need to complete the function declare_placeholders.
End of explanation
def build_layers(self, vocabulary_size, embedding_dim, n_hidden_rnn, n_tags):
Specifies bi-LSTM architecture and computes logits for inputs.
# Create embedding variable (tf.Variable).
initial_embedding_matrix = np.random.randn(vocabulary_size, embedding_dim) / np.sqrt(embedding_dim)
embedding_matrix_variable = ######### YOUR CODE HERE #############
# Create RNN cells (for example, tf.nn.rnn_cell.BasicLSTMCell) with n_hidden_rnn number of units
# and dropout (tf.nn.rnn_cell.DropoutWrapper), initializing all *_keep_prob with dropout placeholder.
forward_cell = ######### YOUR CODE HERE #############
backward_cell = ######### YOUR CODE HERE #############
# Look up embeddings for self.input_batch (tf.nn.embedding_lookup).
# Shape: [batch_size, sequence_len, embedding_dim].
embeddings = ######### YOUR CODE HERE #############
# Pass them through Bidirectional Dynamic RNN (tf.nn.bidirectional_dynamic_rnn).
# Shape: [batch_size, sequence_len, 2 * n_hidden_rnn].
(rnn_output_fw, rnn_output_bw), _ = ######### YOUR CODE HERE #############
rnn_output = tf.concat([rnn_output_fw, rnn_output_bw], axis=2)
# Dense layer on top.
# Shape: [batch_size, sequence_len, n_tags].
self.logits = tf.layers.dense(rnn_output, n_tags, activation=None)
BiLSTMModel.__build_layers = classmethod(build_layers)
Explanation: Now, let us specify the layers of the neural network. First, we need to perform some preparatory steps:
Create embeddings matrix with tf.Variable. Specify its name (embeddings_matrix), type (tf.float32), and initialize with random values.
Create forward and backward LSTM cells. TensorFlow provides a number of RNN cells ready for you. We suggest that you use BasicLSTMCell, but you can also experiment with other types, e.g. GRU cells. This blogpost could be interesting if you want to learn more about the differences.
Wrap your cells with DropoutWrapper. Dropout is an important regularization technique for neural networks. Specify all keep probabilities using the dropout placeholder that we created before.
After that, you can build the computation graph that transforms an input_batch:
Look up embeddings for an input_batch in the prepared embedding_matrix.
Pass the embeddings through Bidirectional Dynamic RNN with the specified forward and backward cells. Use the lengths placeholder here to avoid computations for padding tokens inside the RNN.
Create a dense layer on top. Its output will be used directly in loss function.
Fill in the code below. In case you need to debug something, the easiest way is to check that tensor shapes of each step match the expected ones.
End of explanation
def compute_predictions(self):
Transforms logits to probabilities and finds the most probable tags.
# Create softmax (tf.nn.softmax) function
softmax_output = ######### YOUR CODE HERE #############
# Use argmax (tf.argmax) to get the most probable tags
self.predictions = ######### YOUR CODE HERE #############
BiLSTMModel.__compute_predictions = classmethod(compute_predictions)
Explanation: To compute the actual predictions of the neural network, you need to apply softmax to the last layer and find the most probable tags with argmax.
End of explanation
def compute_loss(self, n_tags, PAD_index):
Computes masked cross-entopy loss with logits.
# Create cross entropy function function (tf.nn.softmax_cross_entropy_with_logits)
ground_truth_tags_one_hot = tf.one_hot(self.ground_truth_tags, n_tags)
loss_tensor = ######### YOUR CODE HERE #############
# Create loss function which doesn't operate with <PAD> tokens (tf.reduce_mean)
mask = tf.cast(tf.not_equal(loss_tensor, PAD_index), tf.float32)
self.loss = ######### YOUR CODE HERE #############
BiLSTMModel.__compute_loss = classmethod(compute_loss)
Explanation: During training we do not need predictions of the network, but we need a loss function. We will use cross-entropy loss, efficiently implemented in TF as
cross entropy with logits. Note that it should be applied to logits of the model (not to softmax probabilities!). Also note, that we do not want to take into account loss terms coming from <PAD> tokens. So we need to mask them out, before computing mean.
End of explanation
def perform_optimization(self):
Specifies the optimizer and train_op for the model.
# Create an optimizer (tf.train.AdamOptimizer)
self.optimizer = ######### YOUR CODE HERE #############
self.grads_and_vars = self.optimizer.compute_gradients(self.loss)
# Gradient clipping (tf.clip_by_norm) for self.grads_and_vars
clip_norm = 1.0
self.grads_and_vars = ######### YOUR CODE HERE #############
self.train_op = self.optimizer.apply_gradients(self.grads_and_vars)
BiLSTMModel.__perform_optimization = classmethod(perform_optimization)
Explanation: The last thing to specify is how we want to optimize the loss.
We suggest that you use Adam optimizer with a learning rate from the corresponding placeholder.
You will also need to apply clipping to eliminate exploding gradients. It can be easily done with clip_by_norm function.
End of explanation
def init_model(self, vocabulary_size, n_tags, embedding_dim, n_hidden_rnn, PAD_index):
self.__declare_placeholders()
self.__build_layers(vocabulary_size, embedding_dim, n_hidden_rnn, n_tags)
self.__compute_predictions()
self.__compute_loss(n_tags, PAD_index)
self.__perform_optimization()
BiLSTMModel.__init__ = classmethod(init_model)
Explanation: Congratulations! You have specified all the parts of your network. You may have noticed, that we didn't deal with any real data yet, so what you have written is just recipes on how the network should function.
Now we will put them to the constructor of our Bi-LSTM class to use it in the next section.
End of explanation
def train_on_batch(self, session, x_batch, y_batch, lengths, learning_rate, dropout_keep_probability):
feed_dict = {self.input_batch: x_batch,
self.ground_truth_tags: y_batch,
self.learning_rate_ph: learning_rate,
self.dropout_ph: dropout_keep_probability,
self.lengths: lengths}
session.run(self.train_op, feed_dict=feed_dict)
BiLSTMModel.train_on_batch = classmethod(train_on_batch)
Explanation: Train the network and predict tags
Session.run is a point which initiates computations in the graph that we have defined. To train the network, we need to compute self.train_op, which was declared in perform_optimization. To predict tags, we just need to compute self.predictions. Anyway, we need to feed actual data through the placeholders that we defined before.
End of explanation
def predict_for_batch(self, session, x_batch, lengths):
######################################
######### YOUR CODE HERE #############
######################################
return predictions
BiLSTMModel.predict_for_batch = classmethod(predict_for_batch)
Explanation: Implement the function predict_for_batch by initializing feed_dict with input x_batch and lengths and running the session for self.predictions.
End of explanation
from evaluation import precision_recall_f1
def predict_tags(model, session, token_idxs_batch, lengths):
Performs predictions and transforms indices to tokens and tags.
tag_idxs_batch = model.predict_for_batch(session, token_idxs_batch, lengths)
tags_batch, tokens_batch = [], []
for tag_idxs, token_idxs in zip(tag_idxs_batch, token_idxs_batch):
tags, tokens = [], []
for tag_idx, token_idx in zip(tag_idxs, token_idxs):
if token_idx != token2idx['<PAD>']:
tags.append(idx2tag[tag_idx])
tokens.append(idx2token[token_idx])
tags_batch.append(tags)
tokens_batch.append(tokens)
return tags_batch, tokens_batch
def eval_conll(model, session, tokens, tags, short_report=True):
Computes NER quality measures using CONLL shared task script.
y_true, y_pred = [], []
for x_batch, y_batch, lengths in batches_generator(1, tokens, tags):
tags_batch, tokens_batch = predict_tags(model, session, x_batch, lengths)
if len(x_batch[0]) != len(tags_batch[0]):
raise Exception("Incorrect length of prediction for the input, "
"expected length: %i, got: %i" % (len(x_batch[0]), len(tags_batch[0])))
ground_truth_tags = [idx2tag[tag_idx] for tag_idx in y_batch[0]]
# We extend every prediction and ground truth sequence with 'O' tag
# to indicate a possible end of entity.
y_true.extend(ground_truth_tags + ['O'])
y_pred.extend(tags_batch[0] + ['O'])
results = precision_recall_f1(y_true, y_pred, print_results=True, short_report=short_report)
return results
Explanation: We finished with necessary methods of our BiLSTMModel model and almost ready to start experimenting.
Evaluation
To simplify the evaluation process we provide two functions for you:
- predict_tags: uses a model to get predictions and transforms indices to tokens and tags;
- eval_conll: calculates precision, recall and F1 for the results.
End of explanation
tf.reset_default_graph()
model = ######### YOUR CODE HERE #############
batch_size = ######### YOUR CODE HERE #############
n_epochs = ######### YOUR CODE HERE #############
learning_rate = ######### YOUR CODE HERE #############
learning_rate_decay = ######### YOUR CODE HERE #############
dropout_keep_probability = ######### YOUR CODE HERE #############
Explanation: Run your experiment
Create BiLSTMModel model with the following parameters:
- vocabulary_size — number of tokens;
- n_tags — number of tags;
- embedding_dim — dimension of embeddings, recommended value: 200;
- n_hidden_rnn — size of hidden layers for RNN, recommended value: 200;
- PAD_index — an index of the padding token (<PAD>).
Set hyperparameters. You might want to start with the following recommended values:
- batch_size: 32;
- 4 epochs;
- starting value of learning_rate: 0.005
- learning_rate_decay: a square root of 2;
- dropout_keep_probability: try several values: 0.1, 0.5, 0.9.
However, feel free to conduct more experiments to tune hyperparameters and earn extra points for the assignment.
End of explanation
sess = tf.Session()
sess.run(tf.global_variables_initializer())
print('Start training... \n')
for epoch in range(n_epochs):
# For each epoch evaluate the model on train and validation data
print('-' * 20 + ' Epoch {} '.format(epoch+1) + 'of {} '.format(n_epochs) + '-' * 20)
print('Train data evaluation:')
eval_conll(model, sess, train_tokens, train_tags, short_report=True)
print('Validation data evaluation:')
eval_conll(model, sess, validation_tokens, validation_tags, short_report=True)
# Train the model
for x_batch, y_batch, lengths in batches_generator(batch_size, train_tokens, train_tags):
model.train_on_batch(sess, x_batch, y_batch, lengths, learning_rate, dropout_keep_probability)
# Decaying the learning rate
learning_rate = learning_rate / learning_rate_decay
print('...training finished.')
Explanation: Finally, we are ready to run the training!
End of explanation
print('-' * 20 + ' Train set quality: ' + '-' * 20)
train_results = eval_conll(model, sess, train_tokens, train_tags, short_report=False)
print('-' * 20 + ' Validation set quality: ' + '-' * 20)
validation_results = ######### YOUR CODE HERE #############
print('-' * 20 + ' Test set quality: ' + '-' * 20)
test_results = ######### YOUR CODE HERE #############
Explanation: Now let us see full quality reports for the final model on train, validation, and test sets. To give you a hint whether you have implemented everything correctly, you might expect F-score about 40% on the validation set.
The output of the cell below (as well as the output of all the other cells) should be present in the notebook for peer2peer review!
End of explanation |
2,388 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How to create Popups
Step1: Simple popups
You can define your popup at the feature creation, but you can also overwrite them afterwards
Step2: Vega Popup
You may know that it's possible to create awesome Vega charts with (or without) vincent. If you're willing to put one inside a popup, it's possible thanks to folium.Vega.
Step4: Fancy HTML popup
Now, you can put any HTML code inside of a Popup, thaks to the IFrame object.
Step5: Note that you can put another Figure into an IFrame ; this should let you do stange things... | Python Code:
import sys
sys.path.insert(0,'..')
import folium
print (folium.__file__)
print (folium.__version__)
Explanation: How to create Popups
End of explanation
m = folium.Map([45,0], zoom_start=4)
folium.Marker([45,-30], popup="inline implicit popup").add_to(m)
folium.CircleMarker([45,-10], radius=1e5, popup=folium.Popup("inline explicit Popup")).add_to(m)
ls = folium.PolyLine([[43,7],[43,13],[47,13],[47,7],[43,7]], color='red')
ls.add_children(folium.Popup("outline Popup on Polyline"))
ls.add_to(m)
gj = folium.GeoJson({ "type": "Polygon", "coordinates": [[[27,43],[33,43],[33,47],[27,47]]]})
gj.add_children(folium.Popup("outline Popup on GeoJSON"))
gj.add_to(m)
m
Explanation: Simple popups
You can define your popup at the feature creation, but you can also overwrite them afterwards:
End of explanation
import vincent, json
import numpy as np
scatter_points = {
'x' : np.random.uniform(size=(100,)),
'y' : np.random.uniform(size=(100,)),
}
# Let's create the vincent chart.
scatter_chart = vincent.Scatter(scatter_points,
iter_idx='x',
width=600,
height=300)
# Let's convert it to JSON.
scatter_json = scatter_chart.to_json()
# Let's convert it to dict.
scatter_dict = json.loads(scatter_json)
m = folium.Map([43,-100], zoom_start=4)
# Let's create a Vega popup based on scatter_chart.
popup = folium.Popup(max_width=800)
folium.Vega(scatter_chart, height=350, width=650).add_to(popup)
folium.Marker([30,-120], popup=popup).add_to(m)
# Let's create a Vega popup based on scatter_json.
popup = folium.Popup(max_width=800)
folium.Vega(scatter_json, height=350, width=650).add_to(popup)
folium.Marker([30,-100], popup=popup).add_to(m)
# Let's create a Vega popup based on scatter_dict.
popup = folium.Popup(max_width=800)
folium.Vega(scatter_dict, height=350, width=650).add_to(popup)
folium.Marker([30,-80], popup=popup).add_to(m)
m
Explanation: Vega Popup
You may know that it's possible to create awesome Vega charts with (or without) vincent. If you're willing to put one inside a popup, it's possible thanks to folium.Vega.
End of explanation
m = folium.Map([43,-100], zoom_start=4)
html=
<h1> This is a big popup</h1><br>
With a few lines of code...
<p>
<code>
from numpy import *<br>
exp(-2*pi)
</code>
</p>
iframe = folium.element.IFrame(html=html, width=500, height=300)
popup = folium.Popup(iframe, max_width=2650)
folium.Marker([30,-100], popup=popup).add_to(m)
m
Explanation: Fancy HTML popup
Now, you can put any HTML code inside of a Popup, thaks to the IFrame object.
End of explanation
# Let's create a Figure, with a map inside.
f = folium.element.Figure()
folium.Map([-25,150], zoom_start=3).add_to(f)
# Let's put the figure into an IFrame.
iframe = folium.element.IFrame(width=500, height=300)
f.add_to(iframe)
# Let's put the IFrame in a Popup
popup = folium.Popup(iframe, max_width=2650)
# Let's create another map.
m = folium.Map([43,-100], zoom_start=4)
# Let's put the Popup on a marker, in the second map.
folium.Marker([30,-100], popup=popup).add_to(m)
# We get a map in a Popup. Not really useful, but powerful.
m
Explanation: Note that you can put another Figure into an IFrame ; this should let you do stange things...
End of explanation |
2,389 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I'm using tensorflow 2.10.0. | Problem:
import tensorflow as tf
seed_x = 10
### return the tensor as variable 'result'
def g(seed_x):
tf.random.set_seed(seed_x)
return tf.random.uniform(shape=(10,), minval=1, maxval=5, dtype=tf.int32)
result = g(seed_x) |
2,390 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PanSTARRS - WISE crossmatch
Step1: Load the data
Load the catalogues
Step2: Coordinates
As we will use the coordinates to make a cross-match we to load them
Step3: Compute the ML parameters
Number of sources per magnitude in i-band
Step4: Number of sources per magnitude per unit area in the selected region (cumulative distribution). This sets the number of background sources. The units of this are N/(square arcsec) per magnitude.
Step5: Compute real(m) and q(m)
The first step is to crossmatch the catalogues to make an estimation
Step6: Estimated $Q_0$
Step7: Save the parameters | Python Code:
import numpy as np
from astropy.table import Table
from astropy import units as u
from astropy.coordinates import SkyCoord
import pickle
from mltier1 import get_center, get_n_m, estimate_q_m, Field
%pylab inline
field = Field(170.0, 190.0, 45.5, 56.5)
Explanation: PanSTARRS - WISE crossmatch: Pre-configure the ML parameters
In this step we will prepare the auxiliary variables used for the ML
End of explanation
panstarrs_full = Table.read("panstarrs_u2.fits")
wise_full = Table.read("wise_u2.fits")
panstarrs = field.filter_catalogue(
panstarrs_full,
colnames=("raMean", "decMean"))
# Free memory
del panstarrs_full
wise = field.filter_catalogue(
wise_full,
colnames=("raWise", "decWise"))
# Free memory
del wise_full
Explanation: Load the data
Load the catalogues
End of explanation
coords_panstarrs = SkyCoord(panstarrs['raMean'], panstarrs['decMean'], unit=(u.deg, u.deg), frame='icrs')
coords_wise = SkyCoord(wise['raWise'], wise['decWise'], unit=(u.deg, u.deg), frame='icrs')
Explanation: Coordinates
As we will use the coordinates to make a cross-match we to load them
End of explanation
bin_list = np.linspace(12., 30., 1801)
center = get_center(bin_list)
n_m = get_n_m(panstarrs["i"], bin_list, field.area)
Explanation: Compute the ML parameters
Number of sources per magnitude in i-band
End of explanation
plot(center, n_m);
Explanation: Number of sources per magnitude per unit area in the selected region (cumulative distribution). This sets the number of background sources. The units of this are N/(square arcsec) per magnitude.
End of explanation
radius = 5 # arcseconds
q_m = estimate_q_m(panstarrs["i"], bin_list, n_m, coords_wise, coords_panstarrs, radius=5)
plot(center, q_m);
Explanation: Compute real(m) and q(m)
The first step is to crossmatch the catalogues to make an estimation
End of explanation
q0 = 0.62
Explanation: Estimated $Q_0$
End of explanation
pickle.dump([bin_list, center, q0, n_m, q_m], open("pw_params.pckl", 'wb'))
Explanation: Save the parameters
End of explanation |
2,391 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Display a map E. coli central carbon metabolism.
Step1: Visualize reaction-centric and/or metabolite-centric data. | Python Code:
escher.Builder('e_coli_core.Core metabolism').display_in_notebook()
Explanation: Display a map E. coli central carbon metabolism.
End of explanation
escher.Builder('e_coli_core.Core metabolism', reaction_data={'PGK': 100}, metabolite_data={'ATP': 20}).display_in_notebook()
Explanation: Visualize reaction-centric and/or metabolite-centric data.
End of explanation |
2,392 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pochodna (różniczkowanie numeryczne)
Różnica dzielona w przód
Rozwińmy funkcję $f(x)$ w otoczeniu $h$ punktu $x$ w szereg Taylora
Step1: Numeryczne całkowanie
Całka to pole, więc obliczmy je w przybliżony sposób za pomocą prostokątów.
Przedział $[a,b]$ jest podzielony na $n$ podprzedziałów $$(a,x_1), (x_1,x_2), ..., (x_{n-1},b),$$ punkt $\xi_{j}$ zaś, dla $j = 1, 2, ..., n$ leży gdzieś wewnątrz $j$-tego odcinka. Tworzymy sumę
Step2: Funkcja pierwotna
$$ \int_{a}^{b} f(x)dx = F(b) - F(a) $$
$$F(x) = \int_{a}^{x} f(x')dx' $$
Pochodne funkcji wielu zmiennych
Step3: hill shading
Step4: Laplasjan
https | Python Code:
import numpy as np
x = np.linspace(0, 10, 10)
f = np.sin(x)
f1 = np.cos(x)
df = f[1:] - f[:-1]
dx = x[1:] - x[:-1]
x2 = (x[1:] + x[:-1])/2
fp = df/dx
fp.shape, x.shape
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(x2, np.cos(x2), 'o-')
plt.plot(x2, fp, 'ro-')
Explanation: Pochodna (różniczkowanie numeryczne)
Różnica dzielona w przód
Rozwińmy funkcję $f(x)$ w otoczeniu $h$ punktu $x$ w szereg Taylora:
$$ f(x+h) = f(x)+ f'(x)h +O(h^2)$$
Dzieląc przez $h$ otrzymujemy:
$$ \displaystyle \frac{f(x+h)-f(x) }{h}=f'(x) + O(h)$$
Np. różnica centralna:
$$ \displaystyle \frac{f(x)-f(x-h) }{2h}=f'(x) + O(h^2)$$
End of explanation
import numpy as np
x = np.linspace(0,np.pi,10)
f = np.sin(x)
w = np.ones_like(x)
w[0] = 0.5
w[-1] = 0.5
h = x[1]-x[0]
print(h*np.sum(w*f))
Explanation: Numeryczne całkowanie
Całka to pole, więc obliczmy je w przybliżony sposób za pomocą prostokątów.
Przedział $[a,b]$ jest podzielony na $n$ podprzedziałów $$(a,x_1), (x_1,x_2), ..., (x_{n-1},b),$$ punkt $\xi_{j}$ zaś, dla $j = 1, 2, ..., n$ leży gdzieś wewnątrz $j$-tego odcinka. Tworzymy sumę:
$$S_{n} = \sum_{j=1}^{n}f(\xi_{j})(x_{j} - x_{j-1}) = \sum_{j=1}^{n}f(\xi_{j}) \Delta x_{j}$$
przy czym $x_0 = a$, $x_n = b$ oraz $\Delta x_j = x_j - x_{j-1}$.
Geometrycznie suma ta, nazywana sumą Riemanna, jest sumą pól prostokątów zaznaczonych na rysunku poniżej. Jeżeli będziemy dzielić $[a,b]$ na coraz to więcej odcinków o coraz to mniejszych długościach, otrzymamy jako granicę sum Riemanna całkę Riemanna funkcji $f(x)$, oznaczaną przez:
$$ \int_{a}^{b} f(x)dx = \lim_{|L| \to 0} \sum_{j=1}^{n} f(\xi_{j}) \Delta x_{j} \quad (1) $$
gdzie przez $|L|$ oznaczyliśmy długość największego z podprzedziałów.
Kwadratury
Metody numetyczne obliczania całek sprowadzają sie do obliczania kwadratur - tj. sumy ważonej wartości funkcji w punktach:
$$ \int_a^b f(x)dx = \sum_{i=1}^N w_i f_i,$$
gdzie $f_i = f(x_i)$.
W metodzie trapezów link mamy wagi $w_i=1$ dla wszystkich punków z wyjątkiem: $w_1=w_{N}=\frac{1}{2}$.
Przykład:
$$\int_0^\pi \sin(x)dx = 2$$
End of explanation
import numpy as np
x = np.linspace(-2,1,40)
y = np.linspace(-2,3,34)
X,Y = np.meshgrid(x,y)
%matplotlib notebook
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
import sympy
from sympy.abc import x,y
sympy.init_printing(use_latex='mathjax')
from IPython.display import display
f_symb = -sympy.exp(-(x**2+2*y**2))
display(f_symb)
F = sympy.lambdify((x,y),f_symb,np)
Fx = sympy.lambdify((x,y),f_symb.diff(x),np)
Fy = sympy.lambdify((x,y),f_symb.diff(y),np)
fig = plt.figure()
ax = fig.gca(projection='3d')
#ax.plot_wireframe(X,Y,F)
ax.plot_surface(X, Y, F(X,Y), cmap=cm.coolwarm,rstride=1,cstride=1)
#ax.quiver3D(X, Y, Fx, Fy) #, cmap=cm.coolwarm,rstride=1,cstride=1)
ax.set_xlabel('X')
ax.set_ylabel('Y')
x0,y0 = -1.5,-.5
h = 0.2
for i in range(30):
x0 += -h * Fx(x0,y0)
y0 += -h * Fy(x0,y0)
#ax.scatter3D(x0,y0,F(x0,y0),s=240,c='g',marker='o')
ax.plot([x0,x0],[y0,y0],[-1,F(x0,y0)],c='r')
plt.figure()
plt.contourf(X,Y,F(X,Y))
plt.quiver(X,Y,-Fx(X,Y),-Fy(X,Y))
plt
x0,y0 = -1.5,-.5
h = 0.1
for i in range(100):
x0 += -h * Fx(x0,y0)
y0 += -h * Fy(x0,y0)
plt.plot(x0,y0,'go')
Explanation: Funkcja pierwotna
$$ \int_{a}^{b} f(x)dx = F(b) - F(a) $$
$$F(x) = \int_{a}^{x} f(x')dx' $$
Pochodne funkcji wielu zmiennych
End of explanation
x_ = np.linspace(-5,5,150)
y_ = np.linspace(-5,5,154)
X,Y = np.meshgrid(x_,y_)
f_symb = sympy.sin(x**2/4+y**2)
expr = sympy.diff(f_symb,x)
display(f_symb)
F = sympy.lambdify((x,y),f_symb,np)
Fx = sympy.lambdify((x,y),f_symb.diff(x),np)
Fy = sympy.lambdify((x,y),f_symb.diff(y),np)
plt.figure()
plt.imshow(F(X,Y),alpha=.6,cmap='jet')
plt.imshow(np.diff(F(X,Y),axis=0),alpha=.4,cmap='gray')
Explanation: hill shading
End of explanation
import sympy
from sympy.abc import x,y
sympy.init_printing(use_latex='mathjax')
from IPython.display import display
f_symb = sympy.sin(x**2+y**2)
lap_f = f_symb.diff(x,2) + f_symb.diff(y,2)
display(lap_f.simplify())
Explanation: Laplasjan
https://en.wikipedia.org/wiki/Discrete_Laplace_operator
End of explanation |
2,393 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exploring Review Data
let's look at some typical reviews, how many files there are, what the distributions of review lengths and hours played are, etc.
Step1: Football Manager 2015 Stats
Step2: Football Manager 2015
Step3: Football Manager 2015 | Python Code:
import os
import sys
from json import loads
from collections import Counter
import numpy as np
import pandas as pd
# So, let's take a look at how many lines are in our reviews files
# Note: I just started processing GTAV after ending all the other processes
os.chdir('..')
! wc -l data/*.jsonlines
Explanation: Exploring Review Data
let's look at some typical reviews, how many files there are, what the distributions of review lengths and hours played are, etc.
End of explanation
# So, let's first get all of the reviews for the game with the smallest amount of
# review data, i.e., Football Manager 2015 (not including GTAV)
reviews = [loads(review)['review'] for review in open('data/Football_Manager_2015.jsonlines')]
# First of all, how many reviews are there?
print('number of reviews: {}'.format(len(reviews)))
# Here's a couple reviews from the beginning of the file
reviews[:3]
Explanation: Football Manager 2015 Stats
End of explanation
# Let's measure the lengths of each review using a "list comprehension"
lengths = [len(review) for review in reviews]
# Let's print out the first 10 lengths, just to see what we're working with
lengths[:10]
# Compute the average length value
avg_len = sum(lengths)/len(lengths)
print('average length: {}'.format(avg_len))
min_len = min(lengths)
print('minimum review length = {}'.format(min_len))
max_len = max(lengths)
print('maximum review length = {}'.format(max_len))
_lengths = np.array(lengths)
std = _lengths.std()
_lengths.size
bins = std/10.0
_rounded_lengths = np.array([np.ceil(l/bins)*bins for l in lengths])
_rounded_lengths
# Let's make bins of size 300
rounded_lengths = [np.ceil(l/60)*60 for l in lengths]
print('original lengths (first 10): {}\nrounded lengths (to the nearest 300): {}'
.format(lengths[:10], rounded_lengths[:10]))
# Now, let's make a frequency distribution with the collections.Counter module
rounded_length_fdist = Counter(_rounded_lengths)
rounded_length_fdist
# It is obvious from looking at the freq. dist. above that length drops off a cliff
# after about 1000 characters
# In fact, for length up to 900, almost 1,050 reviews are accounted for, which means
# that only a little over 100 reviews are thinly-distributed over the remaining area
# above 1,000 characters
# Usually, when you use Pandas, you're using a dataframe, but a dataframe, as I understand
# it, it just made up on a set of "Series"
# Let's make a Series from our rounded lengths and then call its value_counts() method to
# get exactly what collections.Counter was doing (but we'll be able to use it to make a
# nice plot)
rounded_length_series = pd.Series(_rounded_lengths)
rounded_length_series.value_counts()
# The table above is nice, but let's do better and try to get a histogram
# Don't worry about all this importing stuff, it's just from something I read in a blog
# post
# Actually, go and check out the blog post here:
# http://nbviewer.ipython.org/github/mwaskom/seaborn/blob/master/examples/plotting_distributions.ipynb
# It's in, you guessed it, an IPython notebook! Shows what you can do with
# matplotlib.
%matplotlib inline
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
sns.set_palette("deep", desat=.6)
sns.set_context(rc={"figure.figsize": (8, 4)})
fig = plt.figure()
ax = fig.add_subplot(111)
ax.hist(pd.Series(lengths), label='Game')
ax.set_xlabel('Review length (in characters)')
ax.set_ylabel('Total reviews')
ax.set_title('Histogram')
fig.savefig('sample_fig')
# That is one nice-looking histogram!
# From it, we can really see just how few reviews there are past 1,000
# For this item, we could probably set the cap at 1,200, let's say
# What do you think?
# If 90% of the reviews have length <= 1200 of the reviews, but not 900,
# let's set MAXLEN to 1200
len([r for r in reviews if len(r) <= 1200])/len(reviews) >= 0.9
Explanation: Football Manager 2015: Review Length Distribution
End of explanation
# Let's do a similar kind of thing for the hours values
hours = [loads(review)['total_game_hours'] for review in open('data/Football_Manager_2015.jsonlines')]
hours[:10]
print("min: {}\nmax: {}".format(min(hours), max(hours)))
rounded_hours = [np.ceil(h/200)*200 for h in hours]
rounded_hours[:10]
# Let's use pandas again instead of collections.Counter
rounded_hours_series = pd.Series(rounded_hours)
rounded_hours_series.value_counts()
# Hmm, well, would you look at that! The distribution looks exactly the same
# as that for length!
# Let's plot it!
plt.hist(rounded_hours_series)
# If 90% of the reviews have hours values <= 800 of the reviews, but not 600,
# let's set MAXHOURS to 800
len([r for r in hours if float(r) <= 400])/len(hours) >= 0.9
Explanation: Football Manager 2015: Hours Distribution
End of explanation |
2,394 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using the OpenAQ API
The openaq api is an easy-to-use wrapper built around the OpenAQ Api. Complete API documentation can be found on their website.
There are no keys or rate limits (as of March 2017), so working with the API is straight forward. If building a website or app, you may want to just use the python wrapper and interact with the data in json format. However, the rest of this tutorial will assume you are interested in analyzing the data. To get more out of it, I recommend installing seaborn for manipulating the asthetics of plots, and working with data as DataFrames using pandas. For more information on these, check out the installation section of this documentation.
From this point forward, I assume you have at least a basic knowledge of python and matplotlib. This documentation was built using the following versions of all packages
Step1: OpenAQ API
The OpenAQ APi has only eight endpoints that we are interested in
Step2: Cities
The cities API endpoint lists the cities available within the platform. Results can be subselected by country and paginated to retrieve all results in the database. Let's start by performing a basic query with an increased limit (so we can get all of them) and return it as a DataFrame
Step3: So we retrieved 1400+ entries from the database. We can then take a look at them
Step4: Let's try to find out which ones are in India
Step5: Great! For the rest of the tutorial, we are going to focus on Delhi, India. Why? Well..because there are over 500,000 data points and my personal research is primarily in India. We will also take a look at some $SO_2$ data from Hawai'i later on (another great research locale).
Countries
Similar to the cities endpoint, the countries endpoint lists the countries available. The only parameters we have to play with are the limit and page number. If we want to grab them all, we can just up the limit to the maximum (10000).
Step6: Fetches
If you are interested in getting information pertaining to the individual data fetch operations, go ahead and use this endpoint. Most people won't need to use this. This API method does not allow the df parameter; if you would like it to be added, drop me a message.
Otherwise, here is how you can access the json-formatted data
Step7: Parameters
The parameters endpoint will provide a listing off all the parameters available
Step8: Sources
The sources endpoint will provide a list of the sources where the raw data came from.
Step9: Locations
The locations endpoint will return the list of measurement locations and their meta data. We can do quite a bit of querying with this one
Step10: What if we just want to grab the locations in Delhi?
Step11: What about just figuring out which locations in Delhi have $PM_{2.5}$ data?
Step12: Latest
Grab the latest data from a location or locations.
What was the most recent $PM_{2.5}$ data in Delhi?
Step13: What about the most recent $SO_2$ data in Hawii?
Step14: Measurements
Finally, the endpoint we've all been waiting for! Measurements allows you to grab all of the dataz! You can query on a whole bunhc of parameters listed in the API documentation. Let's dive in
Step15: Clearly, we should be doing some serious data cleaning ;) Why don't we go ahead and plot all of these locations on a figure.
Step16: Don't worry too much about how ugly and uninteresting the plot above is...we'll take care of that in the next tutorial! Let's go ahead and look at the distribution of $PM_{2.5}$ values seen in Delhi by various sensors. This is the same data as above, but viewed in a different way.
Step17: If we remember from above, there was at least one location where many parameters were measured. Let's go ahead and look at that location and see if there is any correlation among parameters!
Step18: For kicks, let's go ahead and look at a timeseries of $SO_2$ data in Hawai'i. Quiz | Python Code:
import pandas as pd
import seaborn as sns
import matplotlib as mpl
import matplotlib.pyplot as plt
import openaq
import warnings
warnings.simplefilter('ignore')
%matplotlib inline
# Set major seaborn asthetics
sns.set("notebook", style='ticks', font_scale=1.0)
# Increase the quality of inline plots
mpl.rcParams['figure.dpi']= 500
print ("pandas v{}".format(pd.__version__))
print ("matplotlib v{}".format(mpl.__version__))
print ("seaborn v{}".format(sns.__version__))
print ("openaq v{}".format(openaq.__version__))
Explanation: Using the OpenAQ API
The openaq api is an easy-to-use wrapper built around the OpenAQ Api. Complete API documentation can be found on their website.
There are no keys or rate limits (as of March 2017), so working with the API is straight forward. If building a website or app, you may want to just use the python wrapper and interact with the data in json format. However, the rest of this tutorial will assume you are interested in analyzing the data. To get more out of it, I recommend installing seaborn for manipulating the asthetics of plots, and working with data as DataFrames using pandas. For more information on these, check out the installation section of this documentation.
From this point forward, I assume you have at least a basic knowledge of python and matplotlib. This documentation was built using the following versions of all packages:
End of explanation
api = openaq.OpenAQ()
Explanation: OpenAQ API
The OpenAQ APi has only eight endpoints that we are interested in:
cities: provides a simple listing of cities within the platforms
countries: provides a simple listing of countries within the platform
fetches: providing data about individual fetch operations that are used to populate data in the platform
latest: provides the latest value of each available parameter for every location in the system
locations: provides a list of measurement locations and their meta data
measurements: provides data about individual measurements
parameters: provides a simple listing of parameters within the platform
sources: provides a list of data sources
For detailed documentation about each one in the context of this API wrapper, please check out the API documentation.
Your First Request
Real quick, let's go ahead and initiate an instance of the openaq.OpenAQ class so we can begin looking at data:
End of explanation
resp = api.cities(df=True, limit=10000)
# display the first 10 rows
resp.info()
Explanation: Cities
The cities API endpoint lists the cities available within the platform. Results can be subselected by country and paginated to retrieve all results in the database. Let's start by performing a basic query with an increased limit (so we can get all of them) and return it as a DataFrame:
End of explanation
print (resp.head(10))
Explanation: So we retrieved 1400+ entries from the database. We can then take a look at them:
End of explanation
print (resp.query("country == 'IN'"))
Explanation: Let's try to find out which ones are in India:
End of explanation
res = api.countries(limit=10000, df=True)
print (res.head())
Explanation: Great! For the rest of the tutorial, we are going to focus on Delhi, India. Why? Well..because there are over 500,000 data points and my personal research is primarily in India. We will also take a look at some $SO_2$ data from Hawai'i later on (another great research locale).
Countries
Similar to the cities endpoint, the countries endpoint lists the countries available. The only parameters we have to play with are the limit and page number. If we want to grab them all, we can just up the limit to the maximum (10000).
End of explanation
status, resp = api.fetches(limit=1)
# Print out the meta info
resp['meta']
Explanation: Fetches
If you are interested in getting information pertaining to the individual data fetch operations, go ahead and use this endpoint. Most people won't need to use this. This API method does not allow the df parameter; if you would like it to be added, drop me a message.
Otherwise, here is how you can access the json-formatted data:
End of explanation
res = api.parameters(df=True)
print (res)
Explanation: Parameters
The parameters endpoint will provide a listing off all the parameters available:
End of explanation
res = api.sources(df=True)
# Print out the first one
res.ix[0]
Explanation: Sources
The sources endpoint will provide a list of the sources where the raw data came from.
End of explanation
res = api.locations(df=True)
res.info()
# print out the first one
res.ix[0]
Explanation: Locations
The locations endpoint will return the list of measurement locations and their meta data. We can do quite a bit of querying with this one:
Let's see what the data looks like:
End of explanation
res = api.locations(city='Delhi', df=True)
res.ix[0]
Explanation: What if we just want to grab the locations in Delhi?
End of explanation
res = api.locations(city='Delhi', parameter='pm25', df=True)
res.ix[0]
Explanation: What about just figuring out which locations in Delhi have $PM_{2.5}$ data?
End of explanation
res = api.latest(city='Delhi', parameter='pm25', df=True)
res.head()
Explanation: Latest
Grab the latest data from a location or locations.
What was the most recent $PM_{2.5}$ data in Delhi?
End of explanation
res = api.latest(city='Hilo', parameter='so2', df=True)
res
Explanation: What about the most recent $SO_2$ data in Hawii?
End of explanation
res = api.measurements(city='Delhi', parameter='pm25', limit=10000, df=True)
# Print out the statistics on a per-location basiss
res.groupby(['location'])['value'].describe()
Explanation: Measurements
Finally, the endpoint we've all been waiting for! Measurements allows you to grab all of the dataz! You can query on a whole bunhc of parameters listed in the API documentation. Let's dive in:
Let's grab the past 10000 data points for $PM_{2.5}$ in Delhi:
End of explanation
fig, ax = plt.subplots(1, figsize=(10, 6))
for group, df in res.groupby('location'):
# Query the data to only get positive values and resample to hourly
_df = df.query("value >= 0.0").resample('1h').mean()
_df.value.plot(ax=ax, label=group)
ax.legend(loc='best')
ax.set_ylabel("$PM_{2.5}$ [$\mu g m^{-3}$]", fontsize=20)
ax.set_xlabel("")
sns.despine(offset=5)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.show()
Explanation: Clearly, we should be doing some serious data cleaning ;) Why don't we go ahead and plot all of these locations on a figure.
End of explanation
fig, ax = plt.subplots(1, figsize=(14,7))
ax = sns.boxplot(
x='location',
y='value',
data=res.query("value >= 0.0"),
fliersize=0,
palette='deep',
ax=ax)
ax.set_ylim([0, 750])
ax.set_ylabel("$PM_{2.5}\;[\mu gm^{-3}]$", fontsize=18)
ax.set_xlabel("")
sns.despine(offset=10)
plt.xticks(rotation=90)
plt.show()
Explanation: Don't worry too much about how ugly and uninteresting the plot above is...we'll take care of that in the next tutorial! Let's go ahead and look at the distribution of $PM_{2.5}$ values seen in Delhi by various sensors. This is the same data as above, but viewed in a different way.
End of explanation
res = api.measurements(city='Delhi', location='Anand Vihar', limit=1000, df=True)
# Which params do we have?
res.parameter.unique()
df = pd.DataFrame()
for u in res.parameter.unique():
_df = res[res['parameter'] == u][['value']]
_df.columns = [u]
# Merge the dataframes together
df = pd.merge(df, _df, left_index=True, right_index=True, how='outer')
# Get rid of rows where not all exist
df.dropna(how='any', inplace=True)
g = sns.PairGrid(df, diag_sharey=False)
g.map_lower(sns.kdeplot, cmap='Blues_d')
g.map_upper(plt.scatter)
g.map_diag(sns.kdeplot, lw=3)
plt.show()
Explanation: If we remember from above, there was at least one location where many parameters were measured. Let's go ahead and look at that location and see if there is any correlation among parameters!
End of explanation
res = api.measurements(city='Hilo', parameter='so2', limit=10000, df=True)
# Print out the statistics on a per-location basiss
res.groupby(['location'])['value'].describe()
fig, ax = plt.subplots(1, figsize=(10, 5))
for group, df in res.groupby('location'):
# Query the data to only get positive values and resample to hourly
_df = df.query("value >= 0.0").resample('6h').mean()
# Convert from ppm to ppb
_df['value'] *= 1e3
# Multiply the value by 1000 to get from ppm to ppb
_df.value.plot(ax=ax, label=group)
ax.legend(loc='best')
ax.set_ylabel("$SO_2 \; [ppb]$", fontsize=18)
ax.set_xlabel("")
sns.despine(offset=5)
plt.show()
Explanation: For kicks, let's go ahead and look at a timeseries of $SO_2$ data in Hawai'i. Quiz: What do you expect? Did you know that Hawai'i has a huge $SO_2$ problem?
End of explanation |
2,395 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
练习 1:写程序,可由键盘读入用户姓名例如Mr. right,让用户输入出生的月份与日期,判断用户星座,假设用户是金牛座,则输出,Mr. right,你是非常有性格的金牛座!。
Step1: 练习 2:写程序,可由键盘读入两个整数m与n(n不等于0),询问用户意图,如果要求和则计算从m到n的和输出,如果要乘积则计算从m到n的积并输出,如果要求余数则计算m除以n的余数的值并输出,否则则计算m整除n的值并输出。
Step2: 练习 3:写程序,能够根据北京雾霾PM2.5数值给出对应的防护建议。如当PM2.5数值大于500,则应该打开空气净化器,戴防雾霾口罩等。
Step3: 尝试性练习:写程序,能够在屏幕上显示空行。
Step4: 练习 4:英文单词单数转复数,要求输入一个英文动词(单数形式),能够得到其复数形式,或给出单数转复数形式的建议 | Python Code:
name = input('请输入你的姓名')
print('你好',name)
print('请输入出生的月份与日期')
month = int(input('月份:'))
date = int(input('日期:'))
if month == 4:
if date < 20:
print(name, '你是白羊座')
else:
print(name,'你是非常有性格的金牛座')
if month == 5:
if date < 21:
print(name, '你是非常有性格的金牛座')
else:
print(name,'你是双子座')
if month == 6:
if date < 22:
print(name, '你是双子座')
else:
print(name,'你是巨蟹座')
if month == 7:
if date < 23:
print(name, '你是巨蟹座')
else:
print(name,'你是狮子座')
if month == 8:
if date < 23:
print(name, '你是狮子座')
else:
print(name,'你是处女座')
if month == 9:
if date < 24:
print(name, '你是处女座')
else:
print(name,'你是天秤座')
if month == 10:
if date < 24:
print(name, '你是天秤座')
else:
print(name,'你是天蝎座')
if month == 11:
if date < 23:
print(name, '你是天蝎座')
else:
print(name,'你是射手座')
if month == 12:
if date < 22:
print(name, '你是射手座')
else:
print(name,'你是摩羯座')
if month == 1:
if date < 20:
print(name, '你是摩羯座')
else:
print(name,'你是水瓶座')
if month == 2:
if date < 19:
print(name, '你是水瓶座')
else:
print(name,'你是双鱼座')
if month == 3:
if date < 22:
print(name, '你是双鱼座')
else:
print(name,'你是白羊座')
Explanation: 练习 1:写程序,可由键盘读入用户姓名例如Mr. right,让用户输入出生的月份与日期,判断用户星座,假设用户是金牛座,则输出,Mr. right,你是非常有性格的金牛座!。
End of explanation
m = int(input('请输入一个整数,回车结束'))
n = int(input('请输入一个整数,不为零'))
intend = input('请输入计算意图,如 + * %')
if m<n:
min_number = m
else:
min_number = n
total = min_number
if intend == '+':
if m<n:
while m<n:
m = m + 1
total = total + m
print(total)
else:
while m > n:
n = n + 1
total = total + n
print(total)
elif intend == '*':
if m<n:
while m<n:
m = m + 1
total = total * m
print(total)
else:
while m > n:
n = n + 1
total = total * n
print(total)
elif intend == '%':
print(m % n)
else:
print(m // n)
Explanation: 练习 2:写程序,可由键盘读入两个整数m与n(n不等于0),询问用户意图,如果要求和则计算从m到n的和输出,如果要乘积则计算从m到n的积并输出,如果要求余数则计算m除以n的余数的值并输出,否则则计算m整除n的值并输出。
End of explanation
number = int(input('现在北京的PM2.5指数是多少?请输入整数'))
if number > 500:
print('应该打开空气净化器,戴防雾霾口罩')
elif 300 < number < 500:
print('尽量呆在室内不出门,出门佩戴防雾霾口罩')
elif 200 < number < 300:
print('尽量不要进行户外活动')
elif 100 < number < 200:
print('轻度污染,可进行户外活动,可不佩戴口罩')
else:
print('无须特别注意')
Explanation: 练习 3:写程序,能够根据北京雾霾PM2.5数值给出对应的防护建议。如当PM2.5数值大于500,则应该打开空气净化器,戴防雾霾口罩等。
End of explanation
print('空行是我')
print('空行是我')
print('空行是我')
print( )
print('我是空行')
Explanation: 尝试性练习:写程序,能够在屏幕上显示空行。
End of explanation
word = input('请输入一个单词,回车结束')
if word.endswith('s') or word.endswith('sh') or word.endswith('ch') or word.endswith('x'):
print(word,'es',sep = '')
elif word.endswith('y'):
if word.endswith('ay') or word.endswith('ey') or word.endswith('iy') or word.endswith('oy') or word.endswith('uy'):
print(word,'s',sep = '')
else:
word = word[:-1]
print(word,'ies',sep = '')
elif word.endswith('f'):
word = word[:-1]
print(word,'ves',sep = '')
elif word.endswith('fe'):
word = word[:-2]
print(word,'ves',sep = '')
elif word.endswith('o'):
print('词尾加s或者es')
else:
print(word,'s',sep = '')
Explanation: 练习 4:英文单词单数转复数,要求输入一个英文动词(单数形式),能够得到其复数形式,或给出单数转复数形式的建议
End of explanation |
2,396 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: I am really enjoying having this weather station. I say weather
station, but it is just a raspberry pi with a pressure and temperature
sensor attached to it.
Computers are versatile, you can run any software on them, so they can
do a lot of different things.
But the pi takes this to a whole different level. They are $40 or so,
depending on taxes and stuff. Now you need a whole bunch of other
stuff
Step3: Temperature
The temperature plot shows a steady up and down, warming during the
days, cooling a little, but only 2C at night.
There is one day, where I think we had some thunder storms where it dropped during the day.
The last week or so the temperature has been steadily climbing.
Pressure
The Pressure also shows slow drifting up and down. But there is this
other strange ripple up and down.
I mentioned this to a meteorologist and immediately got the reply that
it was because the atmosphere is tidal.
So the pattern we see in the pressure should be driven largely by the moon.
Step4: The latitude is almost in phase with the phase of the moon, at least at the moment.
Next job is to add these series to our data frame and then take a look at scikit-learn | Python Code:
# Tell matplotlib to plot in line
%matplotlib inline
# import pandas
import pandas
# seaborn magically adds a layer of goodness on top of Matplotlib
# mostly this is just changing matplotlib defaults, but it does also
# provide some higher level plotting methods.
import seaborn
# Tell seaborn to set things up
seaborn.set()
def smooth(data, thresh=None):
means = data.mean()
if thresh is None:
sds = data.std()
else:
sds = thresh
delta = data - data.shift()
good = delta[abs(delta) < sds]
print(good.describe())
return delta.where(good, 0.0)
# set the path to the file we are going to analyse
infile = "../files/light.csv"
!scp 192.168.0.127:Adafruit_Python_BMP/light.csv ../files
assume it is csv and let pandas do magic
index_col tells it to use the 'date' column in the data
as the row index, plotting picks up on this and uses the
date on the x-axis
The *parse_dates* bit just tells it to try and figure out
the date/time in the columne labeled 'date'.
data = pandas.read_csv(infile, index_col='date', parse_dates=['date'])
# incantation to extract the first record in the data
start = data[['temp', 'altitude']].irow(0)
# smooth the data to filter out bad temps and pressures
sdata = (smooth(data, 5.0).cumsum() + start)
# now use smooth to throw away dodgy data, and plot the temp and altitude fieldsa
sdata[['temp', 'altitude']].plot(subplots=True)
Explanation: I am really enjoying having this weather station. I say weather
station, but it is just a raspberry pi with a pressure and temperature
sensor attached to it.
Computers are versatile, you can run any software on them, so they can
do a lot of different things.
But the pi takes this to a whole different level. They are $40 or so,
depending on taxes and stuff. Now you need a whole bunch of other
stuff: leads, keyboards, sensors, cameras, touch screens and lots more.
I have been using Adafruit. They have been very helpful with orders.
I found browsing for parts and finding what I needed to get started
with the weather stuff a bit confusing for a while.
I had other stuff to do anyway, I needed to get used to just
installing software on them and setting up environments where I can
figure out how things are working.
I found a great posting about building a weather station, with
enough detail I thought I would be able to work through it.
I order a cheap humidity sensor ($5) that also should do temperature.
I haven't got it working yet, not sure if it is hardware or software.
Meanwhile, I now have a better sensor. The humidity here is often
outside the range the cheaper sensor is supposed to work.
The good thing is that I should be able to find a use for it at some point.
I also have a camera for this thing and a touchscreen. I am thinking
of trying to combine them to make a camera.
The one on my phone can do some neat stuff, but the interface keeps
changing and not in ways that are making things easier.
The night skies have been spectacular of late, with Venus and Jupiter
close together in the early evening sky to the west. If you see two
bright stars after sunset, that is probably them.
Then the moon is just past full. Here in Bermuda there is often
cloud, not heavy, but patchy clouds. The humidity is often very high,
so the atmosphere is interesting.
The best sunsets usually have some clouds for the setting sun to
reflect off. The same with sunrises.
And the full moon too. Tonight it rose behind cloud. Last night it
was clearer and it rose orange/red.
Back to weather
Since I got this thing working it has been hot and settled weather.
There is a strong Bermuda high in place. Someone described it to me
as like a pit-bull, once it gets hold it does not let go. So, we may
be in for a long hot spell.
With the pressure changing quite smoothly, the plots I created still
showed quite an interesting pattern.
Skip the next bit, or go back to the earlier post, it is just
setting things up to do some plotting.
End of explanation
import astropy
from astropy import units
from astropy import find_api_page
# find_api_page is handy, even opens a browser windo.
#find_api_page(units)
# astropy is cool, but I need data for the moon. Let's try pyephem
# uncommented the line below and run this cell if you need to install using
# pip. This will install into the environment that is being used to run your
# ipython notebook server.
#!pip install pyephem
import ephem
# Create a Moon
moon = ephem.Moon()
# Tell it to figure out where it is
moon.compute()
# pring out the phase
moon.phase
def moon_orbitals(index):
Given an index of times create a DataFrame of moon orbitals
For now, just Phase, geocentric latitude and geocentric longitude
# Create dataframe with index as the index
df = pandas.DataFrame(index=index)
# Add three series
df['phase'] = pandas.Series()
df['glat'] = pandas.Series()
df['glon'] = pandas.Series()
# Now generate the data
# NB this is slow, solpy might work out faster
moon = ephem.Moon()
for ix, timestamp in enumerate(index):
# Compute the moon posigion
moon.compute(timestamp.strftime("%Y/%m/%d %H:%M:%S"))
df.phase[ix] = moon.phase
df.glat[ix] = moon.hlat
df.glon[ix] = moon.hlon
return df
# See what we got
moon = moon_orbitals(data.index)
moon.describe()
moon.plot(subplots=True)
# Try feeding in a longer time series
days = pandas.date_range('7/7/2015', periods=560, freq='D')
moon_orbitals(days).plot(subplots=True)
sdata['moon_phase'] = moon.phase
sdata['moon_glat'] = moon.glat
sdata['moon_glon'] = moon.glon
# FIXME -- must be a pandas one liner eg data += moon ?
sdata[['temp', 'altitude', 'moon_phase', 'moon_glon', 'moon_glat']].plot(subplots=True)
Explanation: Temperature
The temperature plot shows a steady up and down, warming during the
days, cooling a little, but only 2C at night.
There is one day, where I think we had some thunder storms where it dropped during the day.
The last week or so the temperature has been steadily climbing.
Pressure
The Pressure also shows slow drifting up and down. But there is this
other strange ripple up and down.
I mentioned this to a meteorologist and immediately got the reply that
it was because the atmosphere is tidal.
So the pattern we see in the pressure should be driven largely by the moon.
End of explanation
print(sdata.index[0])
sdata.index[0].hour + (sdata.index[0].minute / 60.)
sdata.describe()
def tide_proxy(df):
# Create dataframe with index as the index
series = pandas.Series(index=df.index)
for ix, timestamp in enumerate(df.index):
hour_min = timestamp.hour + (timestamp.minute / 60.)
hour_min += df.moon_glat[ix]
series[ix] = hour_min
return series
xx = tide_proxy(sdata)
xx.plot()
xx.describe()
# See what we got
moon = moon_orbitals(data.index)
moon.describe()
sdata['tide'] = tide_proxy(sdata)
fields = ['altitude', 'temp', 'tide']
sdata[fields].plot()
sdata.temp.plot()
with open("../files/moon_weather.csv", 'w') as outfile:
sdata.to_csv(outfile)
Explanation: The latitude is almost in phase with the phase of the moon, at least at the moment.
Next job is to add these series to our data frame and then take a look at scikit-learn
End of explanation |
2,397 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced
Step1: We'll then start with the bundle from the end of the emcee tutorial. If you're running this notebook locally, you will need to run that first to create the emcee_advanced_tutorials.bundle file that we will use here.
Step2: Defining the custom cost function
As is described in the b.run_solver API docs, a custom function can be passed which overrides the internal default cost function. This function must accept b, model, lnpriors, priors, priors_combine as arguments and return the lnprobability (cost function). The arguments are as follows
Step3: run_solver
In order to swap out the default cost function with our custom cost function, we must pass the function itself to custom_lnprobability_callable when calling b.run_solver | Python Code:
#!pip install -I "phoebe>=2.3,<2.4"
import phoebe
from phoebe import u # units
import numpy as np
logger = phoebe.logger('error')
Explanation: Advanced: Custom Cost Funtion (with emcee)
IMPORTANT: this tutorial assumes basic knowledge (and uses a file resulting from) the emcee tutorial, although the custom cost function itself can be used for any optimizer or sampler.
NOTE: several bugs related to custom cost functions were fixed in version 2.3.40.
Setup
Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
b = phoebe.load('emcee_advanced_tutorials.bundle')
Explanation: We'll then start with the bundle from the end of the emcee tutorial. If you're running this notebook locally, you will need to run that first to create the emcee_advanced_tutorials.bundle file that we will use here.
End of explanation
def default_lnprob(b, model, lnpriors, priors, priors_combine):
print("* calling default_lnprob")
return lnpriors + b.calculate_lnlikelihood(model=model)
Explanation: Defining the custom cost function
As is described in the b.run_solver API docs, a custom function can be passed which overrides the internal default cost function. This function must accept b, model, lnpriors, priors, priors_combine as arguments and return the lnprobability (cost function). The arguments are as follows:
* b: the bundle with the current face-values for this forward model
* model: the name of the forward model in b
* lnpriors: the pre-computed value of the log-priors by passing priors and priors_combine to b.calculate_lnp
* priors: the name(s) of the prior distributions
* priors_combine: the choice for how to combine priors if priors includes more than one distribution for any single parameter.
If a custom function is not passed, the default cost function is the sum of the lnlikelihood (from b.calculate_lnlikelihood) and the probability of drawing the current face-values from the passed priors.
Let's reproduce this default case for the sake of this example. We'll include a print statement just for confirmation that our function is being called. In practice, you could do any modifications here with access to parameter values, distributions, synthetic models, and observations.
End of explanation
b.run_solver('emcee_solver',
custom_lnprobability_callable=default_lnprob,
niters=1,
solution='emcee_sol_custom_lnprob', overwrite=True)
Explanation: run_solver
In order to swap out the default cost function with our custom cost function, we must pass the function itself to custom_lnprobability_callable when calling b.run_solver
End of explanation |
2,398 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Scapy in 15 minutes (or longer)
Guillaume Valadon & Pierre Lalet
Scapy is a powerful Python-based interactive packet manipulation program and library. It can be used to forge or decode packets for a wide number of protocols, send them on the wire, capture them, match requests and replies, and much more.
This iPython notebook provides a short tour of the main Scapy features. It assumes that you are familiar with networking terminology. All examples where built using the development version from https
Step1: 2_ Adanced firewalking using IP options is sometimes useful to perform network enumeration. Here is more complicate one-liner
Step2: Now that, we've got your attention, let's start the tutorial !
Quick setup
The easiest way to try Scapy is to clone the github repository, then launch the run_scapy script as root. The following examples can be pasted on the Scapy prompt. There is no need to install any external Python modules.
```shell
git clone https
Step3: First steps
With Scapy, each network layer is a Python class.
The '/' operator is used to bind layers together. Let's put a TCP segment on top of IP and assign it to the packet variable, then stack it on top of Ethernet.
Step4: This last output displays the packet summary. Here, Scapy automatically filled the Ethernet type as well as the IP protocol field.
Protocol fields can be listed using the ls() function
Step5: Let's create a new packet to a specific IP destination. With Scapy, each protocol field can be specified. As shown in the ls() output, the interesting field is dst.
Scapy packets are objects with some useful methods, such as summary().
Step6: There are not many differences with the previous example. However, Scapy used the specific destination to perform some magic tricks !
Using internal mechanisms (such as DNS resolution, routing table and ARP resolution), Scapy has automatically set fields necessary to send the packet. This fields can of course be accessed and displayed.
Step7: Scapy uses default values that work most of the time. For example, TCP() is a SYN segment to port 80.
Step8: Moreover, Scapy has implicit packets. For example, they are useful to make the TTL field value vary from 1 to 5 to mimic traceroute.
Step9: Sending and receiving
Currently, you know how to build packets with Scapy. The next step is to send them over the network !
The sr1() function sends a packet and return the corresponding answer. srp1() does the same for layer two packets, i.e. Ethernet. If you are only interested in sending packets send() is your friend.
As an example, we can use the DNS protocol to get www.example.com IPv4 address.
Step10: Another alternative is the sr() function. Like srp1(), the sr1() function can be used for layer 2 packets.
Step11: sr() sent a list of packets, and returns two variables, here r and u, where
Step12: With Scapy, list of packets, such as r or u, can be easily written to, or read from PCAP files.
Step13: Sniffing the network is a straightforward as sending and receiving packets. The sniff() function returns a list of Scapy packets, that can be manipulated as previously described.
Step14: sniff() has many arguments. The prn one accepts a function name that will be called on received packets. Using the lambda keyword, Scapy could be used to mimic the tshark command behavior.
Step15: Alternatively, Scapy can use OS sockets to send and receive packets. The following example assigns an UDP socket to a Scapy StreamSocket, which is then used to query www.example.com IPv4 address.
Unlike other Scapy sockets, StreamSockets do not require root privileges.
Step16: Visualization
Parts of the following examples require the matplotlib module.
With srloop(), we can send 100 ICMP packets to 8.8.8.8 and 8.8.4.4.
Step17: Then we can use the results to plot the IP id values.
Step18: The str() constructor can be used to "build" the packet's bytes as they would be sent on the wire.
Step19: Since some people cannot read this representation, Scapy can
Step20: "hexdump" the packet's bytes
Step21: dump the packet, layer by layer, with the values for each field
Step22: render a pretty and handy dissection of the packet
Step23: Scapy has a traceroute() function, which basically runs a sr(IP(ttl=(1..30)) and creates a TracerouteResult object, which is a specific subclass of SndRcvList().
Step24: The result can be plotted with .world_trace() (this requires GeoIP module and data, from MaxMind)
Step25: The PacketList.make_table() function can be very helpful. Here is a simple "port scanner"
Step26: Implementing a new protocol
Scapy can be easily extended to support new protocols.
The following example defines DNS over TCP. The DNSTCP class inherits from Packet and defines two field
Step27: This new packet definition can be direcly used to build a DNS message over TCP.
Step28: Modifying the previous StreamSocket example to use TCP allows to use the new DNSCTP layer easily.
Step29: Scapy as a module
So far, Scapy was only used from the command line. It is also a Python module than can be used to build specific network tools, such as ping6.py
Step30: Answering machines
A lot of attack scenarios look the same
Step31: Cheap Man-in-the-middle with NFQUEUE
NFQUEUE is an iptables target than can be used to transfer packets to userland process. As a nfqueue module is available in Python, you can take advantage of this Linux feature to perform Scapy based MiTM.
This example intercepts ICMP Echo request messages sent to 8.8.8.8, sent with the ping command, and modify their sequence numbers. In order to pass packets to Scapy, the following iptable command put packets into the NFQUEUE #2807
Step32: Automaton
When more logic is needed, Scapy provides a clever way abstraction to define an automaton. In a nutshell, you need to define an object that inherits from Automaton, and implement specific methods | Python Code:
send(IP(dst="1.2.3.4")/TCP(dport=502, options=[("MSS", 0)]))
Explanation: Scapy in 15 minutes (or longer)
Guillaume Valadon & Pierre Lalet
Scapy is a powerful Python-based interactive packet manipulation program and library. It can be used to forge or decode packets for a wide number of protocols, send them on the wire, capture them, match requests and replies, and much more.
This iPython notebook provides a short tour of the main Scapy features. It assumes that you are familiar with networking terminology. All examples where built using the development version from https://github.com/secdev/scapy, and tested on Linux. They should work as well on OS X, and other BSD.
The current documentation is available on http://scapy.readthedocs.io/ !
Scapy eases network packets manipulation, and allows you to forge complicated packets to perform advanced tests. As a teaser, let's have a look a two examples that are difficult to express without Scapy:
1_ Sending a TCP segment with maximum segment size set to 0 to a specific port is an interesting test to perform against embedded TCP stacks. It can be achieved with the following one-liner:
End of explanation
ans = sr([IP(dst="8.8.8.8", ttl=(1, 8), options=IPOption_RR())/ICMP(seq=RandShort()), IP(dst="8.8.8.8", ttl=(1, 8), options=IPOption_Traceroute())/ICMP(seq=RandShort()), IP(dst="8.8.8.8", ttl=(1, 8))/ICMP(seq=RandShort())], verbose=False, timeout=3)[0]
ans.make_table(lambda (x, y): (", ".join(z.summary() for z in x[IP].options) or '-', x[IP].ttl, y.sprintf("%IP.src% %ICMP.type%")))
Explanation: 2_ Adanced firewalking using IP options is sometimes useful to perform network enumeration. Here is more complicate one-liner:
End of explanation
from scapy.all import *
Explanation: Now that, we've got your attention, let's start the tutorial !
Quick setup
The easiest way to try Scapy is to clone the github repository, then launch the run_scapy script as root. The following examples can be pasted on the Scapy prompt. There is no need to install any external Python modules.
```shell
git clone https://github.com/secdev/scapy --depth=1
sudo ./run_scapy
Welcome to Scapy (2.3.2-dev)
```
Note: iPython users must import scapy as follows
End of explanation
packet = IP()/TCP()
Ether()/packet
Explanation: First steps
With Scapy, each network layer is a Python class.
The '/' operator is used to bind layers together. Let's put a TCP segment on top of IP and assign it to the packet variable, then stack it on top of Ethernet.
End of explanation
>>> ls(IP, verbose=True)
version : BitField (4 bits) = (4)
ihl : BitField (4 bits) = (None)
tos : XByteField = (0)
len : ShortField = (None)
id : ShortField = (1)
flags : FlagsField (3 bits) = (0)
MF, DF, evil
frag : BitField (13 bits) = (0)
ttl : ByteField = (64)
proto : ByteEnumField = (0)
chksum : XShortField = (None)
src : SourceIPField (Emph) = (None)
dst : DestIPField (Emph) = (None)
options : PacketListField = ([])
Explanation: This last output displays the packet summary. Here, Scapy automatically filled the Ethernet type as well as the IP protocol field.
Protocol fields can be listed using the ls() function:
End of explanation
p = Ether()/IP(dst="www.secdev.org")/TCP()
p.summary()
Explanation: Let's create a new packet to a specific IP destination. With Scapy, each protocol field can be specified. As shown in the ls() output, the interesting field is dst.
Scapy packets are objects with some useful methods, such as summary().
End of explanation
print p.dst # first layer that has an src field, here Ether
print p[IP].src # explicitly access the src field of the IP layer
# sprintf() is a useful method to display fields
print p.sprintf("%Ether.src% > %Ether.dst%\n%IP.src% > %IP.dst%")
Explanation: There are not many differences with the previous example. However, Scapy used the specific destination to perform some magic tricks !
Using internal mechanisms (such as DNS resolution, routing table and ARP resolution), Scapy has automatically set fields necessary to send the packet. This fields can of course be accessed and displayed.
End of explanation
print p.sprintf("%TCP.flags% %TCP.dport%")
Explanation: Scapy uses default values that work most of the time. For example, TCP() is a SYN segment to port 80.
End of explanation
[p for p in IP(ttl=(1,5))/ICMP()]
Explanation: Moreover, Scapy has implicit packets. For example, they are useful to make the TTL field value vary from 1 to 5 to mimic traceroute.
End of explanation
sr1(IP(dst="8.8.8.8")/UDP()/DNS(qd=DNSQR()))
p[DNS].an
Explanation: Sending and receiving
Currently, you know how to build packets with Scapy. The next step is to send them over the network !
The sr1() function sends a packet and return the corresponding answer. srp1() does the same for layer two packets, i.e. Ethernet. If you are only interested in sending packets send() is your friend.
As an example, we can use the DNS protocol to get www.example.com IPv4 address.
End of explanation
r, u = srp(Ether()/IP(dst="8.8.8.8", ttl=(5,10))/UDP()/DNS(rd=1, qd=DNSQR(qname="www.example.com")))
r, u
Explanation: Another alternative is the sr() function. Like srp1(), the sr1() function can be used for layer 2 packets.
End of explanation
# Access the first tuple
print r[0][0].summary() # the packet sent
print r[0][1].summary() # the answer received
# Access the ICMP layer. Scapy received a time-exceeded error message
r[0][1][ICMP]
Explanation: sr() sent a list of packets, and returns two variables, here r and u, where:
1. r is a list of results (i.e tuples of the packet sent and its answer)
2. u is a list of unanswered packets
End of explanation
wrpcap("scapy.pcap", r)
pcap_p = rdpcap("scapy.pcap")
pcap_p[0]
Explanation: With Scapy, list of packets, such as r or u, can be easily written to, or read from PCAP files.
End of explanation
s = sniff(count=2)
s
Explanation: Sniffing the network is a straightforward as sending and receiving packets. The sniff() function returns a list of Scapy packets, that can be manipulated as previously described.
End of explanation
sniff(count=2, prn=lambda p: p.summary())
Explanation: sniff() has many arguments. The prn one accepts a function name that will be called on received packets. Using the lambda keyword, Scapy could be used to mimic the tshark command behavior.
End of explanation
import socket
sck = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) # create an UDP socket
sck.connect(("8.8.8.8", 53)) # connect to 8.8.8.8 on 53/UDP
# Create the StreamSocket and gives the class used to decode the answer
ssck = StreamSocket(sck)
ssck.basecls = DNS
# Send the DNS query
ssck.sr1(DNS(rd=1, qd=DNSQR(qname="www.example.com")))
Explanation: Alternatively, Scapy can use OS sockets to send and receive packets. The following example assigns an UDP socket to a Scapy StreamSocket, which is then used to query www.example.com IPv4 address.
Unlike other Scapy sockets, StreamSockets do not require root privileges.
End of explanation
ans, unans = srloop(IP(dst=["8.8.8.8", "8.8.4.4"])/ICMP(), inter=.1, timeout=.1, count=100, verbose=False)
Explanation: Visualization
Parts of the following examples require the matplotlib module.
With srloop(), we can send 100 ICMP packets to 8.8.8.8 and 8.8.4.4.
End of explanation
%matplotlib inline
ans.multiplot(lambda (x, y): (y[IP].src, (y.time, y[IP].id)), plot_xy=True)
Explanation: Then we can use the results to plot the IP id values.
End of explanation
pkt = IP() / UDP() / DNS(qd=DNSQR())
print repr(str(pkt))
Explanation: The str() constructor can be used to "build" the packet's bytes as they would be sent on the wire.
End of explanation
print pkt.summary()
Explanation: Since some people cannot read this representation, Scapy can:
- give a summary for a packet
End of explanation
hexdump(pkt)
Explanation: "hexdump" the packet's bytes
End of explanation
pkt.show()
Explanation: dump the packet, layer by layer, with the values for each field
End of explanation
pkt.canvas_dump()
Explanation: render a pretty and handy dissection of the packet
End of explanation
ans, unans = traceroute('www.secdev.org', maxttl=15)
Explanation: Scapy has a traceroute() function, which basically runs a sr(IP(ttl=(1..30)) and creates a TracerouteResult object, which is a specific subclass of SndRcvList().
End of explanation
ans.world_trace()
Explanation: The result can be plotted with .world_trace() (this requires GeoIP module and data, from MaxMind)
End of explanation
ans = sr(IP(dst=["scanme.nmap.org", "nmap.org"])/TCP(dport=[22, 80, 443, 31337]), timeout=3, verbose=False)[0]
ans.extend(sr(IP(dst=["scanme.nmap.org", "nmap.org"])/UDP(dport=53)/DNS(qd=DNSQR()), timeout=3, verbose=False)[0])
ans.make_table(lambda (x, y): (x[IP].dst, x.sprintf('%IP.proto%/{TCP:%r,TCP.dport%}{UDP:%r,UDP.dport%}'), y.sprintf('{TCP:%TCP.flags%}{ICMP:%ICMP.type%}')))
Explanation: The PacketList.make_table() function can be very helpful. Here is a simple "port scanner":
End of explanation
class DNSTCP(Packet):
name = "DNS over TCP"
fields_desc = [ FieldLenField("len", None, fmt="!H", length_of="dns"),
PacketLenField("dns", 0, DNS, length_from=lambda p: p.len)]
# This method tells Scapy that the next packet must be decoded with DNSTCP
def guess_payload_class(self, payload):
return DNSTCP
Explanation: Implementing a new protocol
Scapy can be easily extended to support new protocols.
The following example defines DNS over TCP. The DNSTCP class inherits from Packet and defines two field: the length, and the real DNS message. The length_of and length_from arguments link the len and dns fields together. Scapy will be able to automatically compute the len value.
End of explanation
# Build then decode a DNS message over TCP
DNSTCP(str(DNSTCP(dns=DNS())))
Explanation: This new packet definition can be direcly used to build a DNS message over TCP.
End of explanation
import socket
sck = socket.socket(socket.AF_INET, socket.SOCK_STREAM) # create an TCP socket
sck.connect(("8.8.8.8", 53)) # connect to 8.8.8.8 on 53/TCP
# Create the StreamSocket and gives the class used to decode the answer
ssck = StreamSocket(sck)
ssck.basecls = DNSTCP
# Send the DNS query
ssck.sr1(DNSTCP(dns=DNS(rd=1, qd=DNSQR(qname="www.example.com"))))
Explanation: Modifying the previous StreamSocket example to use TCP allows to use the new DNSCTP layer easily.
End of explanation
from scapy.all import *
import argparse
parser = argparse.ArgumentParser(description="A simple ping6")
parser.add_argument("ipv6_address", help="An IPv6 address")
args = parser.parse_args()
print sr1(IPv6(dst=args.ipv6_address)/ICMPv6EchoRequest(), verbose=0).summary()
Explanation: Scapy as a module
So far, Scapy was only used from the command line. It is also a Python module than can be used to build specific network tools, such as ping6.py:
End of explanation
# Specify the Wi-Fi monitor interface
#conf.iface = "mon0" # uncomment to test
# Create an answering machine
class ProbeRequest_am(AnsweringMachine):
function_name = "pram"
# The fake mac of the fake access point
mac = "00:11:22:33:44:55"
def is_request(self, pkt):
return Dot11ProbeReq in pkt
def make_reply(self, req):
rep = RadioTap()
# Note: depending on your Wi-Fi card, you might need a different header than RadioTap()
rep /= Dot11(addr1=req.addr2, addr2=self.mac, addr3=self.mac, ID=RandShort(), SC=RandShort())
rep /= Dot11ProbeResp(cap="ESS", timestamp=time.time())
rep /= Dot11Elt(ID="SSID",info="Scapy !")
rep /= Dot11Elt(ID="Rates",info='\x82\x84\x0b\x16\x96')
rep /= Dot11Elt(ID="DSset",info=chr(10))
OK,return rep
# Start the answering machine
#ProbeRequest_am()() # uncomment to test
Explanation: Answering machines
A lot of attack scenarios look the same: you want to wait for a specific packet, then send an answer to trigger the attack.
To this extent, Scapy provides the AnsweringMachine object. Two methods are especially useful:
1. is_request(): return True if the pkt is the expected request
2. make_reply(): return the packet that must be sent
The following example uses Scapy Wi-Fi capabilities to pretend that a "Scapy !" access point exists.
Note: your Wi-Fi interface must be set to monitor mode !
End of explanation
from scapy.all import *
import nfqueue, socket
def scapy_cb(i, payload):
s = payload.get_data() # get and parse the packet
p = IP(s)
# Check if the packet is an ICMP Echo Request to 8.8.8.8
if p.dst == "8.8.8.8" and ICMP in p:
# Delete checksums to force Scapy to compute them
del(p[IP].chksum, p[ICMP].chksum)
# Set the ICMP sequence number to 0
p[ICMP].seq = 0
# Let the modified packet go through
ret = payload.set_verdict_modified(nfqueue.NF_ACCEPT, str(p), len(p))
else:
# Accept all packets
payload.set_verdict(nfqueue.NF_ACCEPT)
# Get an NFQUEUE handler
q = nfqueue.queue()
# Set the function that will be call on each received packet
q.set_callback(scapy_cb)
# Open the queue & start parsing packes
q.fast_open(2807, socket.AF_INET)
q.try_run()
Explanation: Cheap Man-in-the-middle with NFQUEUE
NFQUEUE is an iptables target than can be used to transfer packets to userland process. As a nfqueue module is available in Python, you can take advantage of this Linux feature to perform Scapy based MiTM.
This example intercepts ICMP Echo request messages sent to 8.8.8.8, sent with the ping command, and modify their sequence numbers. In order to pass packets to Scapy, the following iptable command put packets into the NFQUEUE #2807:
$ sudo iptables -I OUTPUT --destination 8.8.8.8 -p icmp -o eth0 -j NFQUEUE --queue-num 2807
End of explanation
class TCPScanner(Automaton):
@ATMT.state(initial=1)
def BEGIN(self):
pass
@ATMT.state()
def SYN(self):
print "-> SYN"
@ATMT.state()
def SYN_ACK(self):
print "<- SYN/ACK"
raise self.END()
@ATMT.state()
def RST(self):
print "<- RST"
raise self.END()
@ATMT.state()
def ERROR(self):
print "!! ERROR"
raise self.END()
@ATMT.state(final=1)
def END(self):
pass
@ATMT.condition(BEGIN)
def condition_BEGIN(self):
raise self.SYN()
@ATMT.condition(SYN)
def condition_SYN(self):
if random.randint(0, 1):
raise self.SYN_ACK()
else:
raise self.RST()
@ATMT.timeout(SYN, 1)
def timeout_SYN(self):
raise self.ERROR()
TCPScanner().run()
TCPScanner().run()
Explanation: Automaton
When more logic is needed, Scapy provides a clever way abstraction to define an automaton. In a nutshell, you need to define an object that inherits from Automaton, and implement specific methods:
- states: using the @ATMT.state decorator. They usually do nothing
- conditions: using the @ATMT.condition and @ATMT.receive_condition decorators. They describe how to go from one state to another
- actions: using the ATMT.action decorator. They describe what to do, like sending a back, when changing state
The following example does nothing more than trying to mimic a TCP scanner:
End of explanation |
2,399 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 Google LLC. All Rights Reserved.
Step1: RLDS
Step2: Import Modules
Step3: Load dataset
We can load the human dataset from the Panda Pick Place Can task of the Robosuite collection in TFDS. In these examples, we are assuming that certain fields are present in the steps, so datasets from different tasks will not be compatible.
Step7: Learning from Demonstrations or Offline RL
We consider the setup where an agent needs to solve a task specified by a reward $r$. We assume a dataset of episodes with the corresponding rewards is available for training. This includes
Step9: Absorbing Terminal States in Imitation Learning
Imitation learning is the setup where an agent tries to imitate a behavior, as defined by some sample episodes of that behavior.
In particular, the reward is not specified.
The dataset processing pipeline requires all the different pieces seen in the learning from demonstrations setup (create a train split, assemble the observation, ...) but also has some specifics.
One specific is related to the particular role of the terminal state in imitation learning.
While in standard RL tasks, looping over the terminal states only brings zero in terms of reward, in imitation learning, making this assumption of zero reward for transitions from a terminal state to the same terminal state induces some bias in algorithms like GAIL.
One way to counter this bias was proposed in 1. It consists in learning the reward value of the transition from the absorbing state to itself.
Implementation wise, to tell a terminal state from another state, an absorbing bit is added to the observation (1 for a terminal state, 0 for a regular state). The dataset is also augmented with terminal state to terminal state transitions so the agent can learn from those transitions.
Step11: Offline Analysis
One significant use case we envision for RLDS is the offline analysis of collected datasets.
There is no standard offline analysis procedure as what is possible is only limited by the imagination of the users. We expose in this section a fictitious use case to illustrate how custom tags stored in a RL dataset can be processed as part of an RLDS pipeline.
Let's assume we want to generate an histogram of the returns of the episodes present in the provided dataset of human episodes on the robosuite PickPlaceCan environment. This dataset holds episodes of fixed length of size 400 but also has a tag to indicate the actual end of the task.
We consider here the histogram of returns of the variable length episodes ending on the completion tag. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 Google LLC. All Rights Reserved.
End of explanation
!pip install rlds[tensorflow]
!pip install tfds-nightly --upgrade
!pip install envlogger
!apt-get install libgmp-dev
Explanation: RLDS: Examples
This colab provides some examples of RLDS usage based on real use cases. If you are looking for an introduction to RLDS, see the RLDS tutorial in Google Colab.
<table class="tfo-notebook-buttons" align="left">
<td>
<a href="https://colab.research.google.com/github/google-research/rlds/blob/main/rlds/examples/rlds_examples.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Run In Google Colab"/></a>
</td>
</table>
Install Modules
End of explanation
import functools
import rlds
import tensorflow.compat.v2 as tf
import tensorflow_datasets as tfds
Explanation: Import Modules
End of explanation
dataset_config = 'human_dc29b40a' # @param { isTemplate : true}
dataset_name = f'robosuite_panda_pick_place_can/{dataset_config}'
num_episodes_to_load = 30 # @param { isTemplate: true}
Explanation: Load dataset
We can load the human dataset from the Panda Pick Place Can task of the Robosuite collection in TFDS. In these examples, we are assuming that certain fields are present in the steps, so datasets from different tasks will not be compatible.
End of explanation
K = 5 # @param { isTemplate: true}
buffer_size = 30 # @param { isTemplate: true}
dataset = tfds.load(dataset_name, split=f'train[:{num_episodes_to_load}]')
dataset = dataset.shuffle(buffer_size, seed=42, reshuffle_each_iteration=False)
dataset = dataset.take(K)
def prepare_observation(step):
Filters the obseravtion to only keep the state and flattens it.
observation_names = ['robot0_proprio-state', 'object-state']
step[rlds.OBSERVATION] = tf.concat(
[step[rlds.OBSERVATION][key] for key in observation_names], axis=-1)
return step
dataset = rlds.transformations.map_nested_steps(dataset, prepare_observation)
def batch_to_transition(batch):
Converts a pair of consecutive steps to a custom transition format.
return {'s_cur': batch[rlds.OBSERVATION][0],
'a': batch[rlds.ACTION][0],
'r': batch[rlds.REWARD][0],
's_next': batch[rlds.OBSERVATION][1]}
def make_transition_dataset(episode):
Converts an episode of steps to a dataset of custom transitions.
# Create a dataset of 2-step sequences with overlap of 1.
batched_steps = rlds.transformations.batch(episode[rlds.STEPS], size=2, shift=1)
return batched_steps.map(batch_to_transition)
transitions_ds = dataset.flat_map(make_transition_dataset)
Explanation: Learning from Demonstrations or Offline RL
We consider the setup where an agent needs to solve a task specified by a reward $r$. We assume a dataset of episodes with the corresponding rewards is available for training. This includes:
* The ORL setup [[1], 2] where the agent is trained solely from a dataset of episodes collected in the environment.
* The LfD setup [[4], [5], [6], [7]] where the agent can also interact with the environment.
Using one of the two provided datasets on the Robosuite PickPlaceCan environment, a typical RLDS pipeline would include the following steps:
sample $K$ episodes from the dataset so the performance of the trained agent could be expressed as a function of the number of available episodes.
combine the observations used as an input of the agent. The Robosuite datasets include many fields in the observations and one could try to train the agent from the state or form the visual observations for example.
finally, convert the dataset of episodes into a dataset of transitions that can be consumed by algorithms such as SAC or TD3.
End of explanation
def duplicate_terminal_step(episode):
Duplicates the terminal step if the episode ends in one. Noop otherwise.
return rlds.transformations.concat_if_terminal(
episode, make_extra_steps=tf.data.Dataset.from_tensors)
def convert_to_absorbing_state(step):
padding = step[rlds.IS_TERMINAL]
if step[rlds.IS_TERMINAL]:
step[rlds.OBSERVATION] = tf.zeros_like(step[rlds.OBSERVATION])
step[rlds.ACTION] = tf.zeros_like(step[rlds.ACTION])
# This is no longer a terminal state as the episode loops indefinitely.
step[rlds.IS_TERMINAL] = False
step[rlds.IS_LAST] = False
# Add the absorbing bit to the observation.
step[rlds.OBSERVATION] = tf.concat([step[rlds.OBSERVATION], [padding]], 0)
return step
absorbing_state_ds = rlds.transformations.apply_nested_steps(
dataset, duplicate_terminal_step)
absorbing_state_ds = rlds.transformations.map_nested_steps(
absorbing_state_ds, convert_to_absorbing_state)
Explanation: Absorbing Terminal States in Imitation Learning
Imitation learning is the setup where an agent tries to imitate a behavior, as defined by some sample episodes of that behavior.
In particular, the reward is not specified.
The dataset processing pipeline requires all the different pieces seen in the learning from demonstrations setup (create a train split, assemble the observation, ...) but also has some specifics.
One specific is related to the particular role of the terminal state in imitation learning.
While in standard RL tasks, looping over the terminal states only brings zero in terms of reward, in imitation learning, making this assumption of zero reward for transitions from a terminal state to the same terminal state induces some bias in algorithms like GAIL.
One way to counter this bias was proposed in 1. It consists in learning the reward value of the transition from the absorbing state to itself.
Implementation wise, to tell a terminal state from another state, an absorbing bit is added to the observation (1 for a terminal state, 0 for a regular state). The dataset is also augmented with terminal state to terminal state transitions so the agent can learn from those transitions.
End of explanation
def placed_tag_is_set(step):
return tf.not_equal(tf.math.count_nonzero(step['tag:placed']),0)
def compute_return(steps):
Computes the return of the episode up to the 'placed' tag.
# Truncate the episode after the placed tag.
steps = rlds.transformations.truncate_after_condition(
steps, truncate_condition=placed_tag_is_set)
return rlds.transformations.sum_dataset(steps, lambda step: step[rlds.REWARD])
returns_ds = dataset.map(lambda episode: compute_return(episode[rlds.STEPS]))
Explanation: Offline Analysis
One significant use case we envision for RLDS is the offline analysis of collected datasets.
There is no standard offline analysis procedure as what is possible is only limited by the imagination of the users. We expose in this section a fictitious use case to illustrate how custom tags stored in a RL dataset can be processed as part of an RLDS pipeline.
Let's assume we want to generate an histogram of the returns of the episodes present in the provided dataset of human episodes on the robosuite PickPlaceCan environment. This dataset holds episodes of fixed length of size 400 but also has a tag to indicate the actual end of the task.
We consider here the histogram of returns of the variable length episodes ending on the completion tag.
End of explanation |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.